News this Week

Science  13 Jun 2008:
Vol. 320, Issue 5882, pp. 1040

    Changes in Peer Review Target Young Scientists, Heavyweights

    1. Jocelyn Kaiser

    After a year of gathering advice on how to improve its overloaded peer-review system, the U.S. National Institutes of Health (NIH) last week unveiled a plan to ease the workload on both applicants and reviewers and to help young investigators. The changes incorporate many recommendations from two advisory committees. But NIH rejected a suggestion aimed at eliminating an apparent bias favoring researchers who resubmit their grant applications after being turned down.

    NIH Director Elias Zerhouni asked internal and external advisory working groups last June to suggest ways to cope with a record number of applications, a continued flat budget, and a shortage of quality reviewers. NIH's response to their report (Science, 29 February, p. 1169) was presented last week to the director's advisory committee by the co-chair of both panels, Lawrence Tabak, director of the National Institute of Dental and Craniofacial Research. Cell biologist Keith Yamamoto of the University of California, San Francisco, who co-chaired the external group, says he's “disappointed” that NIH rejected the advice on resubmitted proposals but that he's “basically happy with” the overall response.

    NIH plans to shorten the allowed length of applications from 25 pages to 12, to focus more on the anticipated impact of the research and less on methods and other details. Proposals will be given scores on five criteria, not just an overall score, to provide clearer feedback. In addition, reviewers will score all applications, even those in the bottom of the pile that are now “triaged,” or set aside. At the end of a study section meeting, reviewers will rank applications to reduce ambiguity.

    NIH also followed suggestions for making reviewing more attractive to busy researchers. For example, reviewers can participate in 12 sessions over 6 years instead of 4 years and potentially share the duty with a colleague. Those receiving high-prestige awards from NIH or holding at least three basic research grants will be obliged to serve if asked. NIH will also offer a grant extension of up to $250,000—about 9 months' funding—to some 500 reviewers who have participated in at least 18 study section meetings. Tabak says this is intended to compensate them for time away from the bench spent preparing for and attending each meeting. NIH has not yet estimated the costs, but Tabak says “it is a zero-sum game” assuming most would have their grants renewed anyway.

    However, NIH officials nixed a key recommendation to jettison a system that allows researchers who don't win funding the first time around to resubmit the proposal up to two more times. Reviewers tend to favor these amended applications over first-time submissions, the working groups found, perhaps because the applicants responded to reviewers' comments or because reviewers know it's the investigator's last shot. Since the doubling of NIH's budget ended in 2003, the share of the total pot claimed by first-time submissions has shrunk from about 60% to 30% (see lower graph). To level the playing field, the two panels recommended that all proposals be considered “new” so that resubmitted ones get no particular advantage. NIH also rejected the proposal that fatally flawed applications be labeled “not recommended for resubmission,” instead leaving it up to reviewers to offer this advice in comments.

    Tough sledding.

    Resubmitted applications are claiming a growing share of the overall pool of funded R01 research grants (top), and the success rate for first-time proposals, which make up about two-thirds of all applications, has plummeted to single digits (above).


    These two proposals didn't go over well with the community. “There was a huge outcry about this. People feel like they need a second chance, a third chance,” Zerhouni says. “We're not comfortable with changing the system radically to reduce the number of resubmissions,” says Howard Garrison, public affairs director of the Federation of American Societies for Experimental Biology in Rockville, Maryland, which urged NIH to abandon these ideas.

    Instead, NIH plans to “carefully rebalance success rates among” the three types of submissions so as to fund a larger portion on the first round, according to Tabak. The burden will fall on each institute's advisory council.

    To help out young, first-time investigators, NIH will review their proposals separately within a study section. Officials plan to pilot setting a funding cutoff point for all early-stage proposals across all study sections. In 2007, Zerhouni set a goal of funding at least 1500 new investigators a year, about 150 more than in 2006. NIH also plans to double its funding for high-risk awards to about 1% of the agency's R01 budget.

    NIH also tempered a suggestion aimed at distributing scarce resources. The advisory panel had recommended that NIH require principal investigators to spend at least 20% of their time on each grant, creating a de facto cap of four grants. But Zerhouni says it is “not practical to have a hard-and-fast rule” because the amount of time scientists spend on non-research activities, such as teaching, varies by institution. Instead, applicants who already have $1 million in NIH funding will have to explain why they need more.

    NIH plans to implement the changes over the next 18 months.


    Design Changes Will Increase ITER Reactor's Cost

    1. Daniel Clery

    The €10 billion ITER fusion project hopes to demonstrate that a burning plasma can be controlled to produce useful energy. This month, ITER's funders face their own daunting task of keeping the project's budget under control, as scientists present a wish list of design changes.

    The changes are needed, say the researchers, because of advances in fusion science since the baseline reactor design was published in 2001. Although the wish list won't be publicly revealed until ITER's governing council meets in Japan on 17–18 June, insiders say the design tweaks are going to require more money, a fact that will not go down well with governments funding the project. “Where the pain level is for each [ITER] member is impossible to say,” says David Campbell, assistant head of ITER's department of fusion science and technology.

    The design review is not the council's only headache. The prices for steel and copper have skyrocketed this decade, and at the end of last year, the U.S. Congress zeroed out the country's ITER contribution from the 2008 budget. “The June council will be a key meeting,” says Campbell.

    ITER, or International Thermonuclear Experimental Reactor, has been on the drawing board since the mid-1980s. In 2001, the “final” design was ready, and, after much wrangling over the site, the governments of China, the European Union, Japan, Russia, South Korea, and the United States agreed to build it at Cadarache in southern France (Science, 1 July 2005, p. 28). (India joined the effort in 2006.) But before construction starts this year, ITER managers decided to ask researchers to review whether the design could be improved to give the project the best chance of meeting its goals (Science, 13 October 2006, p. 238).

    Making space.

    Construction workers clear the ground for ITER at Cadarache.


    Led by Günter Janeschitz of Germany's Karlsruhe research center and completed late last year, the redesign report is said to recommend 80 modifications, including changes to the plasma's microwave heating system, the complex arrangements of magnets to hold the plasma in place, and the divertor, a device around the bottom of the doughnut-shaped vessel that extracts spent fuel. ITER staff and the Science and Technology Advisory Committee—a panel of fusion experts appointed by ITER members—have been poring over the report, trying to separate the essential from the merely desirable, and estimating how much the changes will cost and their impact on the construction schedule. “All of these things cost money, … [so] we must be careful not to make a list so long that the bill shocks everyone,” says a senior European fusion researcher who asked not to be named.

    One of the most contentious recommendations concerns a system to control explosive releases of energy at the edges of the plasma called edge-localized modes (ELMs). If they are too large, ELMs can erode the wall of the reactor vessel and damage the divertor. The current ITER design already contains a system to control ELMs: rapidly firing a stream of frozen deuterium pellets into the plasma, each of which causes a mini-ELM that does no damage. But researchers using the DIII-D fusion reactor in San Diego, California, discovered another way: A weak magnetic field can make the edge of the plasma slightly leaky and take the sting out of ELMs.

    Such a system would be simpler and more efficient than pellet injection, but to create the magnetic field requires adding electromagnetic coils inside the reactor vessel—a major and expensive design change. Some think it's too soon to decide on such a major modification. “It's clear the field has an effect. But we don't yet understand the physics. It'll take 3 to 4 years to nail it down,” says Hartmut Zohm of the Max Planck Institute for Plasma Physics in Garching, Germany. Zohm and others suggest that a redesign could make space for the coils, with the decision on whether to install them deferred until later.

    ITER council members will also be eager to hear about the U.S. budget situation. The decision by Congress last December to remove the $149 million ITER funding from the fiscal year 2008 budget was considered unfortunate but not catastrophic by ITER insiders. “In 2009, we'll be ready to get running,” says Ned Sauthoff, head of the U.S. ITER effort, adding: “We're a family. We'll figure out how to get through this.” Last month, the U.S. Senate approved spending $55 million on ITER this year as part of a bill now before Congress to fund the military in Iraq and Afghanistan. The bill's fate is uncertain, however, as the Bush Administration opposes any additional domestic spending.

    The talk in Washington is that, with a presidential election looming, Congress will simply extend the current budget for another 6 months, leaving ITER out in the cold until April 2009. This could prompt some ITER members to query the United States' commitment to the project. Says ITER project construction leader Norbert Holtkamp: “If the U.S. doesn't restore funding in 2009, then we have a very tricky problem. We have to ensure that 2009 is okay.”


    Two Years On, a Mud Volcano Still Rages—and Bewilders

    1. Dennis Normile
    Still going strong.

    Gases waft from the crater of the mud volcano on Indonesia's Java Island.


    As a disastrous mud eruption on Indonesia's Java Island marks its second anniversary, the unprecedented event continues to stir debate about whether it resulted from an exploratory gas well drilling accident or a distant earthquake and how long it will last. The mud volcano, nicknamed Lusi, has been disgorging mud at a rate of up to 150,000 cubic meters per day. Officials are struggling to contain the effluent within dikes that are regularly breached and built anew farther out.

    In November 2006, ground deformation near the volcano ruptured a natural gas pipeline, killing 13 people. Lusi's mud has engulfed 750 hectares, destroying the homes of more than 30,000 people as well as factories and farms. “Sadly, it's not about simple technical problems anymore. It's more [about] economic and social and political problems,” says Satria Bijaksana, a geophysicist at Institut Teknologi Bandung.

    Lapindo Brantas, the oil and gas exploration company that operated the ill-starred gas well, and the government have promised compensation to landowners, but it has been slow in coming. Hundreds of families are still living in temporary shelters. In two separate cases, Indonesian courts have ruled the eruption a natural disaster, absolving Lapindo Brantas of liability. Ivan Valentina Agaung, a lawyer for Walhi, an Indonesian environmental group that filed one of the suits, says the group is appealing to a higher court in hopes of getting Lapindo Brantas to take responsibility for environmental rehabilitation.

    For scientists, Lusi is an intriguing specimen. A flurry of papers refines previous work on the eruption's dynamics and offers insights into the evolution of mud volcanoes. “This is a great opportunity. Nobody knows how other mud volcanoes looked when they were first appearing,” says Adriano Mazzini, a geologist at the University of Oslo.

    There is general agreement on the sequence of events. On 27 May 2006 at 5:54 a.m. local time, a magnitude-6.3 earthquake struck near Yogyakarta, in central Java. Between 5 and 8 a.m. the following day, Lapindo Brantas's gas well, which was being drilled 250 kilometers to the east near the town of Sidoarjo, began to flood. Workers shut the well's blowout preventer to keep the fluid from gushing out the top. They noted that pressure inside the well rose rapidly before gradually subsiding. Early in the morning of 29 May, mud began burbling out of the ground about 150 meters away.

    In a February 2007 article in GSA Today, Richard Davies, a geologist at the University of Durham, U.K., and colleagues claimed that the drillers penetrated a porous limestone formation about 2800 meters below the surface, inadvertently tapping into a highly pressurized aquifer. The borehole's casing didn't extend deep enough to protect rock from cracking under the pressure when the blowout preventer was shut, he concluded. Water then channeled its way to the surface, bringing mud with it (Science, 2 February 2007, p. 586).

    That's not how Mazzini and his colleagues see it. In the 30 September 2007 issue of Earth and Planetary Science Letters, they argued that the region's geological structures, its widespread pressurized hydrocarbon deposits, and a seismic fault created conditions “perfect for a mud volcano.” The only thing missing was a trigger, Mazzini says. The drilling might have contributed, he says, but he believes a more important factor was that the Yogyakarta earthquake reactivated the fault. At roughly the same time Lusi broke, mud also erupted from eight fissures along a 100-kilometer stretch of the fault line. “I don't think this is a coincidence,” he says.

    Global Positioning System (GPS) data and an obvious kink in a rail line show that ground along the fault has shifted up to half a meter since the Yogyakarta earthquake. But Michael Manga, a geologist at the University of California, Berkeley, who has studied how earthquakes trigger distant volcanic eruptions, contends that the quake was too small and too far away from the fault to influence it. In recent decades, he says, “there were many earthquakes that were both closer and bigger and by any measure more likely to have triggered an eruption.”

    In a paper published online on 5 June in Earth and Planetary Science Letters, Manga, Davies, and colleagues suggest that the fault is likely to be shifting in response to the movement of vast amounts of material to the surface. The mechanism is not clear. Co-author Rudi Rubiandini, a petroleum engineer at the Institut Teknologi Bandung, says the analysis “makes every other reason [for the eruption] impossible.” Most earth scientists agree that the well must have had some effect, says James Mori, a seismologist at Kyoto University in Japan. But he says researchers can't determine whether the volcano would have formed without the drilling.

    While sympathizing with Lusi's victims, geologists say they relish the rare opportunity to study a mud volcano's birth and evolution. GPS and satellite-based interferometric synthetic aperture radar data indicate that the surface near the volcano's vent is collapsing into a funnel shape, characteristic of sand draining from the top bulb of an hourglass. Davies and colleagues concluded in a paper published online on 21 May in Environmental Geology that between June 2006 and September 2007, the funnel's center sank at about 4 centimeters per day, which in 3 years would produce a sag of 44 meters. They also report that areas outside the funnel are rising, probably due to movement of the fault.
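    Davies's 3-year figure follows from a straight-line extrapolation of the daily rate; a minimal sketch, assuming the roughly 4-centimeter-per-day rate holds constant (the paper reports it only as an average):

```python
# Extrapolate the funnel's subsidence, assuming a constant daily rate.
rate_m_per_day = 0.04          # reported rate: about 4 centimeters per day
days = 3 * 365                 # three years
sag_m = rate_m_per_day * days
print(round(sag_m, 1))         # prints 43.8, matching the reported ~44 meters
```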

    Scientists are puzzling over other phenomena as well. Since March 2007, the flow has periodically stopped for hours or days only to resume with its previous vigor. The likely explanation, Davies says, is that the weight of mud at the surface is collapsing the vent deep underground. Pressure backs up until it breaks through the blockage. In addition, there have been 88 minieruptions of water and methane where the ground is subsiding. Rubiandini believes the subsidence is cracking open pressurized gas pockets. And along the fault, geysers of water have suddenly shot up in the middle of yards, rice paddies, and even within factories, probably due to the rearrangement of subsurface plumbing. “The volcano is taking on a life of its own,” Davies says.


    More than 30,000 people lost their homes to Lusi.


    How long this will go on, he says, is anybody's guess.



    1. Dennis Normile

    The mud volcano Lusi is unique in its longevity and the volume of material ejected. It may also be setting records for the number of failed attempts to plug it.

    Immediately after the 29 May 2006 eruption, Lapindo Brantas—the company whose exploratory drilling, some claim, triggered the eruption—pumped concrete into the well to try to stop the gush of hot, salty water from a subsurface aquifer. When that failed, the company brought in a consultant from Houston, Texas, who directed the drilling of two relief wells intended to intercept the original borehole and pump in high-density drilling mud to plug the leak. This effort was abandoned while the wells were still short of their target—also, reportedly, because Lapindo Brantas ran out of money.

    In February 2007, following a proposal from geophysicist Satria Bijaksana and two colleagues from Institut Teknologi Bandung, Lapindo Brantas started dropping into the vent clusters of concrete balls, 20 centimeters and 40 centimeters in diameter, roped together with steel cables. The objective, Bijaksana says, was “to reduce the sheer volume of mud coming out of the vent to a manageable level.” This effort was abandoned after 398 of a planned 1000 clusters had been dropped; a government agency that took over management of the disaster concluded that the balls were having little effect.

    The only hope of plugging Lusi is to drill another relief well to plug the original well at a point below where it was breached, says Rudi Rubiandini, a petroleum engineer at Institut Teknologi Bandung. He estimates that the well would cost $70 million to $100 million. But that is unlikely to happen, he says: “Our government now thinks this is a natural disaster and impossible to kill.”


    Scientists Race Against the Clock to Gauge Landslide Risk

    1. Richard Stone

    XIAO JIA QIAO, CHINA—In a vale ringed by mountains bearing the tan scars of numerous landslides, bulldozers are carving a diversion channel to relieve pressure from the rising waters of a blocked river. On 12 May, the Wenchuan earthquake sent about 2 million cubic meters of limestone rubble hurtling down a mountainside here, obliterating houses in Xiao Jia Qiao village and creating a 70-meter-high dam on the Chaping River. Near the dam, the Chaping's jade-green waters are choked with flotsam, including a few bloated pillows and a pair of red doors. In the last few weeks, the river, now a lake, has risen 50 meters, submerging houses along its banks. “I'm surprised how fast the water is coming up,” says Wei Fangqiang, a physical geographer at the Institute of Mountain Hazards and Environment (IMHE) in Chengdu, who first glimpsed the 300-meter-wide dam on 16 May.

    The gravest threat to survivors of the magnitude-7.9 Wenchuan earthquake, history shows, may be new lakes like this one. In 1933, a landslide dam formed by a magnitude-7.5 earthquake in Sichuan Province burst 45 days after the quake; the flood killed about 8000 people, some 2000 more than the number who died in the earthquake itself, says Wei.

    Rough-hewn floodgate.

    Workers sculpt a channel through debris blocking the Chaping River.


    A 150-person engineering brigade from Hubei Province has been working round the clock for 8 days digging a channel to bypass the dam. With heavy rain in the forecast, Wei says, the Chaping River should rise faster and reconnect with its downstream segment in a few days. If all goes according to plan, the channel should draw down the Chaping gradually, reducing the risk of the dam giving way and unleashing a torrent on people living in tents downstream.

    As perilous as the situation is at Xiao Jia Qiao, the landslide dam here is considered “medium risk,” one of five in this category; IMHE has classified seven others as high-risk. The most dangerous of all, says IMHE geomorphologist Cui Peng, was at Tangjiashan, where some 242 million cubic meters of the Jianjiang River had piled up behind a fragile earth barrier.

    The government had evacuated more than 100,000 people downstream in the Mianyang area and had a contingency plan to quickly evacuate a million more. The main objective has been to keep the flow rate through a diversion channel fast enough to compensate for the rain-fed Jianjiang River's rise. Army troops earlier this week fired rockets at boulders near the channel to try to boost the flow rate, according to the Xinhua News Agency. Water began moving through the channel, and officials declared a “decisive victory” this week in lowering the level of the lake.

    As Science went to press, 69,142 people were known to have perished and 17,551 were missing after the Wenchuan earthquake. Now that the search for survivors has ended, the overriding task is to provide adequate housing and food for more than 1.5 million people who lost homes in the quake and to guard against the spread of disease in the vulnerable displaced population.

    A few days after the initial shock, Cui's team struck out into the disaster zone to examine some of the debris dams (Science, 23 May, p. 996). In the meantime, crewless planes from the Institute of Remote Sensing Applications in Beijing imaged the area to help chart major landslides among the estimated several thousand triggered by the earthquake. A team led by Tang Huiming, an expert on geological hazards at China University of Geosciences in Wuhan, has found evidence of old landslides where fresh ones occurred. Still, the extent of the devastation is mind-boggling, says Tang: “I've never seen anything like this before.”

    In all, Cui and his colleagues have identified more than 100 landslide dams. Researchers with IMHE and the Chengdu Hydropower Survey and Design Institute zeroed in on 33 for detailed analysis. The Chinese government is attempting to divert water at the 12 dams deemed medium or high risk and four others. “It's quite difficult to say” how well the diversion channels will work, says Cui. “Nobody has very good experience for dealing with megalandslide dams.”

    Meanwhile, another threat is looming. Scores of landslides—nobody knows how many—have blocked gullies that are dry now but can fill with water during the rainy season. Many smaller dams are known from remote sensing, says Cui, but “only from a field survey can we say which are dangerous.” IMHE researchers plan to fan out around Sichuan to identify which of the clogged gullies have villages or temporary shelters for displaced people near their outlets. Wei was planning to deliver a report on this threat to Sichuan authorities earlier this week. “I'm worried that the blocked gullies could create a serious disaster,” he says.

    In the months to come, Cui and his IMHE colleagues will help determine where it is safe to rebuild homes in this shattered corner of Sichuan Province. Studying how the mountain slopes have changed is a critical piece of the government's reconstruction effort. “We have to figure out which places are suitable for people to live and which are too dangerous,” says Cui. With the huge number of survivors roughing it in tents and other temporary shelters, the scientists will have to work fast.


    Have Desert Researchers Discovered a Hidden Loop in the Carbon Cycle?

    1. Richard Stone
    Waiting to exhale?

    CO2 flux readings suggest that the Mojave Desert in Nevada is gulping carbon at the rate of a temperate forest.


    URUMQI, CHINA—When Li Yan began measuring carbon dioxide (CO2) in western China's Gubantonggut Desert in 2005, he thought his equipment had malfunctioned. Li, a plant ecophysiologist with the Chinese Academy of Sciences' Xinjiang Institute of Ecology and Geography in Urumqi, discovered that his plot was soaking up CO2 at night. His team ruled out the sparse vegetation as the CO2 sink. Li came to a surprising conclusion: The alkaline soil of Gubantonggut is socking away large quantities of CO2 in an inorganic form.

    A CO2-gulping desert in a remote corner of China may not be an isolated phenomenon. Halfway around the world, researchers have found that Nevada's Mojave Desert, square meter for square meter, absorbs about the same amount of CO2 as some temperate forests. The two sets of findings suggest that deserts are unsung players in the global carbon cycle. “Deserts are a larger sink for carbon dioxide than had previously been assumed,” says Lynn Fenstermaker, a remote sensing ecologist at the Desert Research Institute (DRI) in Las Vegas, Nevada, and a co-author of a paper on the Mojave findings published online last April in Global Change Biology.

    The effect could be huge: About 35% of Earth's land surface, or 5.2 billion hectares, is desert and semiarid ecosystems. If the Mojave readings represent an average CO2 uptake, then deserts and semiarid regions may be absorbing up to 5.2 billion tons of carbon a year—roughly half the amount emitted globally by burning fossil fuels, says John “Jay” Arnone, an ecologist in DRI's Reno lab and a co-author of the Mojave paper. But others point out that CO2 fluxes are notoriously difficult to measure and that it is necessary to take readings in other arid and semiarid regions to determine whether the Mojave and Gubantonggut findings are representative or anomalous.
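    Arnone's back-of-the-envelope figure follows directly from the Mojave flux: at roughly 100 grams of carbon per square meter per year, one hectare takes up one ton, so the hectare count and the tonnage coincide. A minimal sketch of the scaling, assuming (as the article cautions one should not yet) that the Mojave rate is representative of all arid land:

```python
# Scale the Mojave uptake rate to all desert and semiarid land.
uptake_g_per_m2_yr = 100      # Mojave flux, grams of carbon per m^2 per year
area_ha = 5.2e9               # desert + semiarid ecosystems, hectares
m2_per_ha = 1e4
total_g = uptake_g_per_m2_yr * area_ha * m2_per_ha
total_tons = total_g / 1e6    # one metric ton is 1e6 grams
print(f"{total_tons:.1e} tons of carbon per year")  # prints 5.2e+09
```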

    For now, some experts doubt that the world's most barren ecosystems are the long-sought missing carbon sink. “I'd be hugely surprised if this were the missing sink. If deserts are taking up a lot of carbon, it ought to be obvious,” says William Schlesinger, a biogeochemist at the Cary Institute of Ecosystem Studies in Millbrook, New York, who in the 1980s was among the first to examine carbon flux in deserts. Nevertheless, he says, both sets of findings are intriguing and “must be followed up.”

    Scientists have long struggled to balance Earth's carbon books. While atmospheric CO2 levels are rising rapidly, our planet absorbs more CO2 than can be accounted for. Researchers have searched high and low for this missing sink. It doesn't appear to be the oceans or forests—although the capacity of boreal forests to absorb CO2 was long underestimated. Deserts might be the least likely candidate. “You would think that seemingly lifeless places must be carbon neutral, or carbon sources,” says Mojave co-author Georg Wohlfahrt, an ecologist at the University of Innsbruck in Austria.

    About 20 kilometers north of Urumqi, clusters of shanties are huddled next to fields of hops, cotton, and grapes. Soon after the Communist victory over the Nationalists in 1949, soldiers released from active duty were dispatched across rural China, including vast Xinjiang Province, to farm the land. At the edge of the sprawling “222” soldier farm, which is home to hundreds of families, oasis fields end where the Gubantonggut begins. The Fukang Station of Desert Ecology, which Li directs, is situated at this transition between ecosystems.

    In recent years, average precipitation has increased in the Gubantonggut, and the dominant Tamarix shrubs are thriving. Li set out to measure the difference in CO2 absorption between oasis and desert soil. An automated flux chamber measured CO2 depletion a few centimeters above the soil in 24-hour intervals on select days in the growing season (from May to October) in 2005 and in 2006. The desert readings ranged from 62 to 622 grams of carbon per square meter per year. Li assumed that Tamarix and a biotic crust of lichen, moss, and cyanobacteria up to 5 centimeters thick are responsible for part of the uptake. To rule out an organic process in the soil, Li's team put several kilograms of soil in a pressure steam chamber to kill off any life forms and enzymes. CO2 absorption held steady, according to their report, posted online earlier this year in Environmental Geology.

    “The sterilization treatment was impressive,” says biogeochemist Pieter Tans, a climate change expert with the U.S. National Oceanic and Atmospheric Administration in Boulder, Colorado. “They may have found a significant effect, previously neglected, but I would like to see more evidence.” Indeed, the high end of the Urumqi CO2 flux estimates is off the charts. “That's more carbon uptake than our fastest growing southern forests. It's a huge number. I find it extremely hard to believe,” says Schlesinger, who nonetheless says the Chinese team's methodology looks sound.

    Missing sink?

    Tamarix shrubs are thriving in China's Gubantonggut Desert, but the soil itself may be socking away far more CO2 at night.


    At first, Li was flummoxed. Then, he says, he realized that deserts are “like a dry ocean.” The pH of oceans is falling gradually as they absorb CO2, forming carbonic acid. “I thought, ‘Why wouldn't this also happen in the soil?’” Whereas the ocean has a single surface for gas exchange, Li says, soil is a porous medium with a huge reactive surface area. One question, Tans notes, is why the desert soils would remain alkaline as they absorb CO2. Li suggests that ongoing salinization drives pH in the opposite direction, allowing for continual CO2 absorption. But where the carbon goes—whether it is stowed largely as calcium carbonate or other salts—is unknown, Li says. Schlesinger too is stumped: “It takes a long time for carbonate to build up in the soil,” he says. At the apparent rate of absorption in China, he says, “we'd be up to our ankles in carbon.”

    One possibility, DRI soil chemist Giles Marion speculates, is that at night, CO2 reacts with moisture in the soil and perhaps with dew to form carbonic acid, which dissolves calcium carbonate—a reaction that warmer temperatures would drive in reverse, releasing the CO2 again during the day. (Unlike most minerals, carbonates become more soluble at lower temperatures.) In that case, Marion says, Li's nighttime absorption would tell only half the story: “I would expect that over a year, there would be no significant increase in soil storage due to this process,” he says, as the dynamic of carbon sequestration in the soil would vary from season to season. Li agrees that this scenario is plausible but notes that his daytime measurements of CO2 flux did not negate the nighttime uptake.

    In any case, other researchers say, absorption alone cannot explain the substantial uptake in the Mojave. Wohlfahrt and his colleagues measured CO2 flux above the loamy sands of the Nevada Test Site, where the United States once tested its nuclear arsenal. From March 2005 to February 2007, the desert biome absorbed on average roughly 100 grams of carbon per square meter per year—comparable to temperate forests and grassland ecosystems—the team reported in its Global Change Biology paper.

    Three processes are probably involved in CO2 absorption, Wohlfahrt says: biotic crusts, alkaline soils, and expanded shrub cover due to increased average precipitation. “We currently do not have the data to say where exactly the carbon is going,” he says. Like the Urumqi team, Wohlfahrt and his colleagues observed CO2 absorption at night that cannot be attributed to photosynthesis. “I hope we can corroborate the Chinese findings in the Mojave,” he says. Arnone and others, however, believe that carbon storage in soil is minimal.

    Wohlfahrt suspects biotic crusts play a key role. “People have almost completely neglected what's going on with the crusts,” he says. Others are not so sure. “I'm mystified by the Mojave work. There is no way that all the CO2 absorption observed in these studies is due to biological crusts, as there are not enough of them active long enough to account for such a large sink,” says Jayne Belnap of the U.S. Geological Survey's Canyonlands Research Station in Moab, Utah. She and her colleagues have studied carbon uptake in the southern Utah desert, which has similar crust species. “We do not see any such results,” she says.

    Provided the surprising CO2 sink in the deserts is not a mirage, it may yet prove ephemeral. “We don't want to say that these ecosystems will continue to gain carbon at this rate forever,” Wohlfahrt says. The unexpected CO2 absorption may be due to a recent uptick in precipitation in many deserts that has fueled a visible surge in vegetation. If average annual rainfall levels in those deserts were to abate, that could release the stored carbon and lead to a more rapid buildup of atmospheric CO2—and possibly accelerate global warming.


    U.S. Climate Change Bill Dies, But the Energy Remains

    1. Jeffrey Mervis

    After weeks of preparation, the U.S. Senate failed to engage in a historic debate last week on how to reduce greenhouse gas emissions. But that hasn't stopped both sides from declaring victory in what amounts to a dry run for next year, under a new president and a new Congress.

    Scientific and environmental groups that see such legislation as a national priority say a Democratic proposal to put a price on carbon and create a trillion-dollar market in carbon credits—which would shift money from polluters to “green” companies, governments, and the public—has at least helped frame a debate they hope to win next year. In rebuttal, Republican opponents and the Bush Administration, which promised to veto it, believe they stood up against a badly flawed bill that would have crippled economic growth and cost families thousands of dollars.

    The actual cause of death for the Climate Security Act of 2008 (S.3036), ironically, was a failure by proponents to limit debate. Their inability to invoke cloture—which requires 60 votes in the 100-member body—meant that opponents would be able to postpone a vote indefinitely. That led Democratic leaders to pull the plug on 6 June. But supporters say that the 54 votes in favor of moving ahead with the legislation are themselves remarkable and provide a solid foundation on which to build.

    “Clearly, we knew we weren't going to get a bill this year,” admits Brendan Bell, Washington representative for the Union of Concerned Scientists (UCS), which helped organize a petition signed last month by 1700 scientists and economists calling for “swift and sharp cuts” in emissions. “But in 2 years, we've gone from people denying we have a problem and saying we need to study the issue to people saying, ‘Let's look at the details of how to address it.’”

    Of course, the details are supremely important. Within 24 hours of the vote, for example, 10 Democratic senators who supported ending debate said that the bill contained provisions that would have to be revised before they could vote for it. That's on top of dozens of amendments introduced by both sides of the aisle but never taken up by the Senate. “To be honest, we have a lot of work to do next year,” says an aide to one prominent Democrat. “We probably have only 40-some votes, and we need 60 if we're ever going to pass something.”

    Getting to a cloture-proof majority may mean a lot of tinkering with the bill, agrees Daniel Lashof, director of the climate center at the Natural Resources Defense Council in Washington, D.C.: “Last week was the first time that the Senate had really focused on climate legislation, and the amendments that people introduced showed their concerns about topics that we'll need to address.” He cited provisions to protect energy-intensive industries such as steelmaking, which would face competition from companies not constrained by any carbon-trading system, and to shield low-income families hit hard by higher energy prices.

    Although a climate change bill is unlikely to return to the Senate agenda this year, Congress could still set the tone of next year's debate, for example, by responding to the public outcry over $4-a-gallon gasoline. “They could go in the right or wrong direction,” says Lashof, “by either adopting a throwback response to drill our way out of high prices or by accelerating efforts to develop alternative fuels.”

    In fact, some advocates of sharp reductions in greenhouse gas emissions say that they are happy the Senate failed to act. “Certainly, we are much better off,” says James Hansen, director of NASA's Goddard Institute for Space Studies in New York City, who regards the UCS petition as too mild and who favors a carbon tax that would be returned to the public. “Giving most of the money away to special interests … is a terrible path to go down,” he says. Such policies risk triggering a taxpayer revolt, he says, which would derail any lasting solution to global warming.

  8. ASIA

    Nepal Counts on Science to Turn Struggling Country Around

    1. Jerry Guo*
    1. Jerry Guo is a writer in New Haven, Connecticut.

    KATHMANDU—Nepal's new leaders have a surprising strategy for making the poor Himalayan nation's transition from monarchy to republic a success: They plan to shower money on science. High on the agenda of Nepal's new legislative body, the Constituent Assembly, is to approve next month a $125 million budget for the Ministry of Environment, Science, and Technology (MEST)—a whopping 12-fold increase over 2007. “This is so much money that scientists may not [be able to] spend it all,” says science ministry senior adviser Devi Paudyal.

    Perhaps most remarkable is the source of the promised windfall: the Maoists, a group once labeled as terrorists that won the largest share of assembly seats in elections in April. In a manifesto published shortly before the election, the Maoists declared that “Without science, a country cannot develop.” Before launching a bloody, decade-long insurgency, the group's leader, Prachanda, had earned a degree in agricultural science from the Institute of Agriculture and Animal Science in Rampur and taught science in a prep school.

    Some in Nepal's tiny scientific community are cautiously optimistic. “Past governments were not aware about the value of science,” says botanist Dayananda Bajracharya, a science adviser to Girija Koirala, the current prime minister. “The new government has promised they will give more attention to science. Hopefully, they will keep their word.” Others say they will believe it when they see it. “Most of the political parties talk about these things, but when it comes to reality, the budget is always full,” says Pramod Jha, a botanist at Tribhuvan University in Kathmandu.

    Based on World Bank figures on research and development spending as a percentage of gross domestic product (GDP), Nepal ranks behind the island nation of Mauritius as well as Burundi, the country with the world's lowest per capita GDP. Nepal's first university, Tribhuvan, opened its doors only in 1959, and the Nepal Academy of Science and Technology (NAST) was established in 1982. One constraint on scientific development is the unchecked brain drain of Nepal's few rising science stars, says Bajracharya.

    Science for the masses.

    Maoist leader Prachanda has promised a big boost for R&D in Nepal.


    The Maoists plan to bet heavily on biotechnology, an area the previous government tried to nurture. Last year, NAST broke ground on a three-story biotech lab in Kathmandu that it hopes to complete by summer 2009; MEST plans to begin construction of a national Biotechnology Research and Development Center later this year. This fall, Tribhuvan, Nepal's top university, will open a graduate program in biotechnology.

    These efforts are primarily intended to exploit Nepal's biological riches. Scientists here in recent years have launched programs to find medicinal plants and pinpoint active compounds. But with scant tools for molecular analyses, “we haven't been able to do much,” says NAST Vice Chancellor Hom Bhattarai. “We want to get modern equipment.”

    With Nepal recently beset by gasoline and electricity shortages, a large portion of the supersized science budget will be devoted to research on clean energy, says Paudyal. One project the new government intends to fund is development of a variety of Jatropha curcas, a shrub used for biofuel, that is better acclimated to high altitude.

    In the long term, raising Nepal's science game will require reducing the country's appalling 51% illiteracy rate—the 15th highest in the world, according to the United Nations. “The public at large thinks science is too sophisticated for a country like Nepal,” says Bajracharya. It may take another (science) revolution to change that.


    Growing Pains for fMRI

    1. Greg Miller

    As the use of functional magnetic resonance imaging has exploded, some researchers say the field could use a dose of rigor. Will new experimental approaches come to the rescue?



    Last November, the op-ed page of the New York Times, which typically airs political controversies, managed to create one of its own. It published a column describing a study in which 20 undecided voters had their brain activity scanned by functional magnetic resonance imaging (fMRI) while viewing photographs and videos of the major candidates in the upcoming U.S. presidential election. The findings revealed “some voter impressions on which this election may well turn,” according to the authors, who included a political scientist, a neuroscientist, and several people affiliated with FKF Applied Research, a company based in Washington, D.C., that sells fMRI-based marketing studies.

    The column infuriated some neuroscientists and ignited an animated discussion in the imaging field. “It was really closer to astrology than it was to real science,” says Russell Poldrack of the University of California, Los Angeles (UCLA), who drafted a letter to the newspaper that was signed by 16 other cognitive neuroscientists and published 3 days later. “It epitomized everything that a lot of us feel is wrong about where certain parts of the field are going, which is throw someone in a scanner and tell a story about it.”

    Since its introduction in the early 1990s, fMRI has transformed neuroscience. Now in its teenage years, the fMRI field is still experiencing growing pains. Some cognitive neuroscientists say they're frustrated that many studies—including some of those that garner the most attention in the popular press—reveal little about the neural mechanisms of human cognition. “The problem right now with imaging is that doing experiments right is really, really hard, but getting pictures out is really, really easy,” says Steven Petersen, a veteran brain-imaging researcher at Washington University in St. Louis, Missouri.

    At the same time, there are signs that the field is maturing, as researchers confront the limitations of fMRI. Such efforts include painstaking experiments that match human fMRI data with analogous fMRI data and electrophysiological recordings of neural activity in monkeys, as well as new analytical methods capable of revealing information processing in the brain that would be impossible to detect with standard methods. “I think [these methods] are really going to revolutionize how we think about our data,” says Poldrack. They also have the potential to introduce more rigor into fMRI research—something that's badly needed, Poldrack says, otherwise, “people will start to see fMRI as neophrenology, just telling stories and not giving explanations.”

    Neuroimagers gone wild

    What irked Poldrack and others most about the Times's op-ed was the way the authors inferred particular mental states from the activation of particular brain regions: Activity in the anterior cingulate cortex indicated mixed feelings about Hillary Clinton, for example, whereas amygdala activation indicated “voter anxiety” about Republican candidate Mitt Romney.

    The basic problem, the objectors wrote in their letter, is that it's not possible to infer a particular mental state (such as anxiety) from the activation of a particular brain region (such as the amygdala). Although it's true that anxiety engages the amygdala, says co-signer Elizabeth Phelps, a cognitive neuroscientist at New York University, so do intense smells, sexually arousing images, and many other things. To conclude that Romney makes voters anxious based on amygdala activation alone is unjustified, Phelps says.

    The neuroscientist co-author on the op-ed piece, Marco Iacoboni of UCLA, stands by the column's conclusions as reasonable and says he's been surprised and stung by what he views as an overly harsh and hypocritical rebuke. After all, he points out, most of his critics use similar “reverse inferences” themselves.

    That's true, says Poldrack, and it's a problem the field needs to confront. He and others argue that reverse inferences are particularly common in newer fields such as social cognitive neuroscience and neuroeconomics (not to mention neuropolitics), fields in which researchers are still trying to identify the cognitive processes underlying the behaviors they study. As an example, Poldrack points to a widely cited paper that used fMRI to investigate brain activity in subjects pondering moral dilemmas (Science, 14 September 2001, p. 2105); some of the brain regions that lit up had been linked in previous studies to emotional and “rational” cognitive processes, and the authors concluded that these two types of processes are active, to different degrees, in different types of moral judgments. But the strength of such arguments hinges on how specifically a given brain area is linked to a given mental process. Poldrack points out, for example, that some of the “emotional” brain regions in the morality study have also been connected to memory and language—a caveat that is rarely mentioned in media coverage of the work (Science, 9 May, p. 734).

    Monkeying around

    The general public may be easily seduced by pretty images generated by fMRI (see sidebar), but even neuroscientists sometimes seem to fall under the spell and overlook the method's limitations. One constraint is the narrow sliver of the human experience that can be captured when a person has to keep his or her head still for long periods inside an fMRI scanner. Another is the resolution. Using fMRI to spy on neurons is something like using Cold War-era satellites to spy on people: Only large-scale activity is visible. With standard fMRI equipment, the smallest cube of brain tissue that can be imaged is generally a few millimeters on a side. Each such “voxel” (a mashup of volume and pixel) contains millions of neurons. And although neurons can fire hundreds of impulses per second, the fMRI signal—which indicates an increase in oxygenated blood bringing energy to active neurons—develops sluggishly, over several seconds. This makes fMRI a crude tool for investigating how circuits of intricately connected neurons do the computational work of cognition and behavior, says Roger Tootell, a neuroscientist at Harvard University. “fMRI is really good for telling you where to look,” he says, “but I don't think you can ever really come up with mechanisms.”

    Political blunder?

    The New York Times used this graphic, showing that U.S. presidential candidates Barack Obama and John McCain stimulated relatively little activity in the brains of undecided voters, to illustrate online a brain-imaging study described in an op-ed column last November.


    Tootell is one of a handful of researchers trying to circumvent such obstacles by combining human fMRI with monkey experiments. The general idea, he explains, is to follow up on the human findings by using fMRI to identify analogous regions of the monkey brain and then record the activity of individual neurons there with microelectrodes.

    In some cases, single neuron recordings have confirmed fMRI findings. In 2006, Tootell and colleagues reported microelectrode data showing that 97% of neurons in the monkey equivalent of the fusiform face area—a region of the temporal cortex that appears in human fMRI studies to respond selectively to images of faces—do indeed respond preferentially to faces (Science, 3 February 2006, p. 670). But Tootell says that more recent human fMRI experiments his group has done suggest that neurons in an adjacent “place” region in the temporal cortex respond preferentially to edges, not places per se. The researchers are planning monkey experiments to investigate the preferences of neurons in this region in greater detail.

    Such studies, he says, can also begin to reveal mechanisms of visual object processing in the brain, such as how “face” or “place” neurons acquire their selectivity by combining inputs from low-level neurons that respond to simpler features such as texture, curvature, and the orientation of lines. “It's a beautiful paradigm when you can bring it to bear,” Petersen says of the parallel human-monkey work. The drawback, he says, aside from the incredibly time-consuming experiments, is that it can't be applied to study many types of cognition—language, for example.

    There's a pattern here

    A very different approach to overcoming some of fMRI's constraints comes from new analysis tools borrowed from machine-learning research. In a standard fMRI study, neuroscientists average together the fMRI activation for neighboring voxels. This averaging makes it easier to detect differences between experimental conditions—viewing photos of faces versus places, for example—but it assumes that neurons from different voxels in the region of interest all behave the same way. That's almost certainly not the case, says Nikolaus Kriegeskorte, a neuroscientist at the National Institute of Mental Health in Bethesda, Maryland.

    A new analysis.

    Pattern classifiers can detect differences in the neural activity elicited by different stimuli such as speech sounds (middle: small colored dots represent the fMRI signal of individual voxels) that would be averaged out in the conventional fMRI analysis (far right).


    To sidestep this issue, Kriegeskorte and others have been working with statistical tools called multivariate pattern classifiers to take a finer grained look at brain activity that considers patterns of activation across many individual voxels without averaging. These methods shift the focus from trying to identify specific brain regions that are activated during a particular task to trying to identify how the relevant information is processed in the brain.

    The first demonstration of this approach was a study by cognitive neuroscientist James Haxby, now at Dartmouth College (Science, 28 September 2001, p. 2425). He and colleagues monitored brain activity elicited by hundreds of images of various types of objects, including faces, cats, houses, and scissors, and identified statistically distinct activity patterns elicited by each type of object.

    In 2005, two research teams published papers in Nature Neuroscience showing that similar methods made it possible to determine the orientation of lines a subject was viewing based on fMRI activation in the primary visual cortex, a feat previously thought impossible because neurons that share a preference for lines of a particular orientation pack into columns narrower than a voxel. That got even more people interested, says Rajeev Raizada of Dartmouth, who organized a session on these methods at an April meeting of the Cognitive Neuroscience Society in San Francisco, California.

    Raizada and others at the session presented a variety of new findings illustrating how this new analysis of fMRI data can reveal information processing in the brain that would be overlooked by conventional analyses. Raizada, for example, presented a study in which he and colleagues investigated fMRI responses to the sounds /ra/ and /la/ in the brains of 10 native English and 10 native Japanese speakers. The Japanese language does not distinguish between these sounds, and most native speakers can't hear the difference.

    Inside the scanner, each subject listened to six variations of each /ra/ and /la/ while the researchers collected fMRI data for each variation. Using a pattern classifier, Raizada determined that English—but not Japanese—speakers exhibited distinct activity patterns in the right primary auditory cortex for /ra/ and /la/. In fact, subjects who were best able to distinguish the sounds had the most distinct activity patterns. Each sound is apparently represented by different patterns—but similar overall levels—of neural firing in the auditory cortex of English speakers, Raizada says, which explains why the conventional fMRI analysis can't pick up this distinction.

    Other researchers are taking note of such findings. “This is an exciting new direction,” says Adam Aron, a cognitive neuroscientist at the University of California, San Diego. “Instead of looking at whether this or that brain region is activated, now you're talking about whether the activity in many different voxels can predict what people are seeing or hearing.” Poldrack predicts that classifiers will help rescue researchers from the logical perils of reverse inference. Instead of inferring that a photo of Mitt Romney induces anxiety, for example, researchers could collect patterns of brain activity evoked by known anxiety inducers (photos of spiders, snakes, and hypodermic needles, perhaps) and see whether the pattern Romney elicits is a statistical match.
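    The logic of the pattern-classifier approach can be illustrated with a minimal sketch (synthetic data only; the numbers, the two-condition setup, and the use of scikit-learn's linear support-vector classifier are illustrative assumptions, not details from the studies described). Each trial is a vector of voxel activations; the two conditions have the same average activation, so a conventional voxel-averaging analysis sees nothing, while a multivariate classifier decodes the condition from the spatial pattern:

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 80, 50  # 40 trials per condition, 50 voxels in the region

    # Simulate two stimulus conditions (say, /ra/ vs. /la/) whose mean activation
    # is matched but whose spatial patterns across voxels differ.
    pattern_a = rng.normal(0, 1, n_voxels)
    pattern_b = pattern_a.copy()
    pattern_b[:10] += 1.5                             # condition B drives a subset of voxels harder
    pattern_b -= pattern_b.mean() - pattern_a.mean()  # equalize the region-average signal

    # Each trial = its condition's pattern plus per-voxel noise.
    X = np.vstack([pattern_a + rng.normal(0, 1, (n_trials // 2, n_voxels)),
                   pattern_b + rng.normal(0, 1, (n_trials // 2, n_voxels))])
    y = np.repeat([0, 1], n_trials // 2)

    # Conventional analysis: average over voxels, compare conditions -> near zero.
    mean_diff = abs(X[y == 0].mean() - X[y == 1].mean())

    # Multivariate analysis: cross-validated decoding of condition from the pattern.
    acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
    print(f"voxel-average difference: {mean_diff:.3f}, decoding accuracy: {acc:.2f}")
    ```

    The point of the toy example is the contrast between the two numbers: the averaged signal is indistinguishable across conditions, yet the classifier recovers the condition well above chance, which is exactly why, as Raizada notes, a conventional fMRI analysis can miss distinctions that pattern analysis detects.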

    An expanding toolbox

    Yet even with the promise of these new tools, fMRI remains limited to revealing correlations between cognitive processes and activity in the brain. “The way to use it well is as one tool in a toolbox, as a way of testing hypotheses where you have converging techniques and evidence,” says Aron.

    To that end, growing numbers of neuroscientists are using fMRI and related methods to investigate the connectivity between different brain regions involved in cognitive functions such as language and memory. One fMRI approach is to identify brain regions whose activity is synchronized when subjects perform a given task. In some cases, researchers are probing further to determine if those areas that fire together are physically wired together, using a relatively new MRI method called diffusion tensor imaging that can visualize the axon tracts that connect regions in the living human brain.

    Others are trying to establish causal links between brain and behavior. Having linked a brain region to a particular behavior using fMRI, for example, some researchers are following up with transcranial magnetic stimulation experiments in which focused magnetic fields noninvasively and temporarily disrupt neural activity in that region. If the behavior is then altered, the region must play a role in controlling it.

    With such a convergence of methods and other advances, perhaps one day it will even be possible to divine the intentions of undecided voters. But that day does not seem near at hand. In the Times op-ed piece, the authors reported that their scans indicated that voters were “unengaged” with two candidates in particular, Barack Obama and John McCain, ironically, the two men now battling for the U.S. presidency.


    Don't Be Seduced by the Brain

    1. Greg Miller

    The images generated by functional magnetic resonance imaging may have a power to captivate that reaches beyond their power to explain.


    Few advances in neuroscience have generated as much public interest as the ability to see the human brain in action. The enthusiasm isn't hard to understand. Methods such as functional magnetic resonance imaging (fMRI) have enabled researchers to bring distinctly human attributes—love, faith, morality—under scientific scrutiny.

    But the images generated by such methods may have a power to captivate that reaches beyond their power to explain. Psychologists David McCabe of Colorado State University in Fort Collins and Alan Castel of the University of California, Los Angeles, recently asked 156 undergraduate students to evaluate several mock news articles describing brain-imaging studies; the research each described was bogus. One study, for instance, reached the dubious conclusion that because watching television and doing arithmetic problems both activate the temporal lobes of the brain, watching television improves arithmetic abilities.

    Students saw one of three versions of each article: the text alone, the text plus an fMRI image depicting activity in part of the brain, or the text plus a bar chart summarizing the fMRI result. Those who saw the brain image rated the scientific reasoning in the article as more compelling than did the others even though the images themselves added no relevant information, McCabe and Castel reported in the April issue of Cognition.

    People seem to believe that images of brain activity make a behavioral observation more real, says bioethicist Éric Racine of the Institut de Recherches Cliniques de Montréal in Canada. Racine calls this effect “neurorealism” and says it's often amplified by media coverage that oversimplifies research findings and glosses over caveats. In other words, don't let the pretty colors fool you. You don't need an fMRI scan to know that candy tastes good, pain feels bad, and television won't turn you into a genius at math.


    India's Education Bonanza Instills Hope--and Concern

    1. Pallava Bagla

    The government of India is embarking on a major expansion of its higher education system. But is quantity being substituted for quality?


    NEW DELHI—Indian higher education is in a funk. Too few institutions serve too few students, and there are too few professors to teach them. To remedy this, the government has unveiled an ambitious plan that would vastly expand access to higher education. Later this month, as a first step, three new Indian Institutes of Technology (IIT) will join the country's vaunted network of science universities; three more will follow in the coming weeks.

    Over the next 5 years, India plans to invest $21 billion in higher education, a whopping ninefold increase over the previous 5 years. “Our government is committed to investing more, much more, in education, especially science education,” Prime Minister Manmohan Singh said last January when he announced the initiative. Bricks and mortar are a high priority: The government intends to open eight IITs—six this year and two later on—as well as 82 other institutions (see box). “All this marks a quantum leap in the infrastructure available for good-quality teaching and research,” Singh said. Other elements include an expansion of scholarship programs and higher pay. Graduate students will receive a 50% hike in their living stipends to $300 a month, and research associates are getting a 33% salary boost to $400 a month. “A long-felt legitimate need has been met,” says science minister Kapil Sibal.

    Science for the masses.

    India's plan to boost education should make scenes like this biotechnology class at Guru Gobind Singh Indraprastha University in New Delhi more common.


    But some observers worry that India is moving too fast and that as a result quality will inevitably suffer. This year's crop of six IITs will increase the number of these elite institutions to 13. As a sign of how quickly the government is moving, the new IITs, based in Patna, in Medak, and in the states of Gujarat, Orissa, Punjab, and Rajasthan, will work out of makeshift campuses at existing universities until permanent facilities open in coming years.

    India has 378 universities and 18,064 colleges, and among men and women ages 17 to 22, the enrollment rate has increased from less than 1% in 1950 to about 10%, or 11.2 million young people, in 2007, according to the University Grants Commission (UGC) in New Delhi. Despite the gains, experts agree that India is underserved. “No country has been able to become an economically advanced country if its enrollment ratio in higher education has been less than 20%,” says UGC Chair Sukhadeo Thorat. (The gross enrollment rate in the United States and Canada is about 60%.) “Higher education is not serving the cause of young people of India,” says Arjun Singh, minister for human resource development.

    Part of the cure is to steer more young people into science. Under a new program called Innovation in Science Pursuit for Inspired Research, 1 million students bound for university over the next 5 years will receive $125 scholarships simply as a reward for considering a future career in science. Another initiative, Scholarships for Higher Education, plans to hand out 10,000 scholarships each year, worth $2200 apiece, to talented students to enroll in bachelor's and master's science courses. “We must make science a preferred discipline of study for our students,” Singh said.

    A key question is who will teach the throngs of new students. The 16 centrally funded Indian universities are already facing a shortfall of nearly 2000 teachers, and the IIT system has about 900 vacant faculty posts. According to the All India Council for Technical Education, almost a third of faculty positions in academia are unfilled. “Teaching is no longer a glamorous profession,” laments chemist Man Mohan Sharma of the University Institute of Chemical Technology in Mumbai. Entry-level industry jobs command salaries that are three times higher than those of tenured faculty. “Scholarly habits are dying,” says Sharma. That poses a conundrum as India's higher education system grows. “I don't know from where quality faculty would come to teach in these new institutions,” says J. S. Rajput, former director of the National Council of Education Research and Training in New Delhi.

    For that reason, some eminent outsiders are urging India to proceed with caution. “Don't do it too fast. Institutes are limited by the few good people who need to be nurtured,” says David Baltimore, a Nobel laureate at the California Institute of Technology in Pasadena, who earlier this year toured Indian labs. He suggests that India “build a couple of really fine institutions” over a few decades. P. V. Indiresan, former director of IIT in Chennai, shares that view: “Over-expansion of university education cheapens it.” It seems, Indiresan says, that India “prefers large quantity with bad quality rather than a small quantity of high quality.”

    Going slow was never an option, says Sam Pitroda, chair of the National Knowledge Commission in New Delhi, which advises India's prime minister. “There is still resistance at various levels in the government to new ideas, experimentation,” he says. That means, for now, that it is full steam ahead for India's education expansion.


  12. New Efforts to Detect Explosives Require Advances on Many Fronts

    1. Yudhijit Bhattacharjee

    A boost in U.S. government funding is stimulating research on new ways to stop terrorists before they strike in public places.


    Under the radar.

    The four London bombers were caught on a surveillance camera as they entered a train station in July 2005.


    The four young men who walked into the Luton railway station outside London with backpacks on their shoulders on the morning of 7 July 2005 looked like students on their way to college. They turned out to be suicide bombers. Between 8:50 and 9:47 a.m., they triggered explosions inside subway trains in London and on a double-decker bus that killed 52 commuters. Their weapon of choice was an organic explosive concocted from easily available chemicals.

    The previous year, in Madrid, terrorists had used cell phones to detonate bombs that killed 191 people on four trains. The two incidents are a grim reminder that it is virtually impossible to detect explosives or would-be bombers from a distance. In addition, detection methods now in use at airports and checkpoints, including metal detectors, x-ray screening of baggage, and mass spectrometry of swabs taken from suspicious bags and individuals, are woefully inadequate.

    Faced with the ongoing threat from such improvised explosive devices (IEDs)—the notorious weapons of choice in Iraq and Afghanistan—governments around the world have stepped up efforts to detect them before they wreak havoc in crowded subways, stadiums, shopping malls, and other public settings. Last year, the U.S. National Science Foundation (NSF) awarded $20 million for basic research on sensors, imaging tools, surveillance systems, and other techniques to detect explosives. And in February, the University of Rhode Island (URI) in Kingston and Northeastern University in Boston each received the first $2 million of what could become $12 million grants from the Department of Homeland Security (DHS) to create centers on explosives detection. “We need to engage the academic community more fully to tackle the IED problem,” says Douglas Bauer, program manager for explosives research at DHS's science and technology directorate.

    The new approaches include sensors to pick up the faintest whiff of explosives in the air; imaging tools to detect, from afar, a bomb strapped to a person's body; and software to sift through video surveillance for suspicious behavior. The diversity is intentional. “There's not going to be a silver bullet,” warns Michael Silevitch, an electrical engineer at Northeastern who co-directs the new DHS center.

    Better sensors

    When security personnel at airports want to check a bag for explosives, they perform a quick chemical analysis by wiping the bag with a swab and running the swab through a spectrometer. A team led by chemist Vinayak Dravid at Northwestern University in Evanston, Illinois, hopes to make such chemical sensing of explosives ubiquitous and automatic. In a 2006 Science paper (17 March 2006, p. 1592), Dravid and his colleagues showed that a device made by attaching a microcantilever beam to a transistor base can work as a biological sensor. The targeted biological molecules bind to receptor molecules coated on the cantilever's surface and cause it to bend. The stress from the bending is transmitted to the cantilever's base, where it dampens the mobility of electrons in the transistor and reduces the flow of current. The change in current signals that the biological entity has been sensed. Dravid and his colleagues want to use the same principle to detect vapors of explosive compounds in the air by coating the cantilever beam with a film that selectively absorbs molecules of explosives such as TNT.

    Researchers have already developed such films from molecules such as thiosalicylic acid, which shows partial selectivity for TNT. One idea is to shoot a current through the beam to ignite a miniexplosion of a TNT molecule on the surface, which would deflect the beam and momentarily reduce the transistor current. Another idea is to shine different frequencies of infrared radiation on the cantilever; it would bend only at the frequency absorbed by the explosive. The resulting drop in current would alert authorities to a potential threat.

    Dravid envisions a sensor system with arrays of cantilevers that sense the same explosive molecule through these different mechanisms. “If all of them got triggered, you could be reasonably certain that the explosive was in the environment,” says Dravid. Such a system could be installed in the ceiling of a subway station or an auditorium and connected wirelessly to the nearest police station.

    Researchers at Johns Hopkins University (JHU) in Baltimore, Maryland, are trying another chemistry-based approach, using semiconducting polymer films that have pores shaped to allow explosive molecules to embed in them in the same way a key fits into a lock. As the explosive molecules get lodged in the polymer, its conductivity goes up. “The big advantage is that this could be deployed on a wide scale at a low cost,” says JHU chemical engineer Howard Katz, co-principal investigator of the project. Katz imagines lining the walls of airports and subways with the polymer, which would transmit alarm signals to a computer monitored by security officials.

    Neither approach has been tested, and Katz and Dravid are still far from building a prototype. Because the vapor pressure of many explosive compounds is very low, any device must ensure that the air samples delivered to the sensor contain a high enough concentration of explosive molecules to register.

    Another approach to finding explosives is to pick out their molecular signatures from afar. Xi-Cheng Zhang, an electrical engineer at Rensselaer Polytechnic Institute in Troy, New York, thinks that terahertz (THz) beams might be the answer.

    THz radiation lies between the infrared and microwave parts of the electromagnetic spectrum and can pass through barriers such as clothing and plastic without being a health hazard. In recent years, some airports have begun testing THz imaging systems to look for concealed weapons on airline passengers. Researchers have also found that most explosive molecules absorb THz radiation at frequencies between 0.5 and 1.0 THz, so explosives stand out in reflected THz beams. Zhang and his colleagues, including Northeastern's Silevitch, are hoping to exploit this property to detect explosives from distances of 100 meters or more.

    Silevitch's goal is to “find a suicide bomber standing a football field away from us,” a distance that would give security forces time to act. One fundamental problem is that water vapor in the air degrades THz signals at the frequency useful for detecting explosives. So far, Zhang says, he and his colleagues have demonstrated the concept at distances ranging from 3 to 30 meters, the latter on a cold, dry winter day that offered ideal conditions for the technology. In experiments under more humid conditions, the detectable range dropped to less than 10 meters.

    Long-range detection of explosives would make security at airports and other venues much less intrusive, says URI chemist Jimmie Oxley, co-director of the DHS center. “Our vision for airports is to go back to the past, when there were no checkpoints,” she says. “You would be scanned, but you would not know it.”

    Outsmarting the terrorists

    Researchers must also cope with the ever-changing nature of the threat. In recent years, security officials have been alarmed by the increasing use of homemade explosives such as triacetone triperoxide (TATP) and hexamethylene triperoxide diamine, which were used in the London bombings. Richard Reid, the man who tried unsuccessfully to set off an explosive in his shoe aboard a 2001 American Airlines flight from Paris to Miami, intended to use TATP as the detonating charge for another, more conventional explosive.

    Sniffing a hazard.

    Security officials say using dogs to search for explosives is not good enough.


    Derived from ingredients such as acetone and hydrogen peroxide, TATP is easy to concoct. “With $17 worth of chemicals from the department store, you can make enough TATP to bring down a Boeing 747,” says Ehud Keinan, a chemist at Technion-Israel Institute of Technology in Haifa who has studied TATP for 2 decades. Conventional explosives such as TNT, by contrast, are harder to manufacture and, thus, are typically stolen from military arsenals or purchased on the black market.

    Peroxide-based explosives present a novel challenge for security agencies. Unlike conventional explosives made from nitrates, TATP has a low molecular weight, which makes it difficult to detect with standard mass spectrometry. Police and airport authorities have begun training dogs to sniff out TATP. Some airports are now equipped with ion mobility spectrometers, which can detect peroxide-based explosives from swabs. And some companies, including one owned by Keinan, are selling devices specifically to detect these materials. But officials say better and cheaper detection tools are needed.

    Oxley and others have been working with DHS to update the list of organic compounds that terrorists could form into explosives. “Anticipating the next material is one of our goals,” she says. “The hard part is to determine when an innocuous but energetic material can become detonable. It's usually a matter of scale.” Oxley hopes to determine that threshold for a number of compounds.

    At the Naval Postgraduate School in Monterey, California, computer scientist Neil Rowe and his colleagues are pursuing another approach: detecting terrorists before they strike by developing software to identify suspicious behaviors in video feeds. In one experiment, the researchers instructed volunteers to act suspiciously in a parking lot, for example, by leaving a box outside a car. A software program tried to spot these anomalous activities by analyzing factors such as a sudden acceleration in a person's movement, indicating flight from a crime scene. The software correctly flagged two-thirds of the target behaviors observed by surveillance cameras.

    Rowe acknowledges that deploying such a system could yield an unacceptably high number of false positives and that knowing how a suicide bomber might behave in anticipation of an attack is still a distant goal. “You have to look for a number of subtle clues,” he says. But a high failure rate doesn't ruffle Bruce Hamilton, program director for NSF's explosives and related threats program. Blue-sky research, he says, is key to “tackling this horrible problem.”

  13. A Second Chance for Rainforest Biodiversity

    1. Erik Stokstad

    As ever more of the Amazon falls under the ax, a large-scale project is helping to clarify how well various tropical species survive in recovering forests.


    Rich habitat.

    Uncut rainforest near the Jari River in northeastern Brazil.


    In 1967, an American billionaire named Daniel Ludwig purchased 16,000 square kilometers of rainforest in Brazil—an area half the size of Belgium. Ludwig, who had made his fortune building supertankers, was betting on a paper shortage and hoped to boost his wealth by growing Eucalyptus trees for pulp.

    Thinking big, Ludwig shipped a preassembled paper mill from Japan and floated it up the Jari River. He built a new town, and his workers chopped down about 1300 square kilometers of rainforest to make way for the plantations. The rest remained untouched. After a little more than a decade, however, the scheme failed. Stymied by rising energy costs and business setbacks, Ludwig pulled out. Logging continues in the area, but many of the clear-cuts have been returning to the wild.

    Ludwig's losses have been science's gain. Given the rate at which rainforests are being cleared, some ecologists say there is a growing need to turn more attention to the woods that sprout up in their place. Whether the land is left to its own devices or managed by humans as tree farms, these second-generation ecosystems are coming to dominate the wooded landscape. Attracted by the Jari property's combination of intact rainforest, vast tree plantations, and regenerating forest, Carlos Peres recognized it was a perfect place to figure out which species persist where. “If you're trying to predict the future, this is what you need to do,” says Peres, a wildlife biologist at the University of East Anglia in Norwich, U.K. He and his team have now published their follow-up of Ludwig's folly in a series of recent papers.

    Different fates.

    The harlequin toad is one of many species that require old-growth forest, whereas the black-spotted barbet can survive in regenerating forest.


    This research is by no means the first to look at the biodiversity of so-called secondary forests—those allowed to regrow on their own—and plantations, but it is one of the largest and most rigorous assessments in the tropics. “It's comprehensive enough that the results are convincing,” says ecologist Robert Dunn of North Carolina State University in Raleigh. Whether those results are good news or bad news, however, is a matter of debate.

    “The big take-home message is that there are a lot of species missing” from secondary forests and plantations, Dunn says. And for Peres's team, the findings reinforce the need to conserve the remaining old-growth tropical forests. “Primary forest is even harder to replace than many researchers expect,” says Toby Gardner of the Federal University of Lavras in Brazil. “For many species, once these virgin forests have gone there is nowhere else to go.”

    Drawing on these and other findings, other ecologists accentuate the positive. They point to the species that can cope, even thrive, in secondary forests and plantations. “There is a huge opportunity for conserving forest ecosystem functions and biodiversity,” says tropical ecologist Daniel Nepstad of the Woods Hole Research Center in Falmouth, Massachusetts. Ultimately, the amount of diversity that persists in the Amazon will be determined by how much land is set aside—and by how hard humans work the rest.

    Return of the forests

    The statistics are grim for old-growth forests. The United Nations Food and Agriculture Organization estimated in 2005 that just 36% of the world's forests remain relatively untouched by humans. That fraction is disappearing quickly in the tropics, by as much as 12% per year, much of it destroyed by slashing and burning for fields or pasture for cattle.

    Yet tropical trees are making something of a comeback. Clear-cut areas and abandoned farms are being turned into timber plantations or being reforested as part of government programs (Science, 23 February 2007, p. 1070). In parts of Latin America and elsewhere, trees are planted for side benefits to agriculture, such as shade and the “live fencing” they can provide.

    And when the land is left alone, new saplings take hold, blossoming into secondary forests. “The amounts of land involved are absolutely staggering,” says S. Joseph Wright of the Smithsonian Tropical Research Institute in Balboa, Panama. According to one global analysis, for every six or seven hectares of tropical forest cut during the 1990s, one hectare regrew (Science, 9 August 2002, p. 999). Costa Rica and Puerto Rico now have more secondary forest than primary. Because these new landscapes will eventually dwarf the intact forests preserved in national parks and other reserves, ecologists say these reborn places will be critical for the future of tropical biodiversity.

    But relatively little is known about the potential of this habitat to serve as a refuge for the same species that depend on old-growth forest. Scientists have tended to focus on tropical forests that show no obvious sign of direct interference, in part because they are storehouses of diversity and are disappearing quickly. “Most secondary forests have been seen as trammeled and uninteresting,” says geographer Susanna Hecht of the University of California, Los Angeles (UCLA). In fact, “they're much more diverse than people think.”

    Most of the research on secondary forests has been done in Costa Rica and other Mesoamerican countries, where original forests were mostly converted to agriculture decades ago. Patches of that land have slowly reverted to forests, while the rest remains in cultivation. Such studies have tended to be small-scale, so the results don't readily apply to the Amazon's immense swaths of deforestation. “From the perspective of conserving rare species, the whole literature missed the effect of scale and disturbance,” says Dunn, who published a meta-analysis in Conservation Biology in 2004.

    New growth.

    When cleared land is left alone, secondary forests like this one in Mato Grosso, Brazil, can take hold.


    Testing for biodiversity

    The Jari landholdings have no shortage of large-scale disturbance. Peres, who grew up in the Brazilian Amazon, had visited the plantations as a teenager. Looking for a new research project in 2002, he recalled their vast size and set up shop there to assess the local biodiversity. Working primarily with his Ph.D. students Gardner and Jos Barlow, Peres initially surveyed a half-dozen major kinds of animals. But as collaborations flourished with Brazilian taxonomists from the Goeldi Natural History Museum in Belém, Brazil, that number swelled to 16 groups of vertebrates, invertebrates, and plants.

    Half the battle was logistical: It was a struggle to keep the team's cars running given the daily 200 kilometers of off-road driving between field sites. Another strain was cutting transects through dense thickets of regrowth—hot, humid forests dominated by 10-meter-tall palms. “It was a crazy few years in the field,” recalls Barlow, now at Lancaster University in the U.K.

    Unlike many other tropical researchers, the team was able to set up multiple field sites, five each of primary forest, secondary forest, and Eucalyptus plantations. The sites were also extremely large—averaging 26 square kilometers for the secondary forest plots, up to 1000 times larger than field plots in previous studies.

    Large plots allowed the team to minimize so-called edge effects. If animals spotted by observers are simply visiting the secondary forest from nearby primary forest, they will inflate the estimate of biodiversity that would exist, say, in a forest tract that is isolated in “a sea of soy,” Gardner explains. “We maximized our ability to understand what lives in the landscape.” And because the primary forest study sites are both large and surrounded by many more hectares of intact forest, they could get an accurate baseline of prelogging biodiversity.

    The study's good news was that the secondary forests restored some of the ecosystem functions of the primary forests. The rate of decomposition of fallen leaves, which replenishes the soil, was about the same in primary and secondary forests (it was much lower in the plantations), the team reported with Leandro Ferreira of the Goeldi Museum in the August 2007 issue of Forest Ecology and Management.

    But for many creatures, the news was bad (see chart). Secondary forests had less than 40% of the bird species found in the Jari primary forest, and those present tended to be species that prefer disturbed areas. The 14- to 19-year-old secondary forests “clearly failed to compensate for the loss of primary habitats and the habitat specialists they contain,” the team concluded in the April 2007 issue of Biological Conservation. Amphibians, trees, and the woody vines called lianas that are common in tropical rainforests were also particularly depauperate.

    Biodiversity index.

    The percentage of old-growth forest species that survive in Eucalyptus plantations (above) and secondary forests varies from group to group, habitat to habitat.


    Plantations were even less suitable refuges for most old-growth taxa. The rows of 4- to 6-year-old Eucalyptus trees harbored just 20% of the bird species found in primary forest. Yet bats and fruit flies did just as well in plantations as in secondary forests, and grasshoppers did better. A summary paper published in the 20 November 2007 Proceedings of the National Academy of Sciences charted all the trends.

    Decreased animal diversity is cause for concern about the health of secondary forests, the team says. In a paper published in the May Journal of Applied Ecology, Malva Hernández of the Universidade Federal da Paraíba, Brazil, and others reported that the “exceptionally impoverished” dung beetle communities in secondary forests could have ecological repercussions, as the beetles bury many kinds of seeds, helping to repopulate the flora. Studies of dung beetles elsewhere have not seen such a stark difference in their diversity among habitats, but the team says the larger study plots make the new findings more reliable.

    For some groups, total diversity—not just old-growth species—didn't change much. Species richness of scavenger flies and mammals, for example, was not measurably different among the three habitats studied by Peres and his colleagues. However, the species were not the same from one forest type to another. In the November issue of the Journal of Tropical Ecology, undergraduate Luke Parry of the University of East Anglia and the Jari team reported that secondary forests had more ungulate browsers but fewer fruit-eating monkeys and particularly lacked vertebrates that disperse large seeds.

    The fieldwork has wrapped up, and now the team is refining its estimates of how much diversity is lost when forests are cut down and then regrow. Overall, Barlow says, the latest work is showing that widespread conversion of primary habitats to secondary forests results in species losses worse than they reported in November: Tree diversity dropped by as much as 86%, for example. “These results highlight the overwhelming importance of primary forest,” he notes.

    A bright side

    In some ways, the results from the Jari landholdings foretell a dire future for forest biodiversity in the rest of the Amazon. Clear-cutting and burning of primary forests, such as what this area endured in the 1970s, are particularly damaging to any next generation of forest because those practices compact soil and alter its chemistry. The loss of tree canopy also makes the land reflect less sunlight; over large areas this change influences weather, reducing rainfall and drying the soil. The altered environs drive away animals. Once they vanish, plants that rely on those species to disperse their seeds have trouble reproducing and may not get reestablished. These severe impacts continue across the Amazon today.

    Moreover, secondary forests throughout the Amazon aren't given enough time to recover the biodiversity of primary forests. “For some [taxonomic] groups, it may take 200 to 300 years to get a pale shadow of what a primary forest contains,” Peres says. In Jari and elsewhere, regrowing forests are logged within 2 decades, and the plantations are cut even more frequently.

    But that hardly makes them worthless. Secondary forests can have their own conservation benefits, says David Lindenmayer of Australian National University in Canberra. In some places, they provide a buffer around protected forests, dampening the impact of development and other human activities. And secondary forests usually benefit species that do best in disturbed areas, Lindenmayer notes.

    Furthermore, other species can often do just fine with just a semblance of old-growth forest structure—an understory and a canopy with trees and gaps of various sizes, for example. “It's not actually the whole forest that needs to be [old-growth],” he says.

    As has been shown in temperate and tropical forests, foresters can salvage biodiversity by retaining some of the largest trees. A few giants can have “a big effect on plantations,” Lindenmayer points out. Within secondary forests, an approach called selective logging—where most of the forest is left in place—can make a huge difference, says UCLA ecologist Stephen Hubbell. If this practice is widely adopted, secondary forest “biodiversity will be okay,” he says.

    Peres's team hopes to continue working in the Jari area, identifying other ways that the biodiversity can be enhanced in the plantations. And even though the loss of biodiversity in the plantations is sobering, Barlow says the overall situation in Jari may be positive. The Brazilian company that owns the land behind Ludwig's grand scheme is now making a profit selling pulp from the plantations and is only selectively logging the primary forest.

  14. Critical Time for African Rainforests

    1. Robert Koenig

    As threats to the Congo Basin's vast forests grow, scientists race to sharpen assessments and stem destruction.


    Congo rapids.

    Rough water near Kinshasa impedes the transport of logs.


    KINSHASA, DEMOCRATIC REPUBLIC OF THE CONGO (DRC)—From a workshop behind her house, botanist Terese Hart can glimpse log-filled barges churning down the Congo River toward a nearby sawmill. Such traffic had come to a virtual standstill during the nation's civil conflicts, but now, she says, the “lights are blazing at night” as massive logs from the forests of Bandundu and Équateur provinces are fed, around the clock, into the jaws of giant saws.

    At nearly 2 million square kilometers, the Congo River Basin's dense tropical rainforest is second in size only to the Amazon's. In Heart of Darkness, novelist Joseph Conrad—who piloted a steamboat on the Congo a century ago—described this as “impenetrable” territory, where “the big trees were kings.”

    Although deforestation is a severe problem in parts of the continent, central Africa's rainforests have so far avoided that fate. A recent analysis estimated that Africa accounted for less than 6% of the total loss of humid forest cover during the 1990s, whereas Brazil's loss represented nearly half of the total. The DRC's remoteness, political instability, bad roads, and unnavigable river rapids had helped save large tracts of its forests from exploitation. But forest degradation has been worsening in other Congo Basin countries, and a combination of factors over the past few years—including a sharp population spike in the eastern DRC and the mounting Asian interest in African timber—has raised the ax over Conrad's “kings.”

    The DRC contains more than 60% of the basin's remaining forests, and “the new scramble for central African resources, exerting massive pressures to open up frontier areas, has the potential to culminate in a ‘perfect storm,’” says William Laurance of the Smithsonian Tropical Research Institute (STRI) in Panama, who has studied the impact of logging on wildlife in several rainforests.

    Wood sale.

    Local needs for lumber take a toll on Congo forests.


    At issue are “the loss of biodiversity, a massive waste of forest resources, the decline of rainforest people, and—in the long run—possible climate change,” warns University of Kinshasa botanist Constantin Lubini, whose garden is an oasis of flowering trees amid the dusty chaos of Kinshasa's Debonhomme quarter, where vendors sell charcoal along with bread and meat. In the region's fast-growing cities, the widespread use of charcoal and wood for cooking has taken a heavy toll on nearby forests.

    Hart, Lubini, and other scientists—from big-picture geographers who scrutinize satellite images to on-site botanists who measure every sapling on 40-hectare rainforest plots—believe the next few years will be critical in determining the future of what is probably the least exploited yet most scantily studied of the world's humid forest regions. In conjunction with the 6-year-old Congo Basin Forest Partnership (CBFP)—an international association of government officials, nongovernmental organizations, and conservation experts—these researchers are now applying satellite maps, in-depth forest studies, and other tools to help policymakers limit the sort of large-scale deforestation that is now decimating rainforests in the Amazon and Far East.

    Eyes in the sky

    Earth-observation satellites have become the watchdogs for deforestation in remote areas, helping document regions in trouble. At the cluttered Kinshasa office of the OCEAN (Organisation Concertée des Ecologistes et Amis de la Nature) ecology group, director René Ngongo negotiates through the crowd—from shouting pygmies to low-key forest analysts—toward a map taped to the wall. “These red spots show what's been destroyed,” he says, tapping the satellite-generated map of Congo Basin forest change, “but there is still more green forest, and we want to keep it that way.” The problem is that the map (below) shows the forests as of 2000, and the next update won't be available until later this year.

    For many technical reasons, remote sensing of Congo forests has lagged behind studies of the Amazon, which “is a much easier place to monitor,” says geographer Matthew C. Hansen of South Dakota State University's Geographic Information Science Center of Excellence in Brookings. Persistent cloud cover prevents clear images of some Congo Basin areas, requiring far more images to be processed. Also, central Africa has no dish to receive data; thus, researchers get relatively few images from the Landsat satellites. And because of a glitch in Landsat 7, its images have been flawed since 2003. To make matters worse, it is far more difficult to detect the “selective” logging of just a few trees per hectare that is standard in the Congo than it is to identify clear-cut areas, typical in the Amazon. Forest change “is huge in the Amazon,” Hansen says, making it simpler to map deforestation.

    To tackle those challenges, North American and European groups are bringing new analyses to bear on impoverished Congo data sets. In 2003, the U.S.-funded Central African Regional Program for the Environment commissioned Hansen and Christopher Justice of the University of Maryland, College Park, to produce a decadal deforestation map. It took Hansen, Justice, and their team 3 years to automate the calibration of the infrequent but higher resolution Landsat images with data from a lower resolution NASA instrument (MODIS) that measures tree cover. This map, released last year, shows that much of the forest loss during the 1990s occurred near densely populated areas in the eastern DRC, along principal rivers, and at the basin's periphery. Even though the map is already 8 years out of date, Ngongo and other Congolese activists and officials regard it as a useful baseline for further research.

    Deforestation hot spots.

    Based on satellite images, this map shows that forest loss occurred mainly along the Congo River and near the Uganda and Rwanda borders (far right), areas of rapid population growth.


    Meanwhile, a group led by Belgian forester Philippe Mayaux of the Joint Research Centre's Institute for Environment and Sustainability in Ispra, Italy, used a less comprehensive “grid sampling” technique to parse Congo forest trends across the whole basin from the satellite data. In a paper in the 15 May issue of Remote Sensing of Environment, they concluded that the basin's deforestation rate for the decade ending in 2000 was nearly 0.2% per year. In addition, the rate of forest degradation (thinning of forested areas) was 0.1%. The deforestation was low compared with the Amazon's annual rate of about 0.5%, but it is still of concern because on-the-ground reports indicate that logging in the Congo region is escalating.

    Some new data sets also show promise. Nadine Laporte, whose teams at the Woods Hole Research Center in Falmouth, Massachusetts, have been studying logging roads and biomass in the Congo Basin, says the Chinese-Brazilian CBERS satellites may offer cost-free data to African institutions. Microwave radar imagery from Japanese and Canadian satellites is now helping some scientists better assess forest trends in persistently cloud-covered coastal areas. And Laporte's group is searching for ways to use data from NASA's LIDAR laser-pulse sensor to calibrate optical imagery and improve estimates of forest biomass.

    Perhaps more importantly, French and British officials are separately considering plans to help build a receiving dish in central Africa to acquire and store up-to-date data from satellites as they fly over. Without such additional data, Hansen says, “you can only do accurate update maps for the entire basin every 3 to 5 years.” Looking forward to such data, the University of Maryland's Paya de Marcken is now training central African scientists at a new remote-sensing lab at the University of Kinshasa to handle incoming images.

    On the ground

    Although the Congo satellite maps are outdated, they drive home the vulnerability of the forests. At his office laptop in Kinshasa, Belgian geographer Benoit Mertens opens a satellite map, defines a forest area, and enlarges the pinpointed section to reveal that it is crisscrossed by what he calls “a wishbone pattern” of roads. They are a clear indication that the tract is being logged, says Mertens, who works for the World Resources Institute's Global Forest Watch project. Laporte of Woods Hole says her group's analysis of Landsat images for evidence of forest roads showed considerable logging road construction (about 460 kilometers per year) in the north-central DRC (Science, 8 June 2007, p. 1451). And, says Laporte, “you can make a pretty good assessment of the extent and intensity of logging from the road maps.”

    What's harder to assess is the relative impact of industrial versus “informal” (sometimes illegal) logging. Mertens coordinates a five-country project to monitor the basin's timber industry. The DRC's 156 logging concessions control about 21 million hectares and take out about 500,000 cubic meters of timber a year. But Mertens and other experts say that chainsaw-wielding freelance loggers or farmers who practice “slash-and-burn” agriculture now account for more DRC forest degradation than industrial timber operations.

    Informal logging takes place along many roads and in the forest fringes, with most of the timber used for local fuel or exported from the northeastern DRC to nearby Uganda, where population increases are driving up demand for wood. French forest scientist Robert Nasi of the Center for International Forestry Research in Bogor, Indonesia, estimates that Kinshasa alone (with a population of 8 million) consumes about 4.5 million cubic meters of wood equivalent per year for charcoal. “If you consider that all of the major cities are using fuel wood or charcoal, it dwarfs the selective logging harvest by more than an order of magnitude.”
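The scale comparison behind Nasi's "order of magnitude" remark can be checked directly from the figures quoted here, namely his estimate of Kinshasa's annual fuel-wood consumption and the industrial harvest reported for the DRC's logging concessions. This is only a back-of-the-envelope sketch using the article's own numbers:

```python
# Back-of-the-envelope comparison using figures quoted in the article.
kinshasa_fuelwood_m3 = 4_500_000   # wood equivalent consumed per year in Kinshasa (Nasi's estimate)
industrial_harvest_m3 = 500_000    # annual timber cut by the DRC's 156 logging concessions

ratio = kinshasa_fuelwood_m3 / industrial_harvest_m3
# Kinshasa alone consumes ~9x the industrial harvest; adding the other major
# cities' fuel-wood demand is what pushes the total past an order of magnitude.
print(f"Kinshasa fuel wood is {ratio:.0f}x the industrial timber harvest")
```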

    But Susanne Breitkopf, who monitors Congo Basin forests for Greenpeace, contends that industrial logging “is now the main threat to the forest” in some major provinces, “not only because of the direct impact of logging on wildlife and ecosystems but also because it acts as a catalyst for further destruction, opening once-remote areas to increased levels of hunting, settlement, and agriculture.”

    Tracking forest fauna and flora

    Remote-sensing data can provide important mapping information, but it takes researchers on the ground—in the midst of the forests—to shed light on exactly how the Congo Basin's forests are changing and what they contribute to the global carbon cycle. Numerous groups are now studying the impacts of civil wars, forest degradation, mining, and other factors on the region's flora and fauna.

    Some scientists have been tracking the fate of animals that live and breed in the DRC, from giraffelike okapi to great apes. Those species may be at greater risk than the trees around them. The sharp increase in forest hunting and the bush-meat trade, exacerbated by the civil conflicts and the incursion of logging roads into the deep forests, has emptied some landscapes. “Mammals are no longer seen along the roads in many forests,” observes ecologist Julien Punga-Kumanenge, who says that the DRC's bush-meat trade—the world's most extensive—has become so widespread that “even big snakes are sold in the markets.”

    Logging itself can lead to wildlife hazards as well. In a coastal study that directly links logging to endangered marine species, a team led by the Smithsonian's Laurance reported this year that many sea turtles that are climbing onto Gabon beaches to nest “are being tangled, impeded, and killed” by thousands of lost logs that block the way to traditional nesting sites. “This is highly relevant because the region contains some of the most important nesting areas in the world for sea turtles, including the critically endangered leatherback turtle,” says Laurance.

    Perhaps the longest running research survey in the DRC is the Ituri Forest project, part of STRI's Center for Tropical Forest Science (CTFS) initiative. Since 1993, Congolese forest scientist Jean-Remy Makana and colleagues have been measuring and assessing all the trees and woody vines in a 40-hectare plot, part of the 21-site CTFS network that monitors more than 3 million tropical trees across the globe. The Ituri studies have found about 470 species of trees and shrubs, along with 240 species of liana (woody vines). “Most of the diversity is not in the big-tree category but in the ‘treelet’ subcanopy category and in the lianas,” says botanist Hart.

    Logged in; burnt out.

    Stray logs keep a sea turtle from its nesting site. Elsewhere, a woman makes charcoal from felled timber.


    Recent studies of biomass across the CTFS sites have indicated that the Congo Basin forest has among the highest carbon content per hectare of any rainforest, perhaps because of the density of its flora. If the Ituri site is typical, then preservation of the Congo Basin would do relatively more to prevent carbon release than preserving forests elsewhere, Makana says.

    Such carbon accumulation could provide incentives to preserve the DRC's forests if the government allowed local forest inhabitants, such as the region's half-million pygmies, to engage in carbon trading. Local groups could lease forest tracts from the government and then sell “carbon units” valued according to the amount of deforestation circumvented. These earnings would serve as an alternative to logging income.
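The carbon-trading idea described above amounts to a simple avoided-deforestation calculation: the value of a leased tract scales with how much expected forest loss is prevented. The sketch below illustrates the arithmetic only; the tract size, carbon density, and carbon price are hypothetical placeholders, and the baseline loss rate is taken from the basin-wide figure (~0.2% per year) reported earlier in the article:

```python
# Toy avoided-deforestation credit calculation; all parameters except the
# baseline loss rate are hypothetical illustrations, not real market figures.
area_ha = 10_000          # leased forest tract in hectares (hypothetical)
baseline_loss = 0.002     # ~0.2%/yr basin-wide deforestation rate (Mayaux et al.)
carbon_t_per_ha = 200     # tonnes of carbon per hectare (hypothetical; Congo
                          # forests are among the most carbon-dense rainforests)
price_per_t_co2 = 5.0     # price per tonne of CO2 (hypothetical)
c_to_co2 = 44 / 12        # molecular-weight conversion: tonnes C -> tonnes CO2

avoided_co2 = area_ha * baseline_loss * carbon_t_per_ha * c_to_co2
revenue = avoided_co2 * price_per_t_co2
print(f"Avoided emissions: {avoided_co2:,.0f} t CO2/yr, worth ${revenue:,.0f}/yr")
```

Under these made-up parameters a tract this size would yield on the order of tens of thousands of dollars per year, which is the kind of alternative income stream the scheme envisions for local groups.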

    Even if the carbon scheme proves unfeasible, an ongoing effort to preserve vast tracts of the Congo Basin's rainforests—focusing on a dozen large-scale “landscapes” with a total area larger than Texas—is showing potential. The Mayaux and Hansen studies both indicate that the “landscapes” selected by CBFP were less affected by deforestation and logging exploitation, at least through 2000. Scientists are now studying on-the-ground conservation in those landscapes. And activists such as Hart—who wants a new “landscape” designated in the central DRC—are calling for more stringent protection of biodiversity within them.

    All of these efforts would be better off with more science behind them, says engineer Somnath Baidya Roy of the University of Illinois, Urbana-Champaign, who has developed a mathematical model to project how deforestation might influence climate in key central African parks and reserves. He and others are calling for more extensive land- and sky-based data and more intense research to improve methods of predicting the impacts of deforestation. Says Roy: “We need to do the same sort of work in the Congo Basin that has been done in the Amazon and elsewhere.”

  Letting 1000 Forests Bloom

    1. Virginia Morell

    A logging ban has allowed a hot spot of China's biodiversity to recover from decades of clear-cutting, but threats still loom.

    Now protected, the forested home of the giant panda has been coming back.


    In 1972, Yin Kaipu, then a young ecologist, hitched a ride aboard a loaded logging truck to a village at the base of the spectacular Qionglai Mountain Range in China's Sichuan Province. To keep from falling off, Yin and his professor, Liu Zhaoguang of the Chinese Academy of Sciences' Chengdu Institute of Biology (CIB) in Sichuan, wrapped their arms around the logs as the truck lurched down the winding dirt roads to the headquarters of the government-run logging company. What they saw was disheartening: The mountains along the road and close to the village were “shaved like a monk's head,” Yin said, recalling his professor's dismay at the extensive clear-cutting.

    “He knew that many special plants were being lost,” along with the “habitat of many species,” says Yin. The logging also posed a threat to a Chinese icon, the giant panda (Ailuropoda melanoleuca), which lived in small, isolated parts of the mountains' dense bamboo forests. Decades later, Premier Zhu Rongji echoed Liu's concern, decrying the Sichuan loggers as “tigers eating the whole sheep” when he toured these and other ranges in westernmost Sichuan in 1997.

    Together, the ranges make up the 700,000-square-kilometer Hengduan Mountain Region, home to the world's most biologically diverse temperate forest, hosting 40% of all of China's plant species. Today, thanks in part to efforts by Yin and many others, the “logging tigers” have been tamed. After two devastating floods, the Sichuan government imposed a logging ban in 1998. Much of the region is being preserved in parks and reserves. And under a national program, local farmers are being rewarded for planting trees on fragile slopes.

    But Hengduan's natural forests are not yet out of the woods. The logging ban is due to expire in 2010, and economic and population pressures still loom as threats to the recovery of the region's forests. What happens to this unique ecosystem could be an indicator of the prospects for other parts of China that have been ravaged by rapid industrialization and population growth. “The end of the logging was the first step,” says Yin. “The challenge is finding a balance between people's needs and protecting the forest.”

    Hengduan rarities.

    Ecologist Yin Kaipu fought to preserve the Hengduan's local plant diversity, including the regal lily (right), which has become a prime garden plant.


    Logging legacy

    China's Hengduan Mountain Region lies more than 1500 kilometers southwest of Beijing. (It is at the epicenter of the devastating earthquake that struck Sichuan last month; see Science, 23 May, p. 996.) “The Hengduans are like islands,” says botanist John Simmons, the retired curator of Britain's Royal Botanic Gardens, Kew, who has collected extensively in the region with Yin. “They're isolated, with a range of ecological niches, from lower [600 meters] to higher elevations [more than 6000 meters],” spurring plants “to speciate like mad.” They house 3500 endemic species of plants, birds, reptiles, amphibians, and mammals, including giant pandas. That's made this part of China a prime target of plant collectors and botanists since the late 19th century. “Nearly every garden today” has a plant from there, says Simmons, most likely a rhododendron, primrose, or lily.

    But beginning in the 1950s and intensifying in the 1970s, loggers devoured the dense forests—part of the country's spurt to become an industrialized nation. The clear-cutting, which Yin and Liu witnessed throughout the Hengduans as they mapped and collected the region's flora, inflicted huge damage on the ecosystem. “Without a forest to protect the soil, we knew a flood would come,” says Yin.

    Ultimately, two catastrophic floods came, in 1981 and 1998, on the upper and lower portions of the Yangtze River. The second flood—a disaster that killed more than 1500 people, left millions homeless, and cost some $20 billion—finally spurred the Sichuan government to ban logging in its natural forests. The next year, China started to replant and preserve forest lands nationwide. These moves were “a major milestone,” says ecologist Jianguo “Jack” Liu of Michigan State University in East Lansing, whose analysis of the recovery will soon appear in the Proceedings of the National Academy of Sciences.

    However, the damage done before the ban was extensive. Some 20% of the region's plant species are now endangered, and much of the panda, takin, and golden monkey habitat was lost to logging. When the clear-cutting finally stopped, “roughly 35% to 40% of Sichuan's natural forests were gone,” says forest ecologist Mu Changlong, deputy director of the Sichuan Academy of Forestry in Chengdu. And in some parts of the province, more than 85% of the forest was cut, even in places that the government had identified as important for soil and water conservation. “The timber companies worked in areas that were the most accessible, with the largest trees and most biodiversity,” says Mu. “In some places, they logged everything up to 3000 meters [in elevation].”

    In theory, planters were supposed to follow close behind the loggers. “But the loggers were faster, so in many places the clearcuts weren't reforested, and now there are just bushes and sword bamboo,” which inhibit the growth of the original conifer forest, says Mu.

    Worse, in some areas, such as along the Upper Min River in the Hengduan's Min Mountain Range, the vertical terrain was so completely deforested and eroded that getting anything to grow now is “impossible,” says ecologist Wu Ning, CIB's executive director. “We're simply trying to stop the expansion of what has become a desert” by building fences to keep the soil from sliding into the river, says Wu. Sichuan has now reduced its soil erosion by 1.5 billion tons, but eroded sediments in the Upper Yangtze still clog so much of the river that only ships between 5000 and 8000 tons can navigate this far upstream.

    Efforts continue, as well, to bring back forests. Overall, 143,000 hectares have been replanted along the damaged Min River gorge, primarily with exotic red pine (Pinus tabuliformis), a fast-growing species from northern China. But 13,300 hectares of this total have been reforested with a CIB-recommended mix of native broadleaf and coniferous species, as well as exotics, in an effort to avoid the health and fire-prone problems associated with single-species stands, says Yin. And to further relieve the pressure on native forests, the government plans to plant 2.5 billion trees in plantations this year throughout the country. It wants to become self-sufficient in timber by 2050.

    China has also provided economic incentives to farmers. In 1999, it initiated the Grain for Green Program, paying farmers to plant trees and shrubs on agricultural land with more than a 25-degree slope, with the goal of increasing plant cover by 32 million hectares by 2010 (Science, 7 December 2007, p. 1556). “That helped the people, especially those who lost jobs because of the [logging] ban,” says conservation ecologist Ling Lin of the World Wide Fund for Nature in Chengdu. And under the 2002 Rural Contract Law, farmers now own the rights to what they grow, which “should help them better manage the land in a sustainable manner,” says Xu Jintao, an economist at Beijing's Peking University. They can use the rights as collateral, pass them on to their children, and even sell them.

    China's treasured forest.

    The Hengduan Mountain Region contains the world's most diverse temperate forest.

    Moreover, in Sichuan, 75,000 square kilometers—about 15.5% of the province—of the natural forests are now safe in approximately 150 reserves or provincial and national parks, such as Siguniang (Four Sisters Mountain) National Park in the Qionglai Mountain Range. Two, Wolong Nature Reserve and Jiuzhaigou, where dense stands of native conifers edge more than 100 turquoise-colored lakes, are now World Heritage Sites.

    Fragile progress

    But concerns remain. China is now the world's largest manufacturer and exporter of wood products and has an enormous demand for the raw material. “The price of timber in China has increased two to three times over the last 5 years,” says Xu. Should the Grain for Green Program subsidies end, the temptation to cut the trees will be huge, warns Jack Liu. That's why, he and others say, China has recently extended the program and may do the same for the logging ban, at least for the natural forests. In the more remote regions of the Hengduans, people continue to cut trees for fuel and home construction, notes Harvard University botanist David Bouffard.

    The preserves themselves are facing another threat. “From the first day, these parks have been very popular,” says Yin, adding that they now draw millions of visitors annually. “It is too many.” More than 30,000 tourists can wander through Jiuzhaigou on a single day, despite requests from Yin and other ecologists to limit that number. “It's just a sea of people at the entry gate some mornings,” adds Jack Liu, noting that in Jiuzhaigou the only panda “sightings” are occasional scat. “There are just too many people and too much noise,” he complains.

    With a government grant in hand, Yin is now looking for ways to relieve the pressure. One solution may come from his team's 2006 proposal to connect two of the largest panda reserves in the Min Mountain Range by restoring some 30 to 40 square kilometers of panda habitat.

    Yin and others are also helping the State Forestry Administration set zoning limits, denoting some park areas for mass tourism and others for more individual experiences such as wilderness backpacking, and even closing a few altogether. One morning at Siguniang, where a zoning system now exists, Yin walked contentedly along a boardwalk trail through a wetland meadow, pointing out the endemic flowers and shrubs he and his professor had collected more than 30 years ago. On either side of the meadow rose snow-capped peaks, their flanks thick with the classic mixed forest of western Sichuan. “The logging tigers didn't get this far up the valley because there was no road,” says Yin. There is a paved road now. But with luck, the zoning system will help Siguniang's forests survive the growing number of China's tourist tigers.
