News this Week

Science  09 Dec 2005:
Vol. 310, Issue 5754, pp. 1594


    NASA Starts Squeezing to Fit Missions Into Tight Budget

    1. Andrew Lawler

    While the public is focused on NASA's attempts to prepare the still-grounded space shuttles for a mid-2006 launch, the agency's science program is also in the midst of a painful, though less visible, overhaul. In the past few weeks, NASA managers have decided to delay by 2 years the flight of a new space telescope and halted work on an asteroid mission that is nearly on the launch pad, and they are reconsidering plans to revive a mission to Jupiter's moon Europa. “We've got to get everything under control,” says Mary Cleave, NASA's new science chief. “We're overcommitted.”

    Meanwhile, another part of the agency has begun to cancel a slew of life sciences experiments slated for the international space station, despite a National Academies' report released 28 November that criticized NASA's scaling back of research on the orbiting base. “We're refocusing on near-term needs,” explains NASA exploration chief Scott Horowitz. The agency intends to slice in half the roughly $1 billion it spends annually on biological and physical sciences research; the other half will be devoted primarily to ensuring the health of astronauts on lunar and Mars missions, which are the centerpiece of President George W. Bush's plan to return humans to the moon and send them on to Mars. The vast majority of exploration funding will be devoted to building new launchers.

    Dawn breaks?

    NASA has stopped work on Dawn and its ion-propulsion system to reach two asteroids.


    NASA managers laid out their plans and problems at a meeting last week of the newly reconstituted NASA Advisory Council, which gathered in Washington, D.C., for the first time since Administrator Michael Griffin took over the agency this spring. NASA's $5.5 billion science budget grew slightly in 2006 and is likely to win a modest increase in the president's upcoming 2007 request to Congress. But that budget can't keep up with rapidly rising costs for science projects such as the James Webb Space Telescope (JWST), the successor to the Hubble Space Telescope.

    A technical and scientific review this fall managed to reduce significantly the $1 billion overrun on the $3.5 billion JWST (Science, 2 September, p. 1472). Still, a host of problems, including delays in winning U.S. government approval for a European launch and difficulties in instrument design, forced Cleave last month to postpone the launch date from 2014 to 2016. That delay, in turn, ate up the savings from the fall review. “The cost is still $4.5 billion,” says JWST project scientist Eric Smith. The additional funding, Cleave told Science, must be found within the agency's already-strained astronomy and astrophysics budget.

    The fate of a proposed mission to Europa—already canceled twice because of its high cost—is now again in question. Planetary scientists are eager to explore the icy moon, and a 2002 National Academies' panel rated it the top planetary priority in its decadal plan. Griffin promised shortly after taking the job last spring that he would press for a conventional mission following cancellation of plans for a nuclear-powered spacecraft that would orbit Jupiter's moons. Cleave told the council, however, that budget pressures might yet again delay the probe.

    “We wouldn't necessarily say our next outer planet mission is to Europa,” Cleave later told Science. Instead, she would prefer to hold a competition to see if scientific interest in the mission has shifted since the decadal report. But given the stresses on the existing science budget, other agency officials and outside scientists say privately that it would be difficult to start an expensive new outer planets mission before 2008.

    Even NASA projects nearing their launch dates are getting extra scrutiny. Cleave recently halted all work on Dawn, a $373 million spacecraft set to blast off next summer on a mission to examine two large bodies in the asteroid belt. Technical and managerial troubles and a resulting spike in costs attracted the attention of NASA headquarters' managers this fall, and Cleave ordered the Jet Propulsion Laboratory in Pasadena, California, to cease work pending a detailed independent assessment to be completed next month. Ironically, the project is part of the Discovery program that is intended to launch missions relatively cheaply and quickly.

    Probes already aloft also face a squeeze. On 6 December, NASA shut down the Upper Atmosphere Research Satellite, launched in 1991 to measure ozone, winds, and temperature. In October, it abandoned the Earth Radiation Budget Satellite after more than 2 decades in orbit. NASA and outside reviewers are considering the fate of a host of planetary and astrophysics spacecraft as well. In addition, a 6-month delay in launching the Earth probes CloudSat and CALIPSO will cost at least $15 million. Cleave has also ordered cuts to the future Mars program.

    Lunar research, however, is almost certain to receive more funding in coming years, given the White House focus on the moon. Horowitz's office will launch a lunar orbiter in 2008 to reconnoiter for possible landing sites, and Cleave hopes to include a bevy of scientific instruments on the flight. She told the council that supporting any more lunar research, however, would leave less for other areas of science.

    Although Cleave is an ecologist by training, she has no oversight of NASA's biological and physical sciences. That portfolio belongs to Horowitz, who is drastically reducing funding for a host of experiments designed for the space station. He's already canceled at least half of NASA's current life sciences grants and contracts. Much of the research planned for the station has little connection with Bush's plan to return astronauts to the moon and continue to Mars, Horowitz told Science.

    That view is at odds with a new report from the National Academies, which warns that abandoning fundamental biological and physical research “is likely to limit or impede” research into the impact of the space environment on astronauts. The panel notes that “once lost, neither the necessary research infrastructures nor the necessary communities of scientific investigators can survive or be easily replaced.” The panel argues that NASA needs a detailed plan to use the station for a host of research endeavors, including studies on the effects of radiation on biological systems, loss of bone and muscle mass during space flight, fire safety, and flow and heat-transfer issues. Several Democratic lawmakers are critical of the cuts in life sciences research, but staffers and lobbyists say that their voices are unlikely to rescue the projects.

    Stretched thin.

    NASA's Mary Cleave says space science is “overcommitted.”


    Even if Horowitz were to reverse his decision, NASA's plans to halt shuttle flights by 2010 would make it difficult to carry out some research on the space station. William Gerstenmaier, head of NASA's space flight efforts, told the agency's advisory council that the shuttle is needed to return experiments and materials to Earth. Without those flights, he says, “you would have to do more in situ research.” That research, in turn, would require more complex equipment and crew time.

    NASA needs an additional $1.4 billion to redesign space station parts and buy spares so that the station can keep operating without the shuttle, Gerstenmaier added. That money is part of an estimated $6 billion in additional funding for space flight that is not yet included in NASA's future budgets. On top of that long-term fiscal crisis, the agency expects to receive from Congress less than half of the $760 million in damages its facilities suffered from Hurricane Katrina. Congress may also impose an across-the-board cut to all agency budgets to cover hurricane costs, although that didn't stop it from inserting nearly $300 million in pork-barrel projects into NASA's $16.46 billion budget. Such external pressures spell additional trouble for a science effort already suffering from its own excesses.


    Landmark Paper Has an Image Problem

    1. Gretchen Vogel

    New questions about scientific validity are dogging South Korean cloning researcher Woo-Suk Hwang and his colleagues. On 4 December, Hwang notified Science editors that a figure in online material that accompanies his group's heralded 2005 paper on the derivation of stem cells from cloned human embryos contained duplicate images. The problem follows close on the heels of Hwang's admission that, despite his previous denials, two members of his lab had donated oocytes for his group's experiments and others had been paid for their donations (Science, 2 December, p. 1402).

    Katrina Kelner, Science deputy editor for life sciences, says it appeared that the duplicate panels were not part of the original submission but had been sent in response to a request for high-resolution images after the paper had been received. “From the information that we have so far, it seems that it was an honest mistake,” she says. “We have no evidence that there was any intent to deceive.”

    In May 2005, Hwang and his colleagues reported that they had produced 11 new human embryonic stem (ES) cell lines that carried the genetic signature of patients with diabetes, spinal cord injury, or a genetic blood disorder (Science, 20 May, p. 1096). The paper not only seemed to validate the group's claim a year earlier that it had created a single cell line from a cloned human embryo, but it also reported a huge increase in efficiency for the technique. In the first paper, researchers said they produced one cell line from 230 tries, but in the second, they claimed they produced a cell line in about one of 15 attempts.

    The figure in question is supposed to show patterns of expression for a range of ES cell markers in the 11 cell lines. But it contains four pairs of apparently duplicated images, even though they are labeled as showing different cell lines. Gerald Schatten of the University of Pittsburgh in Pennsylvania, who was the corresponding author on the paper and provided the high-resolution images to Science, declined to comment. A university spokesperson said that the university's office of research integrity had begun an investigation. Schatten and his lab members are cooperating, she said, “and are carefully going through the data we have access to, to determine how it could have happened.” She said Schatten would not comment during the investigation, which might last 6 months.

    Rudolf Jaenisch of the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts, says he still has confidence in the reported results. “This is an extremely important study, and I have no reason whatsoever to question any of the published data,” he says.

    Kelner says the journal will issue a correction once the editors are satisfied they understand what had happened.


    Cambridge University Reins In Faculty Patents

    1. Eliot Marshall

    CAMBRIDGE, U.K.— Computer scientist Ross Anderson would like to be free to patent his own inventions and make private deals, even though he's a university employee. The last thing he wants is “bureaucrats” getting in his way. So far he's been lucky: He works for the University of Cambridge, which has given him and other staff members tremendous leeway—even permitting them 100% ownership of some patents. Very few other universities allow such latitude. But this week, Cambridge is pushing a new policy that would curtail some of that independence and require all inventors on staff to let the university own and more actively manage staff patents.

    Hands off.

    Ross Anderson doesn't want Cambridge University to own his inventions.


    Anderson, spokesperson for a group called the Campaign for Cambridge Freedoms (CCF), sees the new rules as intrusive. He and allies such as molecular biologist Mike Clark, whose income from monoclonal antibody discoveries is a major revenue source for the university, are fighting to retain some of the old ways of doing business. A campus-wide vote this month will determine which side prevails.

    The university has been advancing its claims on intellectual property (IP) for several years, prompting fierce debates at every turn. It is setting up a management group called Cambridge Enterprise and wants rules that apply consistently across the board, says university deputy vice chancellor Anthony Minson. So this week, the university sent out ballots to roughly 4000 eligible academic voters to get their approval for its IP rules. Academics will also get to vote on an opposing scheme from CCF that would block some aspects of the university's plans. For example, the dissenters do not want the university to be able to assert ownership of privately sponsored research that is not restricted by the donor. And their plan could prevent the university from intervening in some intramural IP disputes.

    A lot of money is at stake, but both sides stress lofty principles. “This is not primarily about money,” says Minson, a virologist who helped draw up the university's proposal. “It's about accountability” to taxpayers who help fund the facilities where the research takes place. The goal, he said in an e-mail, is “to achieve fairness by equal treatment of all staff regardless of funding source … and to ensure that the university has the information” it may need to “resolve potential conflicts” among staff and students.

    In contrast, Anderson says the battle is really about academic freedom and creativity. Cambridge is “the last university in the U.K. where the academics own [their own patents],” he says. “If the university locks down IP, it will become much more difficult for academics to spin out” ideas into commercial ventures.

    Minson disagrees. The university has promised its staff what he believes are “more generous terms than any other university in the U.K.” Although the administration intends to claim ownership, it will let independent-minded inventors such as Anderson and Clark do the patenting and negotiate deals themselves if they want to. And he says a sliding-scale formula would return most income to the inventor: 90% below £100,000 a year, dropping to around 30% at £200,000. Because the scheme is flexible, Minson says, “I just don't accept” the argument that a “bureaucracy will sit heavily” on Cambridge's creative spirits.
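    The article gives only two anchor points for Minson's sliding scale—90% of income below £100,000 a year and about 30% at £200,000. A minimal sketch of how such a schedule might work, assuming linear interpolation between those two points (the actual intermediate schedule is not specified):

```python
def inventor_share(annual_income_gbp):
    """Hypothetical Cambridge sliding scale: the inventor keeps 90% of
    patent income below 100,000 GBP/year, falling to about 30% at
    200,000 GBP. The linear ramp between the two published anchor
    points is an assumption, not the university's actual formula."""
    if annual_income_gbp <= 100_000:
        return 0.9
    if annual_income_gbp >= 200_000:
        return 0.3
    return 0.9 - 0.6 * (annual_income_gbp - 100_000) / 100_000
```

Under this sketch, an inventor earning £150,000 a year would keep roughly 60% of that income.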

    The referendum is expected to draw about 1500 votes. Anderson says 84 academics have publicly endorsed the CCF amendments, and he believes there are another “several hundred” solid supporters. But university leaders have been selling their plan aggressively within the ranks. The dissenters concede that they're facing an uphill battle.


    Europe Trumpets Successes on Mars and Titan

    1. Govert Schilling*
    * Govert Schilling is an astronomy writer in Amersfoort, the Netherlands.

    PARIS— Less than a week before it has to persuade European governments to approve its budget for the next several years, the European Space Agency (ESA) has been parading some of its achievements in 2005. These include the first batch of published results from the Huygens probe to Saturn's enigmatic satellite Titan (the most distant landing ever accomplished) and tantalizing glimpses of underground water from the Mars Express mission's ground-penetrating radar—the first subsurface view of another world. “There have been many nail-biting moments, but 2005 has been a great year for European space science,” says ESA science director David Southwood.

    Alien world.

    Peaks and dark plains on the surface of Titan were snapped by Huygens during its descent on 14 January 2005.


    On 14 January, after hitching a 7-year ride on NASA's Cassini spacecraft bound for Saturn, Huygens descended through Titan's murky atmosphere and landed on an alien but weirdly familiar world in which the rocks are made of water ice and monsoons of liquid methane rain down from the orange sky. Many of Huygens's results have already been released (Science, 21 January, p. 330; 28 January, p. 496; 13 May, p. 969; 23 September, p. 1985), but the first comprehensive set of scientific papers, published last week on Nature's Web site, fills in the details. They indicate that Titan—which is larger than the planet Mercury—is a frigid world sculpted by intermittent downpours of methane that carve out valleys and leave tarlike puddles of hydrocarbon goo. Huygens also found evidence for ammonia-spewing cryovolcanoes, detected bolts of lightning, measured wind patterns in the atmosphere, and analyzed the organic-rich airborne dust particles, as well as the reddish surface material.

    At a 30 November press conference here, Jonathan Lunine of the University of Arizona's Lunar and Planetary Laboratory in Tucson also presented a detailed comparison of small-scale Huygens descent images and wide-angle Cassini radar maps, obtained during the orbiter's Titan flyby on 28 October. “We've now been able to pinpoint the Huygens landing site to within a few kilometers,” he says. It's “kind of surprising,” Lunine adds, that the dark, hydrocarbon-rich areas in the Huygens images are also dark in the radar maps, indicating a smooth terrain—very different from the icy cobbles seen by Huygens at its landing site. Unfortunately, Cassini won't have another opportunity to radar-map the landing site until 2008, says Lunine.

    In the other ESA success story, another radar instrument, Italy's Mars Advanced Radar for Subsurface and Ionospheric Sounding (MARSIS) on board Mars Express, has provided scientists with a first peek beneath the martian surface (Science Express, 30 November 2005). Although Mars Express arrived at the Red Planet 2 years ago, MARSIS was not deployed until last summer because of concerns that unfurling the long radar booms might damage the spacecraft. So far, team member Jeffrey Plaut of NASA's Jet Propulsion Laboratory in Pasadena, California, says he's “absolutely thrilled by its performance.” For instance, MARSIS was able to detect radar reflections from the subsurface base of the ice layer close to the planet's north pole, indicating that the deposit is about 1.8 kilometers thick and contains less than 2% dust, and that the underlying crust must be very strong.

    Ice-rich material may also fill a 250-kilometer-wide buried crater found in Chryse Planitia at Mars's midnorthern latitudes. “The find of subsurface craters is in itself not surprising,” says planetary geologist Michael H. Carr of the U.S. Geological Survey in Menlo Park, California, “but if they are filled with ice, that would be a very interesting discovery, since we don't know where the water went that was present on Mars in its early history.” The search for subsurface liquid water may have to wait until next spring, when Mars Express is in a better orbit for detailed radar observations of the planet's low-lying Hellas Basin, where water may be closer to the surface.

    France's OMEGA instrument, which maps martian minerals from orbit, has confirmed that the Red Planet must have been wet for extended periods in the distant geologic past. In last week's issue of Nature, OMEGA principal investigator Jean-Pierre Bibring of the Institute of Space Astrophysics in Orsay, France, and his colleagues describe how the device found claylike minerals known as phyllosilicates in locations where erosion has exposed very ancient terrain. They date back to an era when liquid water was abundant, some 3.8 billion years ago. “These spots are the most favorable to have hosted the possible emergence of life,” says Bibring. “I hope the future European Exo-Mars astrobiology lander will go there.”

    Gerhard Neukum of the Free University in Berlin, who heads the camera team of Mars Express, agrees that the planet went dry globally about 3.5 billion years ago. “But locally and regionally, there has been glacial and fluvial activity every few hundred million years or so, maybe until the present time,” he says. New images from the High-Resolution Stereo Camera show clear evidence of young glaciers in Deuteronilus Mensae and recent lava flows on the flanks of the Olympus Mons shield volcano. Says Neukum: “Mars is not dead.”

    Whereas the Huygens mission was over within a few hours of touchdown (its batteries were only designed to last a short time), ESA recently extended the Mars Express mission until November 2007. But according to project scientist Agustín Chicarro of ESA's R&D center in Noordwijk, the Netherlands, the craft's solar-charged batteries will last for at least five more years, and there's enough onboard propellant for another 2 decades. “ESA has never shut down any mission because of money constraints,” says Chicarro. “Let's hope they'll continue the tradition.”


    U.K. Doubles Stem Cell Funding

    1. Michael Schirber

    Heeding warnings that it risks falling behind, the U.K. government announced on 1 December that it will increase its funding of stem cell research from £50 million to £100 million ($85 million to $170 million) over the next 2 years. But even more is needed if the country is to compete with places such as California, which pledged $3 billion over the next decade, says a new report by the government-appointed U.K. Stem Cell Initiative.

    “It's very encouraging,” panel chair John Pattison, a former Department of Health director, says about the government's commitment. However, like the panel, which recommends the United Kingdom spend at least £600 million ($1 billion) between 2006 and 2015, he urges the government to do more.

    The United Kingdom is already well positioned, the panel notes. It has been home to several important stem cell advances, including the first cloned mammal, Dolly the sheep, and the world's first stem cell bank. And it has a strict but facilitating regulatory environment. “The U.K. has enthusiastically supported growth of the emerging areas of both embryonic and adult stem cells,” says stem cell biologist Roger Pedersen of the University of Cambridge. Both he and the panel emphasize that long-term investment is needed to keep talented researchers from going to the United States, Singapore, or South Korea.

    Funding is also needed to reduce the lag between scientific advances and development of medical treatments. Funding agencies give this sort of translational research lower priority, Pattison says. The report recommends that the government establish a public-private partnership to develop stem cell tools for testing the toxicity of drugs. “We have made a good start here in the U.K.,” Pattison says, “but additional funding is needed to capitalize on that early investment.”


    ERC Moves Forward Despite Budget Impasse

    1. Gretchen Vogel

    BERLIN— The long-awaited European Research Council (ERC) now has three veteran science chiefs to guide the agency through its birth.

    ERC, designed to fund basic science across Europe, is supposed to award its first grants in 2007. However, high-level disagreements over the E.U. budget have kept scientists guessing about how hard a hit the fledgling body's proposed €1.5 billion yearly budget might have to absorb. Uncertainty notwithstanding, ERC's scientific council last week elected Fotis Kafatos as chair. Kafatos, a molecular entomologist at Imperial College London, led the European Molecular Biology Laboratory from 1993 to 2005 and is credited with revitalizing one of Europe's top research institutions. Rounding out the triumvirate are vice-chairs Helga Nowotny, an expert on science and society at the Wissenschaftszentrum in Vienna, and physicist Daniel Esteve of the French Atomic Energy Commission's CEA Saclay center.

    This big?

    Fotis Kafatos, chair of the European Research Council scientific council, hopes political wrangling won't shrink the new agency's budget.


    The three, along with the rest of the science council, are well equipped to fend off political attempts to divert ERC funds to particular fields or countries, says Frank Gannon, president of the European Molecular Biology Organization in Heidelberg. “All signs are that the process is working the way the scientific community wants it to,” he says.

    In the meantime, U.K. Prime Minister Tony Blair put forward a budget proposal on 5 December that did nothing to ease researchers' fears. Earlier this year, E.U. officials proposed doubling the overall research budget, to just over €10 billion ($12 billion) per year. But as political disagreements escalated, those proposals took a hit; Blair's compromise would scale back the research budget to closer to €6 billion yearly. Both Kafatos and Nowotny say that to be viable, the ERC will need at least €1 billion per year. European heads of state will meet next week to try again to seal a deal.


    Universities May Have to Pay More in Support of Graduate Training

    1. Yudhijit Bhattacharjee

    National Institutes of Health (NIH) officials regularly say that training the next generation of biomedical scientists is a high priority for the $28 billion agency. But last week at a town hall-style meeting in Bethesda, Maryland, they conveyed a different message to universities: Pony up more of your own resources to shoulder the costs of training, or face a decline in the number of graduate students and postdocs that NIH supports.

    The meeting explored a fiscal crunch facing the Ruth Kirschstein National Research Service Award (NRSA) program, which supports more than 17,000 Ph.D. students and postdocs, primarily through institutional training grants. NIH currently provides the major share of trainees' tuition, paying the first $3000 plus 60% of the remainder, and covers a share of each trainee's health insurance. But faced with steadily rising tuition and health care costs, along with a flat budget, NIH says it must transfer more of the burden to universities or reduce the number of NRSA trainees. If the program's funding doesn't grow, the current formula would result in a loss of “4000 slots by 2015,” says NIH deputy director Norka Ruiz Bravo (see graphic).

    To ease the problem, the agency is considering three options. The first would retain the existing formula but cap the reimbursable amount at $16,000 to $18,000, roughly the current average subsidy. The second option would provide a fixed allowance—again capped at $16,000 to $18,000. The last would continue the current policy, staying on budget by squeezing both the number of institutional grants and the number of trainees per grant.
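    The current reimbursement formula and the effect of the proposed ceiling can be sketched numerically. (The exact cap value, given in the article only as a $16,000-to-$18,000 range, and the treatment of tuition below $3,000 are assumptions here, so the cap is left as a parameter.)

```python
def nrsa_tuition_subsidy(tuition, cap=None):
    """Sketch of the NRSA tuition formula described above: NIH pays the
    first $3,000 of tuition plus 60% of the remainder. `cap` models the
    proposed reimbursement ceiling (option one); its exact value within
    the $16,000-$18,000 range is not fixed in the article."""
    if tuition <= 3_000:
        subsidy = tuition  # assumed: NIH covers small bills in full
    else:
        subsidy = 3_000 + 0.6 * (tuition - 3_000)
    return min(subsidy, cap) if cap is not None else subsidy

# A $30,000 tuition bill under the current formula vs. a $17,000 cap:
print(nrsa_tuition_subsidy(30_000))              # 19200.0
print(nrsa_tuition_subsidy(30_000, cap=17_000))  # 17000
```

The gap between those two numbers is the share of costs that would shift from NIH to the university under the capped options.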

    Some call for NIH to shift funds into NRSA from other areas. The choices on the table reflect the lack of “an appropriate distribution and management of training and educational funds” within the NIH budget, believes Glen Gaulton of the University of Pennsylvania School of Medicine in Philadelphia. “All three are lousy options,” says Robert Simoni, head of biological sciences at Stanford University in California, who nevertheless supports the status quo.

    Fewer trainees.

    NIH projects a loss of 4000 NRSA awards over 10 years if spending remains level.


    NIH officials defend flat-lining their investment in training. “Prudent policy requires an appropriate balance between training budgets and the funds available for research support,” says Ruiz Bravo. But she acknowledges that “an annual loss in training positions would threaten the stability of ongoing programs and impede consideration of training programs in new and emerging scientific fields.”

    The proposed ceiling on tuition would force universities to shift funds “away from investment in new investigators and research equipment,” complains Linda Dykstra of the University of North Carolina, Chapel Hill. Speaking on behalf of the Association of American Universities, whose 62 members are a mix of public and private institutions, Dykstra favored retaining the current formula and reducing the number of trainees. Most participants from public universities, however, came out in support of a cap, a change that presumably would affect them less than the most expensive private schools. Based on those who spoke, the audience on the NIH campus appeared evenly divided among the three options.

    NIH expects to make a decision on NRSA's future by spring.


    Young Scientists Get a Helping Hand

    1. Yudhijit Bhattacharjee

    Getting that first faculty job represents the end of one arduous journey for a biomedical scientist—and, given the difficulties and cost of establishing a new lab, the start of another. Last week, the National Institutes of Health (NIH) rolled out three initiatives intended to smooth that transition to becoming an independent researcher.

    One of them, expected to be finalized by spring, is a 5-year award for postdocs that will provide initial salary support and then convert to a full-fledged research grant once the scientist gains a faculty position. The other two are already being tested: an independent investigator grant program that does not require applicants to submit preliminary data and a process to speed up the resubmission of R01 grant applications by new investigators who fail on their first attempt. NIH officials hope that the three initiatives will help young scientists get their labs up and running more quickly—a goal agency Director Elias Zerhouni calls his “number one priority.”

    At $250,000 a year, the new transition awards will be more than three times larger than a typical career development award, and they come with a matching amount for institutional overhead, compared with the 8% indirect cost rate allowed under the career awards. The goal is to give universities an added incentive to recruit young investigators and provide newly hired faculty members with some breathing room before applying for their first major grant, says Story Landis, director of the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland.

    “If you came with this kind of dowry,” Landis says, “deans, even in troubled times, should be willing to take a chance [on you].” Biologist Thomas Cech, president of the Howard Hughes Medical Institute in Chevy Chase, Maryland, and chair of a recent National Research Council (NRC) report on fostering independence among young biomedical researchers, calls the award “a wonderful move forward.” Landis won't say how many awards NIH plans to give, although Cech says it should be at least 100.

    The NRC report inspired another of the initiatives: a new grant competition at the National Institute of Environmental Health Sciences (NIEHS) for investigators lacking enough preliminary data for a full-fledged NIH proposal. NIEHS plans to give out six such grants next year, and other institutes may join in.

    A third effort, by the Center for Scientific Review, the NIH unit that evaluates grant applications, aims to speed up the turnaround time for new investigators so they can resubmit a revised application by the next thrice-yearly deadline. Beginning in February, 40 study sections will meet earlier than usual to review submissions from first-time applicants and provide written evaluations within a week. Applicants will also receive 20 extra days to file a resubmission.


    Girding for the Next Killer Wave

    1. Richard Stone,
    2. Richard A. Kerr*
    * With reporting by Pallava Bagla in New Delhi.

    A year after the Indian Ocean tsunami, nations along the coast have created the framework for a regionwide warning system

    BANGKOK— At 10:42 p.m. on Sunday, 24 July, a strong undersea earthquake rattled the Nicobar Islands, 660 kilometers west of Thailand. Minutes after the 7.3-magnitude quake struck, Thailand's National Disaster Warning Center (NDWC) swung into action. Director Plodprasop Suraswadi appeared on national television to issue the country's first-ever tsunami watch: If the quake generated a tsunami, he warned, the wave would hit the resort island of Phuket at 12:12 a.m.

    The advisory, broadcast on all Thai channels, was not an evacuation order. But with memories of the devastating 26 December 2004 Indian Ocean tsunami still fresh, hundreds of people on Phuket and along the Andaman Sea coast of the Malay Peninsula grabbed what they could and fled to higher ground. A crucial piece of data came in just before midnight: Off the Similan Islands, 50 kilometers from the Andaman coast, a tide gauge measuring sea level had barely bobbed. There would be no tsunami. Suraswadi took to the airwaves to sound the all clear.

    Big heave.

    The 26 December 2004 quake exposed coral off Simeulue Island in Aceh province. Dudi Prayudi of the Indonesian Institute of Sciences and Aron Meltzner of Caltech measure the uplift at 1.2 meters.


    If the NDWC had been operational last year, thousands of lives might have been spared. The Indian Ocean tsunami killed 5396 people in Thailand; another 2951 people are still listed as missing. Warnings could have saved countless lives elsewhere. Some 230,000 people died in a dozen nations, including 168,000 in Indonesia's Aceh province at the tip of the island of Sumatra.

    The lesson in ill-preparedness has sparked a mad dash to create a tsunami warning system for the Indian Ocean. As the first anniversary of the disaster approaches, an alarm network is beginning to emerge—a loose web of deep ocean sensors, tide gauges, and seismic stations operated by individual countries, along with mechanisms for sharing data and disseminating public warnings. Last month, for example, Indonesia, the country deemed most vulnerable to the next big Indian Ocean tsunami, deployed two sea-floor pressure sensors and associated buoys, the vanguard of a 10-sensor network. “We want to show the world that we are ready,” says Jan Sopaheluwakan, deputy chair of earth sciences at the Indonesian Institute of Sciences in Jakarta.

    By establishing warning centers, Thailand and other countries have begun to fill a lethal void. They will issue tsunami advisories more often, and in most instances the resulting wave will be puny or nonexistent—ratcheting up anxiety and prompting people to flee the seaside needlessly. “People are going to have to be understanding about this,” says NDWC's Cherdsak Virapat, director of Thailand's International Ocean Institute in Bangkok.

    Asleep at the wheel

    The Indian Ocean tsunami last December caught governments woefully off-guard. The trigger was a monster magnitude-9.3 earthquake centered west of Aceh, on the northwestern tip of Sumatra. The quake struck at 7:59 a.m. Indonesia time, and within 40 minutes a wave, the first of three destructive moving mounds of seawater, had inundated the city of Banda Aceh. Nearly 2 hours after the earthquake, the first wave barreled into Phuket and neighboring seaside provinces of Thailand. It was a Sunday morning; most government offices were closed. Staff in a meteorological office in northern Thailand saw the seismic report but had no idea that a tsunami might be imminent, says Virapat. “Every year, someone would ask, 'What should we do if there is a tsunami?'” The possibility seemed remote, he says.

    Minutes later, the Nicobar Islands, including an Indian Air Force base at Car Nicobar, were pummeled. It took another 90 minutes for the tsunami to travel across the Bay of Bengal. But no one sounded the alarm, and the waves claimed 15,000 lives in India and 31,000 in Sri Lanka.

    Stunned by the realization that the human toll need not have been so high, representatives of Indian Ocean nations met in Bangkok last January to begin planning for a tsunami alert system. Discussions bogged down over who would host a regional warning center. By spring it was clear that each country would establish its own center, although the Intergovernmental Oceanographic Commission of UNESCO was invited to coordinate an Indian Ocean Tsunami Warning and Mitigation System, the subject of an IOC meeting next week in Hyderabad, India. It is expected to cost $200 million to bring the system online over the next few years.

    IOC is counting on five nations—Australia, India, Indonesia, Malaysia, and Thailand—to cover the entire Indian Ocean, with other nations enhancing the coverage. “No single nation can protect itself or provide protection to others alone,” says IOC executive secretary Patricio Bernal. Real-time data will stream into one or more “sub-regional centers,” he says, where it will be rapidly processed and fed back to national warning centers, which would decide on their own whether to issue tsunami advisories to their citizens. India continues to resist sharing real-time seismic and tidal data, out of concern that certain information could compromise its nuclear weapons program (see sidebar). Nevertheless, a basic Indian Ocean-wide system is expected to be in place by July 2006, says physical oceanographer William Erb, head of IOC's office in West Perth, Australia. More advanced assets, such as the deep-ocean tsunameters, will come later.

    A hazardous way ahead

    As governments gear up to cope with the next tsunami, scientists have pieced together a vivid picture of the shattered Sunda fault off the island of Sumatra—and an idea of what could be in store for the region.

    The December quake's 1300-kilometer-long offshore rupture shunted stress southward beneath the sea floor, prompting seismologists to warn that the section of fault adjacent to Sumatra could be the next to fail. No one knew how close to failure that segment was, but geophysicists John McCloskey, Suleyman Nalbant, and Sandy Steacy of the University of Ulster in Coleraine, Northern Ireland, warned in the 17 March issue of Nature that the fault had not broken since 1861. That was enough time to build up energy for a sizable earthquake. On 28 March, it struck at a hefty magnitude 8.7.

    As in December, the region was unprepared. The U.S. National Oceanic and Atmospheric Administration's (NOAA's) Pacific Tsunami Warning Center (PTWC) in Ewa Beach, Hawaii, registered the earthquake 8 minutes after it occurred and issued a tsunami bulletin 11 minutes later. Without any deep ocean sensors or tide gauges off Indonesia, “it took hours to determine if, in fact, [the earthquake had] created a tsunami,” notes David Johnson, director of NOAA's National Weather Service. The bang ended with a whimper: The wave recorded at Cocos Island was just 23 centimeters. The tsunami was trivial in large part because the quake had heaved the sea floor upward beneath islands and surrounding shallow waters, not in deep waters where motions can spawn massive waves (Science, 15 April, p. 341).

    To catch a wave.

    Last month, the first two of 10 pressure sensors were deployed off Sumatra. DART buoys will also listen in on the Sunda Trench and other seismic danger zones.


    Now the Ulster group, joined by paleoseismologist Kerry Sieh of the California Institute of Technology in Pasadena, is warning that the risk is moving southward. The next section of fault down the line—from 1°S to 5°S, offshore of the Sumatran city of Padang—could well be poised for disaster. This segment last failed in 1833; the accumulated stress could drive a quake larger than magnitude 8.5. A subsequent tsunami would threaten a million people along 500 kilometers of low-lying Indonesian coast.

    New findings underscore the risk. Earlier this week, at the fall meeting of the American Geophysical Union (AGU) in San Francisco, California, the Ulster group, with colleagues at the National Institute of Geophysics and Volcanology in Rome, reported preliminary computer simulations of possible south Sumatra tsunamis. They first modeled a range of possible earthquakes of magnitude 8.0 to 9.0 and then used the resulting sea-floor movement to drive a model of tsunami wave generation. Initial results show that the coast from Padang south could be devastated.

    Elsewhere around the Indian Ocean, the tsunami risk from a massive quake off Padang is relatively low. The new simulations suggest that farther from Sumatra, most wave energy would be dissipated in the vast emptiness of the ocean. Here, the fault bends along the southward-facing Indonesian archipelago in such a way that a far-traveling tsunami would be directed away from December's hard-hit targets: Thailand, India, and Sri Lanka.

    Other stretches of the deep-sea Sunda fault are less worrisome. At the AGU meeting, seismologists Emile Okal and Seth Stein of Northwestern University in Evanston, Illinois, reported that, based on the behavior of similar faults around the Pacific, the continuation of the fault to the south off the Indonesian island of Java is not likely to generate a devastating magnitude-9 quake. And to the north of last December's break, the fault hasn't even produced magnitude 7s. “Our guess would be you're not going to have big, thrusting earthquakes there” of the sort that generate a tsunami, says Stein. Instead of the tectonic plate thrusting down into the mantle and shoving up the sea floor to generate a tsunami, he says, to the north the plates probably slide by each other San Andreas-style, without triggering tsunamis.

    Round-the-clock surveillance

    With south Sumatra identified as the area of high tsunami risk, experts are hoping to get a better fix on how far inland tsunamis of various sizes would run, while devising evacuation plans and reinforcing infrastructure. The models have “spurred us to get to work,” says Jose Borrero of the Tsunami Research Center at the University of Southern California (USC) in Los Angeles, who charted the ebb and flow of last December's tsunami based on his field surveys and satellite imagery (Science, 10 June, p. 1596). His team is now modeling inundation scenarios in the Padang region.

    Indonesia is taking the threat seriously. Padang's vulnerability is “bitter news” for the local population, says Sopaheluwakan. “To prevent Padang from becoming the next disaster,” he says, the government is working with local authorities to develop a comprehensive evacuation plan. If an earthquake of magnitude 6 or larger occurs in the Sunda Trench, an immediate evacuation order will be broadcast for any coastal area that a wave would strike within 30 minutes of the quake, Sopaheluwakan says.

    Indonesia won't rely solely on seismic signals in making a call on a tsunami. Last month, scientists deployed the first two sea-floor sensors of the German Indonesian Tsunami Early Warning System. The devices, whose development was spearheaded by the National Research Centre for Geosciences in Potsdam and the Leibniz Institute of Marine Sciences in Kiel, measure sea-floor vibrations and pressure changes in the water column. Data are transmitted by acoustic modem to a buoy linked by satellite to Jakarta. The system is designed to alert Jakarta within tens of seconds of an oncoming tsunami.

    After the crew on the Sonne, a German research ship, positioned the first sensor and buoy on the Sunda Trench southwest of Padang on 20 November, they made a port call in Padang. If a tsunami were heading there, the area would be tough to evacuate. “Only three streets lead out of the city to higher ground. On a normal day, those three streets are usually full to overflowing with traffic,” expedition scientist Ernst Flüh, a geophysicist at the Leibniz institute, noted in a Web log on the Deutsche Welle Web site. Locals he met were placing high hopes in the German sensors. “Over and over again we had to explain that one or two buoys do not make an early warning system,” he wrote.

    The second buoy and sensor set was deployed northwest of Padang on 24 November. The system won't be operational until another eight are installed over the next 2 years. They will run in a line off the coast from Banda Aceh to Bali, each separated by at most 200 kilometers. The German government is footing the system's €45 million bill.

    A network of deep ocean tsunami buoys operated by other countries will monitor the rest of the Indian Ocean. The U.S.-made Deep-Ocean Assessment and Reporting of Tsunamis tsunameters—each a buoy and an associated bottom pressure sensor—already serve as sentinels for the PTWC in Hawaii. They are the only such devices that have been “tried and tested,” notes IOC's Erb. At a price tag of $250,000 per buoy and a design life of 1 year, the network won't come cheap, nor will it come quickly: The U.S. factory that produces the buoys was inundated by Hurricane Katrina, so production is lagging, sources say. Thailand plans to buy two and have them in place in the Andaman Sea by early 2007. India expects to deploy up to a dozen, and Malaysia will place three more in the Straits of Malacca, the South China Sea, and the Sulu Sea.

    Some experts contend that investing heavily in high-tech tsunameters such as these, the sexiest and costliest components of the warning systems, is overkill. They say seismographs and tide gauges, coupled with heightened vigilance, are sufficient for most countries. But everyone wants new technology.

    Indian Ocean nations meanwhile are upgrading or adding seismic stations and sharpening their ability to map earthquake hazards and analyze data. Thailand, for example, plans to triple the number of its digital stations to 45 by the end of 2008. Countries are also installing digital tide gauges. Before the tsunami, Malaysia's shore-hugging gauges could not transmit data in real time. It is now installing six gauges on far-flung islands that will transmit data to Kuala Lumpur by satellite and increase warning times by minutes. Through IOC, the United States is kicking in $16.6 million over 2 years for these efforts, primarily in India, Indonesia, the Maldives, Sri Lanka, and Thailand.

    The most valuable legacy of the 26 December 2004 tsunami may be the national disaster centers that countries are setting up to monitor and act on the data that will be pouring in. India plans to have its tsunami early warning center in Hyderabad up and running by fall 2007. Thailand's NDWC, opened on 30 May, features a 24-hour operations room with live feeds of seismic, tide gauge, and other data from around the Indian Ocean and from Japan and the United States, banks of televisions tuned to news stations, and clusters of desks where analysts are primed to sound the alarm. “Now we have all the information we need to forecast tsunamis,” says NDWC geologist Passkorn Kunthasap.

    With input from PTWC experts, NDWC scientists have designed a simple schematic for making snap decisions. For offshore earthquakes registering 7.0 to 7.7 on the Richter scale, the center will issue a tsunami watch. A stronger earthquake will trigger a warning and immediate evacuation order. Thailand has recently erected three warning siren towers on Phuket and the peninsula, with plans for 62 more next year.
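The two-tier schematic described above amounts to a simple threshold rule. The sketch below illustrates it, using the magnitude thresholds reported in the article; the function itself is an illustration of the logic, not NDWC software, and the offshore check is an assumption about how the rule would be applied.

```python
# A sketch of NDWC's snap-decision rule: a tsunami watch for offshore quakes
# of magnitude 7.0 to 7.7, a warning plus immediate evacuation order above that.
def ndwc_advisory(magnitude: float, offshore: bool) -> str:
    """Return the advisory level for an earthquake of a given magnitude."""
    if not offshore or magnitude < 7.0:
        return "no action"
    if magnitude <= 7.7:
        return "tsunami watch"
    return "tsunami warning + evacuation"

# Under this rule, the 24 July Nicobar quake (magnitude 7.3) draws a watch,
# as it did in practice, while the December 2004 event (9.3) draws a full warning.
```

The appeal of such a hard-threshold rule is speed: it requires nothing beyond the magnitude and location already present in the first seismic bulletin, so an advisory can go out before any tide-gauge or pressure-sensor confirmation arrives.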

    An open question is whether the national centers “will have the resources and stamina to stay active and alert for what amounts to from now to eternity,” says Costas Synolakis, director of USC's Tsunami Research Center. If the centers are devoted solely to their raison d'être—watching for a tsunami that may not come for generations—political and financial support could melt away. “In several years, people would forget and get lax,” says Erb. IOC has been urging nations to broaden their mission to a number of natural hazards.

    Thai officials have taken that to heart. They hope NDWC will stimulate a more rapid response to flooding, which each year claims dozens of lives and inflicts about $750 million in economic losses, roughly equal to the damage to infrastructure and lost tourism revenue from last December's tsunami. But as the first anniversary of the deadly wave approaches, NDWC and its sister centers will at least have a palliative effect. “We're on watch 24 hours,” says Admiral Thaweesak Daengchai, NDWC's executive manager. “And we're not afraid anymore.”


    A Dead Spot for the Tsunami Network?

    1. Pallava Bagla

    NEW DELHI— The budding regional tsunami warning system in the Indian Ocean may get little useful information from one key partner: India. The Indian government insists it will not release seismic recordings in real time, because if it were to resume nuclear testing, the detailed seismic signatures would immediately be broadcast to the world. Officials have also told Science they will not share online tide-gauge data, out of concern that such information could aid an aggressor attempting an invasion by sea. Delays in pinpointing an earthquake's location or confirming wave propagation could delay a tsunami warning.

    India's status as data holdout contrasts with its commitment to creating the region's most ambitious warning center for tsunamis and cyclone-generated storm surges. Under a $30 million plan, India will increase the number of its tide gauges fivefold and more than triple its seismic stations from 51 to 170. The first of 17 new broadband seismic stations came online at Port Blair, capital of the Andaman and Nicobar Islands, last May. And India plans to deploy up to 12 tsunameters—Deep-Ocean Assessment and Reporting of Tsunamis (DART) buoys and sea-floor sensors that detect pressure changes in the water column—although it is not expected to share these readings in real time either. Data will feed into a nerve center in Hyderabad, planned to be operating by September 2007.

    Indian scientists predict that the new tools, coupled with inundation models under development at the National Institute of Oceanography in Goa, should reduce the time required to assess tsunami risk after an earthquake from 40 minutes to 10. To minimize false alarms, Indian officials say that a tsunami warning will be issued after a major quake only if a significant pressure increase is registered by a DART, once these are in place in the Bay of Bengal, the Arabian Sea, and the southern Indian Ocean.

    Sharper hearing.

    India has fired up its first new broadband seismometer, in Port Blair.


    India's reluctance to share data could come back to haunt it. India has refused to hook up its vaunted array of seismometers to the Global Seismographic Network, 128 stations that record temblors and listen for signatures of nuclear detonations to help verify compliance with the Comprehensive Test Ban Treaty, which India has not joined. The seismic network is crucial to quickly pinpointing a quake's magnitude and location— and for analyzing tsunami threats.

    Some Indian officials acknowledge a risk. “Our existing policy of not sharing online seismic data has to change,” says Valangiman Subramanian Ramamurthy, a nuclear scientist and secretary of the Department of Science and Technology. He says India is reassessing its relationship with international networks, and India may agree to divulge data on earthquakes greater than 5 on the Richter scale in “near-real time.” That would help, but near-real time equates to a roughly 40-minute lag as Indian experts process data before releasing it.

    Earlier this year, some tsunami experts were highly critical of India's policy (Science, 28 January, p. 503). But concerns have been eased by ongoing efforts to bolster seismic stations elsewhere in the region and by the prospect of DARTs managed by other countries. “I am less pessimistic now,” says Costas Synolakis of the University of Southern California in Los Angeles. “India's [seismic] recordings are not as essential for early warning, particularly for sources 'far' from India,” he says. Maybe not—if a killer wave doesn't come before the rest of the Indian Ocean states bring their new instrumentation on line.


    In the Wake: Looking for Keys to Posttraumatic Stress

    1. Richard Stone
    Compounding the tragedy.

    Anguish over loved ones lost in the December tsunami was one trigger for PTSD.


    BANGKOK— Last December's tsunami left a trail of shattered families and anguished survivors. Thousands of victims in the region are thought to suffer from posttraumatic stress disorder (PTSD), whose symptoms include flashbacks, panic attacks, amnesia, and out-of-body sensations. Now, in what's billed as the largest study of its kind, Thai researchers have embarked on a hunt for genes that may leave people vulnerable to PTSD.

    Most previous PTSD studies involved victims who had endured traumas of varying duration and intensity: witnessing torture, for example, or experiencing a bomb blast. This work has not nailed vulnerability genes as yet, although it has found candidates. An advantage of the Thai government-funded study, organized by the Thailand Center of Excellence for Life Sciences (TCELS), is that most victims share a common genetic heritage and were exposed at the same time to the same stimulus, namely the tsunami wave. A pharmacogenetic component of the study aims to assess whether an individual's response to drug therapy depends on his or her genetic makeup. It's “a highly novel and potentially unprecedented approach,” says Robert Malison, a psychiatric researcher at Yale University.

    Beginning in February, psychiatric epidemiologist Nantika Thavichachart of Chulalongkorn University and colleagues interviewed more than 3000 adults on the coast. “Victims were committing suicide more than 3 months after the tsunami,” says TCELS president Thongchai Thavichachart. About 600 were diagnosed with chronic PTSD. Researchers drew blood from victims, healthy siblings, and unrelated individuals.

    In the $3 million study's next phase, to begin in early 2006, researchers will create “immortalized” cell lines from each blood sample and then fish for gene variations, or alleles, that may underlie susceptibility to PTSD. A team led by geneticist Verayuth Praphanphoj of Thailand's Department of Mental Health will target about 20 genes by zeroing in on DNA markers called single-nucleotide polymorphisms. His group will also take a second tack, trawling for genetic signals in a whole-genome association study of a few hundred individuals.

    Experts suspect that several genes are involved in susceptibility to PTSD, considering the constellation and variability of symptoms, some of which overlap with those of anxiety disorder and depression. Preliminary results are due in late 2006. Yale's Joel Gelernter, for one, has high expectations. “There's a very good chance” that the study will pinpoint more candidate PTSD genes, he says.


    Will a Preemptive Strike Against Malaria Pay Off?

    1. Gretchen Vogel

    Researchers are trying to determine whether routinely treating children for malaria before they contract it will save lives without promoting drug resistance

    The fight against malaria is famously frustrating. A vaccine is still years away, drug resistance is on the rise, and mosquito-thwarting bed nets, although effective, have proved difficult to get to the people who need them. Now researchers are testing a bold new strategy aimed directly at protecting malaria's most likely victims: infants and young children. Akin to a preemptive strike, the strategy involves giving antimalaria drugs routinely to infants regardless of whether they are infected with malaria parasites.


    If the IPT strategy works, antimalaria drugs could be delivered to infants at the same time they receive vaccinations for childhood diseases.


    Treating hundreds of millions of children for a disease they might not have flies in the face of standard public health practice. But evidence so far suggests that this simple and inexpensive treatment, called IPT for intermittent preventative treatment, may significantly slash the disease burden in young children. Nearly 1 million children die each year of the disease.

    There is some precedent for the strategy. The World Health Organization (WHO) already recommends that all pregnant women in malaria-affected regions receive IPT. The agency says that whether an expectant mother is infected or not, she should receive one dose of malaria medicine in the second trimester and another in the third. But some malaria experts question whether the costs and benefits were weighed carefully enough before the practice became official policy. In particular, some worry that such large-scale interventions could backfire by promoting drug resistance. “We can do better” to ensure that an investment in IPT in infants will pay off in terms of lives saved, and that it will also avoid causing harm, says David Schellenberg of the Ifakara Health Research and Development Centre in Kilombero, Tanzania.

    In an effort to weigh the costs and benefits as quickly as possible, researchers in 2003 formed the IPTi consortium. (The 'i' is for infants.) The group, which includes WHO, UNICEF, and scientists from 14 institutions in 11 countries, received $28 million in funding from the Bill and Melinda Gates Foundation. By coordinating trials and sharing data, the consortium hopes to have enough hard evidence to be able to recommend a policy for whether and how to implement IPTi by the end of 2006. At a meeting* of malaria researchers last month in Yaounde, Cameroon, IPTi was high on the agenda, as consortium members presented new results from one of the half-dozen trials under way across Africa.

    Prevention on the cheap?

    One of the key advantages of IPT for expectant mothers is that it can piggyback on existing public health programs by treating women when they visit health clinics for routine antenatal checkups. Several studies have shown that just two treatments with a standard malaria drug pair called sulfadoxine-pyrimethamine (SP) are as effective at preventing malaria complications such as maternal anemia and low birth weight as more frequent prophylaxis, while being much cheaper and easier to administer. The hope is that infants, who would receive antimalaria drugs at the same time they receive vaccinations against polio, diphtheria, and measles, would also benefit from routine intermittent treatment.

    The first data on IPT in infants—which helped inspire the formation of the consortium—were remarkable. In 2001, Schellenberg and his colleagues reported that in a study of 700 babies in Tanzania, IPTi cut rates of clinical malaria by almost 60% compared with rates in infants who received a placebo. Another Tanzanian study in 2003 showed that IPTi reduced malarial fevers by 65% in the first year of life.

    But more recent studies suggest that such dramatic results can't be expected everywhere. In a trial of nearly 1500 infants in Ghana, described in October in the British Medical Journal, treatment cut malaria episodes by just 25% compared to a placebo. Hospital admissions for anemia, one of the most dangerous malaria complications, were 35% lower in the treatment group.

    One explanation for the different findings may be the patterns of disease transmission in the two study areas, says Brian Greenwood of the London School of Hygiene and Tropical Medicine, who helped lead the Ghana trial. At the Tanzanian study site, malaria spreads at a relatively low rate year-round. At the site in Ghana, the disease is transmitted during the 6-month rainy season, when residents face about 10 times the rate of infective mosquito bites as faced by those in the Tanzanian study. Greenwood notes that for a subset of Ghanaian babies who received their first two doses during the rainy season, results were nearly as good as those in Tanzania: treatment reduced clinical cases of malaria by 52% and anemia by 72%.

    But mosquito bite rates and differing seasons of infection can't explain all the differences seen in IPTi trials. Results from a trial in Mozambique, first reported last month in Yaounde, “are not as exciting as we'd hoped for,” admits Andrea Egan of the University of Barcelona in Spain, who coordinates the IPTi consortium. A study of 1500 infants, also living in an area of moderate year-round transmission, showed a 22% reduction in clinical malaria rates compared to rates in babies who received a placebo but no difference in anemia rates.

    Egan suspects differences in both bed net use and nutrition contributed to the smaller effect. More than half the population in the Tanzanian trial slept under bed nets, she says, whereas in Mozambique, bed net use was almost nil. In addition, in both Ghana and Tanzania, the treatment and control groups received a routine iron supplement, whereas babies in Mozambique did not. Egan speculates that babies in Mozambique might have had such high baseline rates of anemia that protecting them from malaria didn't make a noticeable dent. Consortium members expect to know more soon. Three studies nearing completion, one in Gabon and two in Ghana, are in part designed to elucidate how environment and epidemiology affect IPTi, says Peter Kremsner of the University of Tübingen in Germany, who is helping direct the trial in Lambaréné, Gabon.

    First, do no harm

    Perhaps the biggest concern about IPTi, however, is whether it could backfire by increasing the malaria parasite's resistance to medications. Drug resistance is one of the most serious problems in the fight against malaria, rendering many of the cheapest and safest drugs ineffective in curing the disease. Indeed, this week researchers reported in The Lancet the first evidence for resistance to artemisinin-based drugs, the newest therapy against parasites that can evade other drugs (see sidebar).

    In many areas, resistance to the drug combination SP is already well established. Cheap and safe, SP remains a first-line defense against the disease. It is also the first choice for IPTi. Giving the drug to otherwise healthy children might not necessarily increase SP resistance, notes Egan. If the approach succeeds in reducing clinical malaria rates, she says, overall use of the drug might also decline, and resistance rates could even fall. Answers should come from a consortium-sponsored trial involving 12,000 infants in Tanzania that is monitoring rates of resistance as IPTi is introduced.

    Some researchers are also worried that IPTi might leave infants more vulnerable to malaria later in their lives. For children living in malaria-endemic areas, early infections are something of a mixed blessing. Although they can be deadly, infections seem to confer some immunity, protecting the babies who survive from becoming seriously ill when infected later. If that process is interrupted, the disease might be delayed but not prevented.

    Researchers watching for the so-called rebound effect have reported mixed results. Schellenberg and his colleagues reported in April in The Lancet that children in Tanzania who had received IPT as infants still had significantly lower rates of malaria through age 2. The researchers suggest that IPT might actually be helping boost the body's natural defenses against the disease by giving children a head start in fighting off mild infections. But in Ghana, again, the results are less encouraging. Overall rates of malaria went up slightly among IPT-treated children between ages 16 and 24 months, although episodes of cerebral malaria, the most serious form of the disease, decreased.

    Nevertheless, consortium members are largely optimistic that studies will support expanding IPT to infants. Reported side effects have been minimal, and even the 22% reduction in malaria among infants in Mozambique is “still very positive,” Egan says. Says Kremsner, “If there are soon six and seven studies showing protection, that counts. If that goes along with considerable safety and good tolerability, the policy decision becomes fairly straightforward.”

    • * Fourth Multilateral Initiative on Malaria Pan-African Malaria Conference, Yaounde, Cameroon, 13-18 November.


    Cracks in the First Line of Defense

    1. Gretchen Vogel

    The wonder drugs have a weak spot. This week, scientists report the first evidence that the malaria parasite has developed resistance to artemisinin-based drugs, which had been hailed as the last best hope against parasites that can already elude other treatments. So far, the evidence comes just from lab tests of parasites isolated from infected people; no patient has died of artemisinin-resistant malaria. But researchers say the observation is an urgent reminder that the compound and its relatives, just beginning to be employed widely around the world, could fail if not used carefully.

    Based on extracts from the sweet wormwood plant Artemisia annua, used for centuries in Chinese traditional medicine, artemisinin and its derivatives such as artesunate and artemether had seemed almost invincible. Even in areas where multidrug-resistant parasites render most other malaria medications useless, treatments containing artemisinins routinely cure 90% of patients within days.


    Because the compounds are powerful and fast-acting, scientists had hoped that they might pack such a wallop that resistant strains would be slow to appear. To be doubly safe, officials have stressed the importance of using the compounds only in tandem with other drugs, an approach called artemisinin combination therapy (ACT).

    The importance of that ACT strategy is highlighted in the 3 December issue of The Lancet, in which Ronan Jambou and his colleagues at the Institut Pasteur in Dakar, Senegal, compared the effects of various drugs on malaria parasites from three different parts of the world. In an effort to develop an early-warning system for signs of resistance, the researchers took blood samples from 530 malaria patients in Cambodia, French Guiana, and Senegal. In samples from Cambodia, where use of artemisinin-based drugs has been tightly regulated as part of ACT, they found no evidence of resistance. But in samples from Senegal and French Guiana, where artemisinins are either unregulated or approved for use without other drugs, lab tests revealed the presence of parasites that could survive the drug. In addition, they identified several mutations that are likely to confer the resistance. “This is the first step toward treatment failure with this drug,” Jambou says.

    “When you use drugs in monotherapy, sooner or later you will develop drug resistance,” says Pascal Ringwald of the World Health Organization. But he says the news comes several years sooner than most people expected.

    Even so, Jambou says, if countries heed the early warning and crack down on unrestricted use of the drugs, there is a good chance they can preserve artemisinin's usefulness. He notes that it took 40 years for public health experts and governments to withdraw chloroquine from regular use after the first treatment failures: “If we use these compounds carefully, we still have time.”


    Calls Rise for More Research on Toxicology of Nanomaterials

    1. Robert F. Service

    Environmentalists and industry insiders alike urge major investments to maintain the emerging technology's spotless safety record

    A rising chorus of government, industry, academic, and environmental leaders is calling for dramatic increases in funding to study possible adverse health and environmental effects of nanotechnology. These individuals—who don't often sing from the same songbook—argue that without this research, nanotechnology is setting itself up for the same kind of consumer backlash that has haunted genetically modified foods. In the past few weeks, the heads of DuPont and Environmental Defense and committees for the British Royal Society and the Science Council of Japan all have joined the choir.

    Huge investments are at stake, they point out. The U.S. National Science Foundation projects that by 2015 nanotechnology will have a $1 trillion impact on the world economy and employ 2 million workers worldwide. Today, global spending on nanotechnology R&D is approximately $9 billion a year, about one-third of it in the United States. The U.S. federal government alone spends more than $1 billion a year on nanotechnology research. But only $39 million of that goes to studies targeted at understanding the effect of nanoparticles on human health and the environment. According to the Woodrow Wilson International Center for Scholars, which released an international database of nanotoxicology research projects last week, that still makes the United States the largest funder of nanotechnology environmental, health, and safety studies. The European Commission ranks second, with about $7.5 million.
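    The scale of that imbalance is easy to quantify from the figures just quoted (back-of-envelope arithmetic only; the variable names are mine):

```python
# Proportion of U.S. federal nanotech spending that goes to
# health and environmental studies, using the article's figures.
us_federal_nano = 1_000_000_000  # just over $1 billion a year, total federal
us_nanotox = 39_000_000          # $39 million a year on toxicology studies

share = us_nanotox / us_federal_nano
print(f"Toxicology share of U.S. federal nanotech budget: {share:.1%}")
```

    The result, just under 4%, is consistent with the roughly 4% share of the National Nanotechnology Initiative budget cited elsewhere in the article.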

    Many experts now say that's not enough to test the hundreds of nanomaterials companies are pursuing. “Organizations as diverse as environmental NGOs [nongovernmental organizations], large chemical companies, nanotech start-ups, insurance companies, and investment firms all agree that the federal government should be immediately directing many more of the dollars it is currently investing in nanotechnology development toward identifying and assessing the potential risks of nanomaterials to human health and the environment,” Richard Denison, a senior scientist with Environmental Defense in New York City, said last month in testimony before the United States House of Representatives Committee on Science. Denison and other nongovernmental witnesses at the hearing agreed that the United States should spend at least $100 million a year on testing how exposure to a wide array of nanoparticles affects cells and organisms. At the same hearing, Mathew Nordan, vice president of research for Lux Research Inc., a nanotechnology research firm, upped the ante: He suggested that governments worldwide devote as much as $200 million a year to a national nanotechnology toxicology initiative aimed at testing each of the myriad nanoparticles for threats to human and environmental health. “It only takes one bad apple to spoil the bunch,” Nordan said.

    Safe or sorry?

    Because a large percentage of their atoms lie on the surface, nanomaterials could be highly reactive—and potentially harmful.


    Earlier this summer, DuPont CEO Chad Holliday and Environmental Defense's president Fred Krupp jointly penned an op-ed article in the Wall Street Journal arguing that nanotoxicity research should be boosted to 10% of the U.S. National Nanotechnology Initiative budget, up from the current level of about 4%. And a report published last week of a recent workshop organized by the United Kingdom's Royal Society and the Science Council of Japan said that “significant funding is urgently needed” for environmental, health, and safety studies of nanotechnology. On 30 November, after the U.K. government outlined a program to study the risks of nanotechnology, the Royal Society and the Royal Academy of Engineering called for earmarked funds to keep the initiative from turning into an ad hoc patchwork of research projects.

    But with funding tight, says David Rejeski, who heads the Wilson Center's Project on Emerging Nanotechnologies, what's needed most is not more money but coordination. “We need an international nanorisk research program built on shared knowledge and a clear set of priorities,” Rejeski says. As a possible first step, the Wilson Center recently compiled a database of more than 350 environmental health and safety studies in the United States, the United Kingdom, Canada, Germany, and Taiwan. Among the biggest gaps, it found, are studies of workplace safety issues, such as unintended worker exposure to nanoparticles from accidents.

    Clayton Teague, who directs the U.S. National Nanotechnology Coordination Office, says that efforts are well under way to coordinate nanotoxicology research. In the United States, he says, a working group from 24 federal agencies is finishing a report that will set priorities for nanotoxicology research. And on the international front, progress could come as early as this week at a meeting of the Organisation for Economic Co-operation and Development (OECD) in Washington, D.C. OECD member countries are considering setting up a permanent working group on establishing international nanotoxicology research priorities. Such measures, Teague and others argue, will better help governments decide just how much funding is needed for nanotoxicity research and ensure that it is money well spent.

    Still, many toxicologists argue that commercialization of nanomaterials is rapidly overtaking efforts to study their impact on human and environmental health. “There has been a tremendous amount of discussion about increasing and coordinating nanotoxicology funding,” says David Warheit, a nanotoxicology researcher at DuPont in Newark, Delaware. “But it's not happening as quickly as it should.”

  ENERGY

    For Nuclear Fusion, Could Two Lasers Be Better Than One?

    1. Michael Schirber

    Whereas fusion energy from the sun is free, generating it on Earth comes at a price. But laser researchers think they may have found a budget route to boundless electricity

    Doing nuclear fusion research usually means big bucks. The major industrialized nations are about to commit themselves to the $12 billion ITER fusion reactor project, the most expensive experiment ever, and both the United States and France are spending billions of dollars building the most powerful lasers in the world, in part to test another route to fusion. But another strategy has been developing quietly in the wings, one that uses two less powerful lasers instead of one big one. Advocates say that with a little encouragement, it could steal a march on the big facilities and carve a new, cheaper path to fusion.

    All these approaches share a common principle: When hydrogen nuclei fuse to form helium, they release energy. But getting them to do so requires enormous temperatures and pressures, such as those in the core of the sun. The ITER reactor will use huge superconducting magnets to contain a hydrogen plasma and heat it enough for the nuclei to fuse. Designers are hoping it will produce more energy than is needed to run it.


    But a subset of fusion researchers believe that the same result could be achieved using lasers rather than magnets to compress and ignite the hydrogen. The technique requires an enormous laser—the size of a sports stadium—to crush millimeter-sized capsules of hydrogen to 20 times the density of lead, inducing temperatures hotter than the core of the sun. Known as inertial confinement fusion, this technology is some years behind the ITER-style reactors. Researchers are pinning their hopes for a proof of feasibility on new machines such as the National Ignition Facility (NIF), now under construction at Lawrence Livermore National Laboratory in California, and the Laser Mégajoule (LMJ) being built by France's Atomic Energy Commission near Bordeaux.

    But this strategy may be like lifting a sledgehammer to crack a nut, say advocates of fast-ignition laser fusion. Instead of using one very energetic laser to both compress the fuel and ignite it, fast ignition divides these tasks between two smaller lasers. The difference between the conventional single-laser method and fast ignition is akin to the difference between two types of internal combustion engines: In a diesel engine, a piston compresses the fuel-air mixture until it is hot enough to ignite spontaneously, whereas in a gasoline engine, the piston only compresses, and a spark plug lights the fuel. This division of labor in laser fusion relaxes certain requirements on the energy and uniformity of the compression stage. “It seems on paper that it is easier to use two lasers,” says Riccardo Betti of the University of Rochester in New York.

    Advocates of fast-ignition fusion have less experimental evidence to justify their optimism than do backers of rival approaches. An international team of researchers working in Japan has demonstrated that fast ignition can achieve fusion, but it is still a long way from showing it can achieve breakeven, the point at which the amount of energy produced equals what is put in. Efforts are under way in Japan and the United States to upgrade existing laser experiments for bigger tests of fast ignition, and European researchers have proposed a dedicated facility. Although uncertainties remain, fast ignition can potentially reach fusion with only a third of the compression used in conventional inertial confinement. “This translates into savings of a factor of 10 in the amount of energy needed to drive the compression,” says Peter Norreys of the Central Laser Facility at Rutherford Appleton Laboratory in Didcot, U.K.

    Division of labor

    The basic objective is similar in both methods of laser fusion. Researchers focus light beams at a small, spherical shell containing the hydrogen isotopes deuterium and tritium. The intense heat causes the outer shell surface to rapidly boil off, and the material inside recoils and implodes. The huge pressure in the center then strips electrons off the hydrogen isotopes, creating the bugaboo of all fusion technologies: a highly ionized gas, or plasma. Plasmas are “notorious for having a host of instabilities” that can prevent a smooth burn, says Mike Dunne of the Rutherford Appleton Laboratory.

    Thirty years ago, plasma physicists thought that a laser producing pulses with an energy of about a kilojoule (kJ) would be enough to collapse the hydrogen plasma and ignite fusion in the core. It wasn't. At high densities, they discovered, the plasma becomes unstable and the compression uneven. “The same kind of instability occurs in fluid dynamics, when a heavy liquid is supported by a lighter one,” says Tito Mendonça of the Superior Technical Institute in Lisbon, Portugal. Cold material from the edges mixes into the core, effectively quenching the fire.

    Researchers concluded that the way around this instability was to wrap a thicker shell around the fuel, but that required lasers with much higher energies than were available at the time. This is where NIF and LMJ come in. Each of these billion-dollar projects, due for completion by the end of this decade, will provide roughly 2 megajoules (MJ) of laser energy. “My reading is that it is 90% certain they will get to ignition,” Dunne says. The gain in energy is expected to be 10 to 20 times the energy supplied by the lasers. For actual energy production, however, gains would have to reach at least 100, because big lasers are currently very inefficient, Betti says.
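    The gap between a gain of 10 to 20 and the gain of 100 needed for power production is plain arithmetic. A minimal sketch, assuming an illustrative wall-plug efficiency of 1% for the laser (the article says only that big lasers are "very inefficient"; the exact figure and the function name are mine):

```python
def net_energy_ratio(fusion_gain, laser_efficiency):
    """Fusion energy released per unit of electricity the laser draws.

    A ratio above 1.0 means the plant produces more energy than the
    laser consumes; below 1.0 it runs at a net loss.
    """
    return fusion_gain * laser_efficiency

# With a laser converting only ~1% of wall-plug electricity into
# light (illustrative figure), gains of 10-20 are heavy net losses;
# gain must reach ~100 just to break even at the wall plug.
for gain in (10, 20, 100):
    print(f"gain {gain:>3}: {net_energy_ratio(gain, 0.01):.1f}x electricity in")
```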

    Neither NIF nor LMJ can do fast-ignition studies now. They were built primarily with weapons research in mind and, by design, they use the conventional method because it mimics what happens in a hydrogen bomb. But there is talk of converting part of the laser capacity at NIF to short pulse, says Chris Barty of Lawrence Livermore. The primary motivation would be to create a “backlighter,” a kind of high-speed camera to study the compression of targets. But the setup could be used for fast ignition as well. Currently, however, NIF has no funding for this retrofit, so researchers in this new field are making do with lasers at academic facilities with 1/100th the energy. “The good thing is that [fast ignition] has not been tremendously expensive up to now,” says Max Tabak of Lawrence Livermore. “You can make progress with small teams.”

    “Foolish” geometry.

    A gold cone is key in getting sufficient laser power into the capsule of hydrogen and igniting fusion.


    Tabak, lead author of a 1994 paper that described a two-laser technique, is one of the pioneers of this cottage industry. The initial idea had been around for a few years, Tabak says, but he and his colleagues brought the pieces together for the first time. Early theoretical work showed that it would take a laser pulse of enormous power, but lasting a very short time, to spark the plasma. This led to the development of the first petawatt (10¹⁵ watts) lasers in the 1990s. Although their power is roughly 1000 times that flowing in the entire U.S. electricity grid, they deliver it in pulses lasting only a fraction of a picosecond (1 picosecond = 10⁻¹² seconds), about a kilojoule each. If most of the pulsed energy penetrates the plasma and reaches the dense core, the fuel will ignite and burn.
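    The petawatt figures look less paradoxical once power and energy are kept separate: energy is simply power multiplied by pulse duration. A quick sanity check (round numbers of my choosing, not figures from any specific machine):

```python
# E = P * t: a petawatt sustained for a picosecond is only a kilojoule.
power_watts = 1e15        # 1 petawatt
duration_seconds = 1e-12  # 1 picosecond
energy_joules = power_watts * duration_seconds
print(f"{energy_joules:.0f} J ({energy_joules / 1e3:.0f} kJ)")
```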

    The trouble is that when the laser light penetrates the plasma, it is converted into a beam of electrons that spreads out, lessening its power. Researchers came up with several ways to mitigate this beam spread. One favored scenario is to insert a tiny hollow cone into the fuel capsule, a funnel-like shortcut that lets the electron beam travel just a few tens of micrometers from the tip of the cone to the compressed core. There was initially a lot of skepticism, recalls Ryosuke Kodama of Osaka University in Japan, because this “foolish geometry” complicates assembly of the fuel capsule. However, using Osaka's Gekko XII laser facility, a collaboration of Japanese and British researchers led by Kodama in 2001 shot a 0.1-PW (60-J) pulse down the barrel of a gold cone, while a 1.2-kJ laser compressed the hydrogen fuel on the tip. The team observed a 10-fold—and later a 1000-fold—increase in the number of fusion-induced neutrons. “Gekko showed the possibility for fast heating in the imploded plasma,” Kodama says. Although still far from the coveted ignition, this news made “a big splash,” Tabak says.

    Energy costs

    The Gekko XII researchers estimated that about a quarter of the ignition laser's energy went into the fuel. This was a higher percentage than expected, according to Tabak. “Good things happened that we don't understand,” he says. He is optimistic that the cone setup can be improved, but the next step will require more energy. U.S. researchers are adding a 2-PW (5-kJ) ignition laser to the 30-kJ OMEGA facility at the University of Rochester. Dubbed OMEGA EP, the experiment is expected to be ready in 2007. At Osaka University, a 1-PW (10-kJ) ignition laser, called FIREX-I, will complement the full compression laser (10 kJ) from Gekko XII in 2007. But beefing up the ignition energy could cause problems, Barty warns, as the laser-induced electron beam could shoot right past the core without sparking the fuel. “It's a concern,” Norreys says, “but it's not a showstopper.” There are ways to slow the electron beam, essentially by shortening the wavelength of the infrared lasers.
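    For the upgraded machines just mentioned, the quoted energy and power pairs pin down the implied pulse duration via t = E/P. A short sketch (the energies and powers are those quoted above; the dictionary layout and labels are mine):

```python
# Implied pulse duration t = E / P for the planned ignition lasers.
lasers = {
    "OMEGA EP (5 kJ at 2 PW)": (5e3, 2e15),
    "FIREX-I (10 kJ at 1 PW)": (10e3, 1e15),
}

for name, (energy_j, power_w) in lasers.items():
    picoseconds = energy_j / power_w * 1e12  # seconds -> picoseconds
    print(f"{name}: {picoseconds:.1f} ps pulse")
```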

    Even so, Betti says these upgraded machines will probably not be powerful enough to reach fusion ignition. He estimates at least a 60-kJ compression laser is needed to achieve the breakeven point between energy in and out, and probably 700 kJ for a practical energy plant. The ignition laser may have much greater power, but its energy is lower, and energy is what costs money in the laser business, Dunne says.

    In September, a panel of European scientists, including Dunne, presented a plan to really put the principle of fast ignition to the test. The panel wants European governments to build a civilian laser facility, called HiPER, with a 200-kJ compression laser and 10-PW (70-kJ) ignition laser, at a cost of $850 million. “This is a good time in Europe to start thinking about a facility,” Betti says. Early results from FIREX-I and OMEGA EP could guide the development of HiPER's design. The proposal is currently being considered by the European Strategy Forum on Research Infrastructures, which is drawing up a road map of large science projects within the European Union.

    The HiPER team hopes that building such a facility will free laser-fusion researchers from having to rely on military facilities. According to the panel, only about 15% of laser “shots” at NIF and LMJ will go to the academic community. Fast ignition offers the chance to achieve significant gains at a 10th of the energy needed for conventional inertial confinement. “Now there's a civilian route to the end point,” Dunne says.

    HiPER has another advantage, too: It can be used as a general laser facility for other branches of science, such as modeling stellar interiors and supernova explosions, studying nuclear interactions for medical imaging and waste management, and accelerating particles faster than current methods can. “There will be good science that comes out,” Dunne says.

    The lower price may also make fast ignition more practical as a possible source of energy. Researchers admit that fast ignition currently is not the favorite in the fusion race. But considering the need, “we should be working on anything that has a prayer,” Tabak says.