News this Week

Science  09 Feb 2007:
Vol. 315, Issue 5813, pp. 746


    International Team Releases Design, Cost for Next Great Particle Smasher

    Adrian Cho

    An international team has released a preliminary design and cost estimate for the International Linear Collider (ILC), the hoped-for straight-shot particle smasher that many researchers say is the future of their field.


    Barry Barish says the price tag for the ILC and its 16,000 accelerating cavities will hold steady.


    In Beijing this week, the ILC Global Design Effort (GDE) team reported that the “value” of the 35-kilometer-long machine would be $6.65 billion and 13,000 person-years of labor, plus or minus 30%. The value differs from a cost estimate because it does not account for inflation until the machine is completed—in 2016 at the earliest—or so-called contingency to cover potential cost overruns, which different countries handle in different ways, says Barry Barish of the California Institute of Technology in Pasadena, who leads the GDE.

    Including such factors would, for example, likely double the amount entered in the ledgers of the U.S. government. So if the United States hosted the machine and bore half the expense, its contribution would total about $7.5 billion, Barish says.
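The relationship between the GDE's "value" figure and a U.S.-style cost estimate can be checked with back-of-the-envelope arithmetic. In the sketch below, only the $6.65 billion value, the 13,000 person-years, and the roughly $7.5 billion U.S. half-share come from the GDE announcement; the $100,000-per-person-year labor rate and the factor of 2 for inflation plus contingency are illustrative assumptions, not GDE figures:

```python
# Back-of-the-envelope sketch: converting the GDE "value" into a
# U.S.-style cost estimate. The labor rate and the inflation-plus-
# contingency multiplier are assumptions for illustration only.

VALUE_B = 6.65                 # GDE "value" of the machine, billions of dollars
LABOR_PERSON_YEARS = 13_000    # labor, reported separately from the value
LABOR_RATE_B = 100_000 / 1e9   # assumed $100k per person-year, in billions
INFLATION_CONTINGENCY = 2.0    # assumed U.S.-accounting doubling factor

labor_b = LABOR_PERSON_YEARS * LABOR_RATE_B          # valued labor, ~1.3
total_value_b = VALUE_B + labor_b                    # ~7.95
us_half_share_b = total_value_b / 2                  # ~3.98
us_cost_b = us_half_share_b * INFLATION_CONTINGENCY  # ~7.95

print(f"U.S. half-share cost estimate: ${us_cost_b:.1f} billion")
```

Under these assumptions the result lands near the "about $7.5 billion" Barish cites; the ±30% band on the value estimate dwarfs the effect of any of the assumed parameters.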

    The value estimate is roughly equal to the cost of the Large Hadron Collider (LHC), the 27-kilometer-long circular accelerator under construction at the European particle physics laboratory CERN near Geneva, Switzerland. The LHC cost 4.7 billion Swiss francs (about $3.8 billion), but that does not include the tunnel, which was dug for an earlier machine, or the older accelerators that will feed the LHC.

    The LHC and ILC “are in the same ballpark, so I think it is doable,” says Albrecht Wagner, chair of the board of directors for the German particle physics laboratory DESY in Hamburg. “But in the end, it's up to the politicians.” Some researchers, however, say it is too early to name a price. “I'm afraid that the cost will increase,” says Kaoru Yokoya, who leads ILC R&D at the KEK particle physics laboratory in Tsukuba, Japan. “A big increase will kill our project.”

    The ILC will probe in detail phenomena researchers expect to glimpse at the LHC, which is scheduled to be turned on in November. Many physicists expect the LHC to produce the long-sought Higgs boson and possibly a raft of new particles predicted by a theory called supersymmetry. But the LHC will smash protons into protons, and each proton is itself a tangle of particles called quarks and gluons. So the resulting collision will be too messy to reveal some key properties of the new particles. The ILC will collide electrons into their antimatter siblings, positrons, to make cleaner collisions that will allow physicists to nail down decay rates and other parameters needed to forge a complete theory.

    But first physicists must persuade governments to spring for the machine, and the value estimate is meant to help that process along. In particular, the estimate responds to a request from the U.S. Department of Energy (DOE) to know what the total cost would be. DOE has requested $60 million for ILC R&D in fiscal year 2008.

    Over the past year, the GDE transformed a rough “baseline configuration” tacked together from 2 decades of R&D at various labs into a coherent “reference design,” making changes that reduced the cost by about 25%. “They have done a good job without stretching the limits beyond reason,” says Günther Geschonke, an accelerator physicist at CERN. “The design has evolved into a cost-conscious design.”

    The GDE made several changes to the basic layout of the machine. For example, the baseline configuration called for a 7-kilometer circular tunnel near each end of the collider to house accelerators known as cooling rings that would concentrate the electrons and positrons. The new design calls for one circular tunnel near the center that will house both rings. The baseline also split the beams in two to collide particles in the hearts of two detectors; in the new design, the beams collide at just one interaction point, and researchers will swap the gigantic detectors into and out of the line of fire. Now “there is no way to make it substantially less expensive without reducing the scientific scope of the project,” Barish says.

    The reference design and value estimate will undergo an international review this spring. That review is one key to keeping the project from repeating the story of the Superconducting Super Collider, the ill-fated accelerator in Waxahachie, Texas, that Congress killed, uncompleted, in 1993 after its budget exploded from $4 billion to $8 billion. A DOE representative will serve on the review team, says Robin Staffin, associate director for high-energy physics at DOE.

    ILC researchers will now use the reference design as the starting point for a more detailed “engineering design.” Meanwhile, physicists will lobby for an international framework between governments to fund the project. A decision about where to build the machine is still far off. “My guess is that the international framework will be in place in 2010 at the earliest,” says Mitsuaki Nozaki, a particle physicist at KEK and Asian regional director of the GDE. “Until then, we cannot say anything about the final site.”


    BP Bets Big on UC Berkeley for Novel Biofuels Center

    Eli Kintisch

    Can an oil company take the lead in the biofuels revolution through an unprecedented investment in academic research? BP last week made a 10-year, $500 million bet on such a corporate-funded, big-science approach. It chose the University of California (UC), Berkeley, to host its new Energy Biosciences Institute (EBI), a venture that BP's Jim Breson calls “a privately funded national lab.” Academic plant scientists and engineers will work alongside BP specialists in the institute, conducting a mix of basic and applied work (see p. 790). A happy partnership, say experts, could yield a new model for industrial support of academic science in service of societal needs. But, as in any marriage, there could be some tensions along the way.

    When BP decided to boost its biofuels research agenda, its choices were limited. Having only four biologists on the company's roster of 102,000 employees worldwide ruled out an in-house research effort, says Breson. And Steven Koonin, who stepped down as provost of the California Institute of Technology to become chief scientist at BP in 2004, nixed the idea of simply awarding grants to professors, as a consortium of energy companies led by ExxonMobil is doing (see table). A grants program doesn't foster the “foot traffic, hall traffic, water-cooler talk” that Koonin believes is essential for a successful research enterprise.

    The answer, Koonin decided, was private-public “big science.” BP launched a competition to host the institute. UC Berkeley and its partners, neighboring Lawrence Berkeley National Laboratory (LBNL) and the University of Illinois, Urbana-Champaign, edged out four other university teams, says Koonin, in part because of strong connections to biotech, well-equipped scientific facilities, and a history of doing big science. The institute will sit in a yet-to-be-designed building on the UC Berkeley campus, with 25 themed labs. The institute, and a smaller sister facility at Illinois, will include space for proprietary BP research as well as open labs for academics. A joint board will select projects in what LBNL biochemist Christopher Somerville calls a “spirit of flexibility.”

    Many operational aspects of the new EBI have yet to be worked out, and some could be contentious. UC Berkeley is no stranger to corporate-funded science as well as the controversy that it can spark. It was criticized for a $25 million, 5-year deal with European pharmaceutical giant Novartis in 1998 that gave the company exclusive licensing provisions for any drug discoveries, and the arrangement was not renewed. Although BP wants its closed areas to be busy with commercial development, Breson emphasizes the company's commitment to sharing intellectual property (IP) with academic scientists and honoring their need to publish. “This has got to be about open research,” he says.

    But, of course, the devil is in the details. UC Berkeley official Beth Burnside told Science that no more than 30% of the total funding would be spent on the 50 BP scientists who will work in proprietary labs at the institute facilities on the partner campuses. Koonin later said that hadn't been finalized and could change yearly. “We're inventing this as we go along,” he admits.

    A common problem that EBI needs to avoid, says sociologist Lawrence Busch of Michigan State University in East Lansing, who has studied the Novartis deal, is “mismatched expectations” about research pay-offs. Breakthrough drugs and lucrative licensing deals “just didn't happen,” he warns. One silver lining, Busch says, was that “fears of academic freedom lost or that Novartis was buying the department really didn't pan out.” Burnside says UC Berkeley scientists leveraged Novartis funds to do much more molecular biology.

    Other scientists stress the importance of building trust through daily scientific interactions to prepare for the inevitable fights over IP and other thorny issues. Microbiologist Jeffrey Gordon of Washington University in St. Louis, Missouri, a veteran partner with Pfizer and other drug companies that Pfizer has acquired, says that the most successful projects flow from joint research proposals reviewed by all the partners. A 2001 decision by computer chip giant Intel to set up a lab adjacent to the UC Berkeley campus raised the productivity of university software researchers, says computer scientist Joseph Hellerstein, because there were suddenly “more smart people to work with.” In contrast, he says, a locked room meant for proprietary research “was never used.”

    Although the type of industry-university collaboration envisioned at EBI “isn't that new,” says economist Donald Siegel of UC Riverside, who has studied such arrangements, the scale—a decade-long commitment and a $50 million annual budget—is unprecedented. The magnitude also raises questions about the proper role of industry on campus, says economist David Audretsch of Indiana University, Bloomington: “We need companies to be working with campuses, but we don't want industry-driven universities. This is right on that line.”


    African Leaders Endorse Science Initiatives

    Robert Koenig

    PRETORIA, SOUTH AFRICA—Science and technology (S&T), although billed as the main themes of last week's African Union (AU) summit in Addis Ababa, Ethiopia, had to wait as leaders addressed pressing issues such as the Darfur conflict. But on the last day, the heads of state approved initiatives aimed at bolstering research and increasing the continent's clout on intellectual-property issues. With little public debate, the leaders urged member states to revitalize their universities and spend at least 1% of gross domestic product (GDP) on research and development by 2010. In addition, the summit called for more extensive S&T agreements with other developing regions, announced new scholarships to stimulate the study of science, and declared 2007 to be Africa's “scientific innovation year.”

    “Maintaining the status quo is not an option for Africa,” said Rwandan President Paul Kagame. He said Rwanda planned to double its S&T spending to 3% of GDP over the next 5 years and bolster its research institutions. Most African nations are now well below the 1% benchmark. Agronomist John Mugabe, who directs the S&T office of the New Partnership for African Development, which promotes social and economic progress in Africa, says the key point is that “the presidents themselves committed to strengthening science and technology.”

    Upping the ante.

    Rwanda's President Paul Kagame pledged to double science and technology spending.


    One initiative endorsed by Africa's science ministers and tacitly approved by AU leaders is the development of a 20-year biotechnology strategy to channel resources into regional specialties. For example, East Africa, which already has livestock biotech expertise, would be encouraged to build on this base. Other niches include crop biotech in the west, pharmaceutical biotech in the north, medical biotech in the south, and biodiversity research in Central Africa.

    Solid-state physicist Phil Mjwara, director-general of South Africa's S&T department, says, “there wasn't much discussion” of specifics; details will need to be ironed out by science ministers. He agrees that biotechnology is important but cautions that “it will take a long time to really get it going.”

    Mjwara and Mugabe agree that creating a pan-African Intellectual Property Organization (IPO)—a concept endorsed by summit leaders—would be an important step toward establishing common standards and goals. Currently, there are two separate organizations on the continent, each representing 16 nations; 20 other countries, mostly in northern Africa, are not represented at all. The heads of state asked for a report by July on how to incorporate existing IPOs into a pan-African IPO that would take up issues such as traditional knowledge and genetic resources.

    Not included on the summit agenda, Mugabe says, was a controversial proposal to create a new African science and innovation fund. He expects it will be taken up by science ministers later this year. Its goal would be to pool resources from African nations and outside donors and distribute competitive research and development grants throughout the continent.

    “There is a real potential for research synergies” if these agreements lead to pan-African strategies, said Mjwara. But finding common ground on a diverse continent whose 52 nations have widely varying resources and priorities will continue to be a challenge.


    Racism Allegations Taint Chinese Effort to Recruit Overseas Talent

    Xin Hao

    A new program that aims to inject some international blood into China's scientific workforce has come under fire for excluding ethnic Chinese. Officials at the Chinese Academy of Sciences (CAS) counter that critics are missing the point.

    Last September, CAS launched a program to provide fellowships for up to 50 “international young researchers” under the age of 35 to work in CAS laboratories for 1 year with salaries capped at 100,000 yuan, or $12,900, renewable for a second year. The announcement received little publicity until last week, when an anonymous posting on the popular U.S.-based Internet forum Jiaoyu Yu Xueshu (Education and Scholarship) noted that the Chinese version of the fellowship rules limits the awards to fei huayi waiji (foreign nationals not of Chinese descent). That phrase sparked a flurry of angry reactions on Chinese Internet sites abroad blasting the program as “racist” and “discriminatory.”

    The complaints have merit, some experts say. The distinction between huayi (persons of Chinese descent) and fei huayi is an “interesting” form of racism, says Stevan Harrell, an anthropologist at the University of Washington, Seattle, who has done fieldwork in China for more than 20 years. The premise is that “anyone who is of Chinese descent somehow has a connection with China”—even a second- or third-generation American huayi, for instance—and therefore is not considered a “real” foreigner, Harrell explains.

    The deputy director of CAS's education department, Li Hefeng, defends the program. Li points out that CAS already has numerous incentive programs to attract huayi, such as the One-Hundred-Talent Program. “About a third of researchers in U.S. and Japanese labs are international,” says Li, “but CAS has almost none.” Until recently, “we worried whether fei huayi could adapt to our country's work, living, and especially cultural environment,” he says. But conditions are improving, Li says, and in the future there may be no need for separate programs for huayi and fei huayi. The “main goal,” he maintains, “is to expand international exchange and collaboration.” In this case, international means non-Chinese ancestry.


    Rett Symptoms Reversed in Mice

    Greg Miller

    Some of the dramatic neurological problems of Rett syndrome can be reversed in an experimental mouse model, researchers have found. Although the work does not have direct therapeutic applications, scientists studying the devastating genetic disorder hail the findings as a sign that treatments are at least possible in principle. “This is very exciting,” says Huda Zoghbi, a geneticist at Baylor College of Medicine in Houston, Texas. “It gives us researchers and the families and patients hope that, as we uncover [biochemical] pathways that could be safely manipulated, we can recover some function in these girls.”


    Girls with Rett syndrome have impaired mobility; mouse studies hint at a remedy.


    Rett syndrome affects roughly one in 10,000 girls. They develop normally for the first 6 to 18 months of life, but they then begin to lose mobility and their cognitive development stalls. Rett girls can live well into adulthood, but they endure severe mental and physical deficits.

    Mutations in a gene called MECP2 are to blame. How mutations in this single gene on the X chromosome, itself a regulator of other genes, cause neurological problems isn't known (Science, 8 December 2006, p. 1536), but the new study, published online this week by Science, suggests that the damage to the nervous system may not be permanent.

    Jacky Guy and Adrian Bird of the University of Edinburgh, U.K., and colleagues created mice with a genetic roadblock—a string of DNA—inserted into Mecp2 (the mouse version of the human gene) that prevented cells from reading the gene to make the protein it encodes. Female mice with the blocked Mecp2 developed normally for 4 to 12 months before showing Rett-like symptoms, including impaired mobility, an abnormal gait, tremors, and breathing difficulties.

    Then the researchers turned Mecp2 back on, exploiting another gene they'd bestowed on the mice. This gene encoded a hybrid protein: a DNA-splicing enzyme fused to an estrogen receptor. The enzyme can recognize and remove the roadblock in Mecp2, but the attached estrogen receptor prevented it from entering the cell nucleus. By injecting the mice with tamoxifen, a drug that binds estrogen receptors, the researchers sent the hybrid protein scuttling into the nucleus to snip out the roadblock and restore Mecp2.

    After the mice had received five weekly tamoxifen injections, the Rett-like symptoms all but disappeared. A few of the rodents continued to walk with their hindlimbs abnormally far apart, but otherwise they were hard to distinguish from their genetically normal relatives. It was a pleasant surprise, because researchers had feared that the developmental loss of Mecp2 led to missing or permanently disabled neural connections. “The general perception is that once the brain has missed out big time on some ingredient of normal development, it's never going to be able to recover,” Bird says. “We thought maybe we'd get amelioration of the symptoms, but we didn't anticipate that things would be reversed on the scale that we found.”

    The findings suggest that the lack of Mecp2 doesn't do irreversible damage to neurons, says Rudolf Jaenisch of the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts. “It's almost like a dream result,” he adds.

    Still, the study doesn't point to an obvious strategy for treating Rett syndrome. Human gene mutations can't be repaired by the technique the Edinburgh team used, and simply boosting MECP2, either by gene therapy or administering the protein, is not likely to work, Zoghbi and others say. That's because girls with Rett syndrome already make MECP2 protein in about half their cells, thanks to a good copy of MECP2 on their second X chromosome. These cells would end up with a surplus of MECP2, which appears to be just as damaging as a deficit is. Finding alternative strategies won't be easy, but the mouse work suggests that such efforts are well worth pursuing.

  2008 U.S. BUDGET

    Research Rises--and Falls--in the President's Spending Plan

    Jeffrey Mervis*
    *Reporting by Yudhijit Bhattacharjee, Adrian Cho, Constance Holden, Eli Kintisch, Andrew Lawler, Eliot Marshall, Robert F. Service, and Erik Stokstad.

    Just as he has stayed the course in Iraq, President George W. Bush has stuck to his guns with his budget proposals. On 5 February, he sent Congress a 2008 budget request for science that favors a handful of agencies supporting the physical sciences and puts the squeeze on most of the rest of the federal research establishment as part of an overall $2.9 trillion plan that clamps down on most civilian spending.

    Last year, the president's budget proposed the American Competitiveness Initiative (ACI), a 10-year doubling of the National Science Foundation (NSF), the Department of Energy's (DOE's) Office of Science, and core labs at the National Institute of Standards and Technology (NIST). At the same time, he asked Congress to cut the National Institutes of Health (NIH), which makes up almost half the $60 billion federal science and technology budget. His 2008 request follows suit. If it were to be adopted intact—a wholly unlikely prospect, given the Democratic control of Congress—government support for basic research would rise by a minuscule 0.5% from his 2007 request.

    Pick a number.

    The current House spending plan may be the best yardstick to measure the president's request for 2008, because Congress hasn't completed work on the 2007 budget.


    Accordingly, the new request is getting a decidedly mixed reception from science groups. “The [president's] budget has much positive news for the nation's research universities, but it also raises some very serious concerns,” says Robert Berdahl, president of the 62-member Association of American Universities (AAU). He praises the ACI funding but chastises the Administration for “shortchanging” defense research and making what amounts to a $500 million cut in NIH funding. Democratic legislators who follow science are equally ambivalent. “The president's budget includes a few good targets for R&D funding but ignores too many of our country's priorities,” says Representative Bart Gordon (D-TN), chair of the House Committee on Science and Technology.

    Biomedical lobbyists don't see much of a silver lining. “It's distressing,” said Jon Retzlaff, legislative chief at the Federation of American Societies for Experimental Biology, which has called for a 6% boost for NIH. “Our goal will be to have this changed when it goes through Congress.”

    The 2008 budget is harder to interpret than most presidential requests because Congress hasn't finished work on the 2007 budget. The previous, Republican-led Congress adjourned in December after ordering agencies to keep spending at 2006 levels until 15 February, and the new Democratic majority is now putting together a final 2007 budget. A resolution adopted on 31 January by the House would hold most federal agencies to 2006 levels, but with a few notable exceptions, including NSF, DOE's Office of Science, and NIST. The Senate is expected to take up the measure shortly.

    The 2008 request for science will play out in the midst of the bitter debate over the Iraq war and efforts to shrink the federal deficit. Here are some individual agency highlights:

    NIH: Its 2008 request pays special attention to younger scientists despite containing less than the amount Congress is expected to approve for 2007. New and competing grants in the president's plan would surpass 10,000 for the first time in years, says NIH Director Elias Zerhouni, on the way toward achieving “stable … sustainable” growth. Funding for high-priority “Roadmap” projects under the NIH director's control will increase, as will special grants for first-timers.

    Biomedical lobbyists figure that NIH would actually receive $500 million less under the president's budget than in 2007, in part because its contribution to the Global Fund to Fight AIDS, Tuberculosis, and Malaria and other diseases in the developing world would rise from $99 million to $300 million. They also note that the total number of grants would drop. Asked about the likelihood of such a cutback, a Senate majority staffer said, “We're not going to let that happen.”
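The lobbyists' arithmetic can be checked directly; the sketch below uses only the figures cited in the paragraph above:

```python
# How much of the projected $500 million NIH shortfall the Global Fund
# transfer accounts for, using only the figures the lobbyists cite.

SHORTFALL_M = 500          # projected NIH cut vs. 2007, in $ millions
GLOBAL_FUND_2007_M = 99    # current Global Fund contribution
GLOBAL_FUND_2008_M = 300   # requested 2008 contribution

transfer_increase_m = GLOBAL_FUND_2008_M - GLOBAL_FUND_2007_M  # 201
remainder_m = SHORTFALL_M - transfer_increase_m                # 299

print(f"Global Fund increase explains ${transfer_increase_m} million "
      f"of the cut; ${remainder_m} million would come from elsewhere.")
```

So the Global Fund transfer accounts for roughly 40% of the projected shortfall, consistent with the lobbyists' "in part" phrasing.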

    NSF: Director Arden Bement says he's “more than happy” with a budget that has room for a $52 million initiative to add computational elements into the biological and physical sciences and engineering as well as a $24 million increase in a $90 million program for major research infrastructure at universities with limited endowments. NSF hopes to add 200 graduate research fellowships in 2008.

    With regard to major new facilities, the agency has requested $32 million to begin upgrading the Laser Interferometer Gravitational-Wave Observatory, and it has stretched out the scheduled ramp-up of national environmental and ocean observatory networks because of continued tweaking of their designs. Bement says he also hopes to win congressional approval in 2007 to begin building a $120 million Arctic research vessel.

    HOMELAND SECURITY: The Department of Homeland Security (DHS), whose Science and Technology directorate was reprimanded by Congress last year for mismanaging its finances, wants to cut its university programs by 20%, to $39 million. And although officials also plan to add four university-based centers of excellence to its current roster of seven, that expansion will likely translate into less money for existing centers. Jay Cohen, the directorate's new undersecretary, promises that the program will bounce back. “Please view this as a transition year as we realign the [centers] program with our mission,” he says, adding that DHS remains “fully committed to basic research.”

    Still competitive.

    The Administration has requested a second year of big increases for the three agencies under ACI.


    DEFENSE: The silver lining in an 8.7% cut for basic research at the Department of Defense (DOD) is a big boost to the National Defense Education Program, from $19 million to $44 million. The program, begun in 2006, provides scholarships to U.S. undergraduate and graduate students in science and engineering disciplines related to national defense in hopes of replenishing DOD's aging science and technology employees. “The idea is definitely resonating with folks on the Hill,” says AAU's Matthew Owens. The basic research portfolio at the Defense Advanced Research Projects Agency would rise by 5.5%, to $153 million.

    ENERGY: Undersecretary of Science Raymond Orbach says that a second consecutive year of large increases will allow DOE to take “our first step into new territory.” The expected 2007 increase, he adds, “repaired the damage” to programs long starved for funds.

    The 2008 request would give $160 million toward the $6 billion International Thermonuclear Experimental Reactor in Cadarache, France. Projects related to applied energy science would enjoy a whopping 26% increase overall above the 2007 request, notably for biomass research ($113 million) and nuclear waste recycling studies ($395 million).

    SPACE: NASA Administrator Michael Griffin says he is happy with the proposed 3.1% increase over the president's 2007 request. But the 2007 budget coming down the pike from the new Democratic-run Congress is quite another matter. “We're on the receiving end of a budget we don't like,” grouses Griffin, referring to a half-billion-dollar reduction in NASA's effort to build a new launcher to take humans to the moon. That cut, if it holds up, could greatly delay completion of the system, now planned for 4 years after the shuttle's last flight in 2010.

    NIST: The proposed 21% boost over the House-approved 2007 funds would advance a proposed lab expansion at the agency's Boulder, Colorado, campus and an upgrade to the NIST Center for Neutron Research in Gaithersburg, Maryland. It would also provide $22 million for five new programs in areas such as nanotechnology and climate science. “This is actually a great day for NIST,” says the agency's director William Jeffrey. As in previous requests, the president would zero out the Advanced Technology Program (ATP), which helps companies commercialize nascent technologies. The House spending bill offers ATP a 1-year reprieve, at $79 million.

    EPA: The Environmental Protection Agency's Office of Research and Development would suffer a 4.1% cut from the House-approved level. That's less than the 6.7% reduction the Administration sought for 2007. EPA chief scientist George Gray says that the 2008 request includes small boosts for intramural research on nanotechnology and risk assessment.

    NOAA: The National Oceanic and Atmospheric Administration is seeking $40 million more for ocean research as part of an overall $123 million boost under the president's Ocean Action Plan (Science, 2 February, p. 585). But NOAA's main program for research—the Office of Oceanic and Atmospheric Research (OAR)—would drop by 3.3% from the House 2007 spending bill, which hews to 2006 levels.

    AGRICULTURE: The Administration reprised its goal of boosting competitive research at the U.S. Department of Agriculture. The National Research Initiative would see a 35% jump to $257 million, just slightly less than the agency requested last year. And the White House restated its proposal to create competitive awards for 9.4% of the so-called formula funding handed out to land-grant universities each year. Agriculture school lobbyists say they will continue to oppose that move unless the overall pie gets bigger. That doesn't happen in the Administration's request for $530 million, at roughly the 2006 level.

  2008 U.S. BUDGET

    NASA Tries to Make Space for Science

    Andrew Lawler

    NASA came under heavy fire last month from the National Academies' National Research Council for cutting important earth science missions, and just this week, another academy panel complained that overruns in large projects such as the James Webb Space Telescope, slated for a 2013 launch, are hurting smaller astrophysics efforts. But NASA officials say that their 2008 budget will put the agency's beleaguered $5.5 billion science program back on a more balanced track.

    Exhibit A, say agency officials, is the proposed restoration of several projects hanging in limbo during the past year. A joint U.S.-Japanese spacecraft designed to measure global precipitation will be launched in two phases in 2013 and 2014. And after several years of overruns and delays, two smaller astrophysics missions—Kepler, designed to discover extrasolar planets, and the Wide-Field Infrared Survey Explorer (below), which will produce a detailed map of the infrared universe—will by next year be given the money they need to fly.


    That news may not be enough to satisfy astrophysicists worried that the queue of missions is quickly emptying. “There are no low-cost, quick-response science programs being prepared today,” says Martha Haynes, an astrophysicist at Cornell University and vice-chair of the academy panel that released its report on 7 February. NASA requested the report last year.

    It is clear, however, that NASA's science program remains in crisis after having to pony up $2.44 billion from its 2007–2011 budget plan to cover shuttle and space station shortfalls. No spacecraft are slated to follow the large Earth-observing platforms now in orbit, and the earth sciences budget will remain at about $1.5 billion for the foreseeable future. Several important astrophysics flights, such as the Space Interferometry Mission, remain on hold because of budget constraints.

    Even lunar science, now in favor because of its connection to human exploration, faces an uncertain future after the projected cost of a few sophisticated rovers doubled to $1.5 billion. Deputy exploration chief Douglas Cooke says such rovers may not be necessary at all. “We need to have a really good map,” he says. “It's hard to say we need much more than that.” Nevertheless, NASA Ames Research Center in Mountain View, California, will put together a cheaper and faster option for the rovers.

  8. 2008 U.S. BUDGET

    NIH Children's Study Sparks Fight

    1. Eliot Marshall

    Director Elias Zerhouni says the National Institutes of Health (NIH) can't afford it. But legislators believe that a massive search for the sources of chronic ailments such as asthma and diabetes is worth $3 billion over the next 25 years.

    This week, for the second straight year, President George W. Bush's budget request to Congress zeroed out funding for the National Children's Study, which would enroll 100,000 newborns (beginning in the womb) and monitor their health and exposure to environmental risks for a quarter-century (Science, 10 December 2004, p. 1883). Authorized by Congress in 2000, the study received $12 million in 2006 to support seven “vanguard” centers that will help design the study. Last week, a House spending plan for 2007 included $69 million for the study, and supporters are gearing up to push for $111 million in the 2008 fiscal year that begins on 1 October.

    Why does this study have such strong legislative backing despite NIH's plea of poverty? Supporters argue that it would more than pay for itself if it leads to even a 1% reduction, for a single year, in the key diseases it examines. But one biomedical lobbyist who requested anonymity claims that advocates created a large and geographically diverse constituency by naming the vanguard centers and identifying nearly 100 future study sites. The lobbyist, who called the tactics an “outrage,” likened them to methods used by any special-interest group to retain federal funding for a large project.


    The children's study is not a special-interest project, says lobbyist John Porter, a partner at the Hogan & Hartson law firm in Washington, D.C. Porter, a former Republican representative from Illinois who chaired the House panel that sets NIH's budget, has been hired by the vanguard centers. “We respected the fact that funding is very tight and difficult for investigators,” he says, adding that Congress has put the study in the NIH director's budget rather than having it compete against investigator-initiated grants at individual institutes.

    The study's cost “is not trivial,” concedes Leonardo Trasande, a preventive medicine researcher who has championed it alongside his colleague Philip Landrigan at Mount Sinai School of Medicine in New York City. But he says it's the best way to find the complex factors that are feeding an “epidemic” of chronic childhood illness.

  9. 2008 U.S. BUDGET

    NSF Education Program Rebounds

    1. Jeffrey Mervis

    The Bush Administration's efforts to rein in the National Science Foundation's (NSF's) education programs may be coming to an end, with an assist from a congressionally mandated review of existing federal efforts to improve math and science education.

    NSF's 2008 budget includes $30 million to revive its Mathematics and Science Partnership program, a competitive grants program that teams universities and local school districts to improve elementary and secondary school math and science. Begun in 2002 as a $200 million presidential initiative, the NSF program had shrunk to a proposed maintenance-level $46 million in 2007 while a similar program in the Department of Education that awards block grants to every state grew rapidly (Science, 24 February 2006, p. 1092). The Administration's request would permit a new round of competitive 5-year awards, says NSF Director Arden Bement, along with developing “tool kits” for teachers based on successful programs to date.

    Many legislators have long argued that NSF could do a better job than the Department of Education, and a review of all federal programs by the interagency Academic Competitiveness Council (ACC) to be released later this month seems to bear them out. “I think that the ACC recognizes the quality of our assessments and our success in achieving significant student outcomes,” says Bement, who has seen a draft of the upcoming report. Robert Shea, associate director for management at the White House Office of Management and Budget, told Science last month that NSF is among those agencies “with a strong track record for rigorous evaluation” that could be models for other agencies trying to improve science and math instruction.

  10. 2008 U.S. BUDGET

    Science Adviser Battles Cancer

    1. Jeffrey D. Mervis

    Last month, John Marburger became the longest-serving presidential science adviser in U.S. history. The 66-year-old laser physicist has been in the thick of many policy battles in the 5 ½ years he's served President George W. Bush, but he's currently engaged in an even tougher fight—against cancer.

    In November, Marburger was diagnosed with non-Hodgkin's lymphoma and began a round of treatments. The effects of the chemotherapy have forced the former president of Stony Brook University and one-time director of Brookhaven National Laboratory to work much of the time from his home in New York.

    This week, Marburger's condition became public when he missed the Administration's annual science budget briefing, over which he normally presides. “Jack can't be here because he is currently fighting a battle with cancer, which he is winning,” explained Richard Russell, deputy director for technology and Marburger's longtime aide. Marburger is expected to testify next week before the House science committee.


    Scientists Tell Policymakers We're All Warming the World

    1. Richard A. Kerr*
    1. With additional reporting by Michael Balter in Paris.

    They've said it before, but this time climate scientists are saying it with feeling: The world is warming; it's not all natural, it's us; and if nothing is done, it will get a whole lot worse

    Hot times.

    It won't get this bad, but the world will warm substantially.


    The last time the Intergovernmental Panel on Climate Change (IPCC) assessed the state of the climate, in early 2001, it got a polite enough hearing. The world was warming, it said, and human activity was “likely” to be driving most of the warming. Back then, the committee specified a better-than-60% chance—not exactly a ringing endorsement. And how bad might things get? That depended on a 20-year-old guess about how sensitive the climate system might be to rising greenhouse gases. Given the uncertainties, the IPCC report's reception was on the tepid side.

    Six years of research later, the heightened confidence is obvious. The warming is “unequivocal.” Humans are “very likely” (higher than 90% likelihood) behind the warming. And the climate system is “very unlikely” to be so insensitive as to render future warming inconsequential.

    This is the way it was supposed to work, according to glaciologist Richard Alley of Pennsylvania State University in State College, a lead author on this IPCC report. “The governments of the world said to scientists, ‘Here's a few billion dollars—get this right,’” Alley says. “They took the money, and 17 years after the first IPCC report, they got it right. It's still science, not revealed truth, but the science has gotten better and better and better. We're putting CO2 in the air, and that's changing the climate.”

    With such self-assurance, this IPCC report may really go somewhere, especially in the newly receptive United States (see sidebar, p. 756), where a small band of scientists has long contested IPCC reports. Coordinating lead author Gabriele Hegerl of Duke University in Durham, North Carolina, certainly hopes their report hits home this time. “I want societies to understand that this is a real problem, and it affects the life of my kids.”

    Down to work

    Created by the World Meteorological Organization and the United Nations Environment Programme, the IPCC had the process down for its fourth assessment report. Forty governments nominated the 150 lead authors and 450 contributing authors of Climate Change 2007: The Physical Science Basis. There was no clique of senior insiders: 75% of nominated lead authors were new to that role, and one-third of authors got their final degree in the past 10 years. Authors had their draft chapters reviewed by all comers. More than 600 volunteered, submitting 30,000 comments. Authors responded to every comment, and reviewers certified each response. With their final draft of the science in hand, authors gathered in Paris, France, with 300 representatives of 113 nations for 4 days to hash out the wording of a scientist-written Summary for Policymakers.

    The fact of warming was perhaps the most straightforward item of business. For starters, the air is 0.74°C warmer than in 1906, up from a century's warming of 0.6°C in the last report. “Eleven of the last twelve years rank among the 12 warmest years in the [150-year-long] instrumental record,” notes the summary. Warming ocean waters, shrinking mountain glaciers, and retreating snow cover strengthened the evidence.

    So the IPCC authors weren't impressed by the contrarian argument that the warming is just an “urban heat island effect” driven by increasing amounts of heat-absorbing concrete and asphalt. That effect is real, the report says, but it has “a negligible influence” on the global number. Likewise, new analyses have largely settled the hullabaloo over why thermometers at Earth's surface measured more warming than remote-sensing satellites had detected higher in the atmosphere (Science, 12 May 2006, p. 825). Studies by several groups have increased the satellite-determined warming, largely reconciling the difference.

    This confidently observed warming of the globe can't be anything but mostly human-induced, the IPCC finds. True, modeling studies have shown that natural forces in the climate system—such as calmer volcanoes and the sun's brightening—have in fact led to warming in the past, as skeptics point out. And the natural ups and downs of climate have at times warmed the globe. But all of these natural variations in combination have not warmed the world enough, fast enough, and for long enough in the right geographic patterns to produce the observed warming, the report finds. In model studies, nothing warms the world as observed except the addition of greenhouse gases in the actual amounts emitted.

    From studies of long-past climate, including the famous hockey-stick curve of the past millennium's temperature (Science, 4 August 2006, p. 603), the IPCC concludes that the recent warming is quite out of the ordinary. “Northern Hemisphere temperatures during the second half of the 20th century were very likely higher than during any other 50-year period in the last 500 years,” the report concludes, “and likely the highest in at least the past 1300 years.”

    Contrarians have conceded that greenhouse gases may be warming the planet, but not by much, they say; the climate system, they argue, is not sensitive enough to greenhouse gases to overheat the globe. For the first time, the IPCC report directly counters that argument. Several different lines of evidence point to a moderately strong climate sensitivity (Science, 21 April 2006, p. 351). The eruption of Mount Pinatubo in 1991 thickened the stratospheric haze layer and cooled climate, providing a gauge of short-term climate sensitivity. Paleoclimatologists have determined how hard the climate system was driven during long-past events such as the last ice age and how much climate changed then. And models have converged on a narrower range of climate sensitivity.

    The IPCC concludes that both models and past climate changes point to a fairly sensitive climate system. The warming for a doubling of CO2 “is very unlikely to be less than 1.5°C,” says the report, not the less than 0.5°C favored by some contrarians. A best estimate is about 3°C, with a likely range of 2°C to 4.5°C.

    What next?

    Looking ahead, the report projects a warming of about 0.4°C for the next 2 decades. That is about as rapid as the warming of the past 15 years, but 50% faster than the warming of the past 50 years. By the end of this century, global temperatures might rise anywhere between a substantial 1.7°C and a whopping 4.0°C, depending on the amount of greenhouse gases emitted. In some model projections, late-summer Arctic sea ice all but disappears late in this century. It is very likely that extremes of heat, heat waves, and heavy precipitation events will continue to become more frequent. Rain in lower latitudes will decrease, leading to more drought.

    Warmer, then hot.

    A middle-of-the-road scenario calls for warmings of more than 6°C in high northern latitudes.

    Long-range forecast.

    IPCC scientists in Paris warn of greenhouse warming getting out of hand.


    On some hot topics, the IPCC comes down on the conservative side. It sees evidence of more intense hurricane activity in the North Atlantic, something many researchers contest, but paints a murky picture elsewhere, in line with doubters' reservations (Science, 10 November 2006, p. 910). As to the so-called meridional overturning circulation (MOC)—the conveyor belt of currents that delivers warm water to the far North Atlantic—there is not enough evidence to say whether it has slowed under global warming, according to the IPCC, contrary to a high-profile report of a 30% slowing (Science, 17 November 2006, p. 1064). But the IPCC goes on to project a very likely reduction in MOC flow by the end of the century, perhaps on the order of 25%. Contrary to the climate catastrophe movie The Day After Tomorrow, however, a slowing of the MOC would not freeze up the North Atlantic. The region won't even cool off, thanks to greenhouse warming. And it's very unlikely the MOC will abruptly shut down this century, the report says.

    The IPCC is overly conservative, in the opinion of some newly outspoken scientists, when it comes to the fate of the world's great ice sheets—on Greenland and Antarctica—and the likely rise in sea level. The facts are not in much dispute. The ocean is warming and therefore expanding, mountain glaciers are melting into the sea, and Greenland is melting around its edges as well. Together, those effects have lately driven up sea level by as much as 3 millimeters per year. The IPCC projects that sea level will continue to rise 28 to 43 centimeters in this century, depending on emissions.

    It is also generally agreed that the IPCC calculation leaves out a potentially important factor. Some glaciers draining ice from Greenland and West Antarctica have sped up in the past 5 to 10 years, some of them doubling their speed (Science, 24 March 2006, p. 1698). But this glacier acceleration is not included in the IPCC sea-level projection “because a basis in published literature is lacking,” according to the report.

    That didn't sit well with some researchers, such as Stefan Rahmstorf of the Potsdam Institute for Climate Impact Research in Germany. He authored a Science paper last month that extrapolated from the recent sea-level rise to a rise ranging from 0.5 meter to a near-disastrous 1.4 meters by the end of the century. Days before the 2 February IPCC report release, he and others—call them counter-contrarians—spoke out in news reports. The IPCC sections on sea-level rise are “obviously not the full story because ice-sheet decay is something we cannot model right now, but we know it's happening,” Rahmstorf told the Associated Press. “A document like that tends to underestimate the risk.” And the day before the IPCC release, a second paper—co-authored with seven colleagues—was published online by Science. The seas have been rising at the uppermost rate projected in past IPCC reports, the authors noted. Sea level “may be responding more quickly than climate models indicate,” they wrote.

    Such publicly expressed concerns are likely to become more common, says climate modeler Michael MacCracken of the Climate Institute in Washington, D.C. Now that the contrarians have been dealt with, he says, scientists no longer concerned about appearing to be alarmist will be speaking out about the IPCC being overly cautious.


    U.S. Policy: A Permanent Sea Change?

    1. Richard A. Kerr
    Ungentle reminder.

    Katrina's destruction brought global warming to mind.


    This time around, issuing yet another international assessment of the planet's climatic health (see main text) might look like throwing gasoline on a fire. Even in the United States, where skepticism about human-induced climate change has long dominated government policy, public concern about global warming was already as high as it ever had been. U.S. media had been at fever pitch on climate for a year and more. And local, state, and national politicians from both of the country's major political parties were pushing or even implementing their own proposals for reining in greenhouse gas emissions.

    “It takes a sudden jolt sometimes before we become aware of a danger,” former vice president Al Gore says in his Oscar-nominated global warming documentary An Inconvenient Truth. If so, the shock to public and political perceptions this time may have come in large part from Mother Nature. There has been “a chronic drip of [media] stories about weather effects that are hard to control,” notes geoscientist Michael Oppenheimer of Princeton University, from raging hurricanes such as Katrina and melting Arctic sea ice to mid-January daffodil blooms in Washington, D.C.

    Will the weird weather drive the body politic to eventual action on global warming? Some observers believe so. “Probably it's robust,” says Oppenheimer. The sentiment toward U.S. action “is just not going to go back.” But many of the jolting climate events may themselves go away, at least temporarily. Ice-melting warmth in the Arctic and surges of deadly hurricanes in the Atlantic, among other climate trends, are also subject to natural swings that could temporarily slow or even pause some of global warming's more dramatic effects.

    The first American surge in attention to global warming started with a similarly combustible mix of climate science and weird weather. On one of the hottest days of 1988 in muggy Washington, D.C., with drought gripping much of the American West and huge wildfires racing through a tinder-dry Yellowstone National Park, leading climate researcher James Hansen of NASA's Goddard Institute for Space Studies in New York City testified on Capitol Hill. This was greenhouse warming, he told Congress confidently.

    News coverage, at least, took off, according to a new accounting by political scientists Maxwell Boykoff of the University of Oxford and Jules Boykoff of Pacific University in Forest Grove, Oregon. It peaked again in 1992, when the first President George Bush signed the United Nations Framework Convention on Climate Change. Then media attention promptly plummeted. It perked up only briefly, the Boykoff brothers note, in response to political events, such as when President George W. Bush rejected the Kyoto Protocol limiting greenhouse gas emissions in 2002.

    Nevertheless, global warming never entirely vanished from the American consciousness. In the past couple of years, “the whole issue has moved up the agenda” again, says political communications researcher Matthew Nisbet of American University in Washington, D.C., and it did it without the usual boost from a single, triggering political event. The public still hasn't elevated global warming to its highest levels of environmental concern, Nisbet notes, but the subject has permeated television, movies, and books, segued onto the business, style, and gardening pages, and broken into daily conversation. Media attention reached an all-time high in 2006, Nisbet has found, as gauged by the number of articles in elite newspapers. And the mutualistic relation between the media and politicians was cranked way up as reporters fed off the policy debate and politicians drew strength from media coverage.

    So what got the pot boiling so high in the United States this time around? Many observers see global warming moving to the front burner much the way ozone destruction did in the 1980s. Theoretical predictions were prompting some governments to begin to curtail chlorofluorocarbon (CFC) emissions. Then researchers recognized the springtime ozone hole hovering over the Antarctic, galvanizing international negotiations on eliminating CFCs.

    Global warming may never have the equivalent of the ozone hole, but the cumulative effect could be the same. Americans “are starting to see changes in the weather,” says Eileen Claussen, president of the Pew Center on Global Climate Change in Arlington, Virginia. There was the 2004 hurricane season, with four hurricanes wreaking havoc across Florida, followed by Katrina in 2005; year after year of record-breaking shrinkage of Arctic sea ice, accompanied by images of hungry polar bears; glaciers accelerating their rush to the sea in Greenland and Antarctica, driving up sea level; and those daffodils in the nation's capital.

    Political and economic factors have also helped fuel the fire. Many alternatives to expensive oil, for example, would also ease greenhouse warming. But to the extent that dramatic climate events have heightened interest, global warming activists have a sometimes unreliable helper, researchers note. The climate system swings from warm to cool and back, from wet to dry. El Niño's unusual warmth in the tropical Pacific is just one of many natural climate variations. And the climate system does it all quite on its own.

    Such natural variability could temporarily reinforce or rein in global warming effects. In a study in press at the Journal of Climate, for example, modeler Gabriele Hegerl of Duke University in Durham, North Carolina, and colleagues reconstructed past Northern Hemisphere temperature from records such as tree rings and then apportioned the warming using a simple climate model. They found that rising levels of greenhouse gases account for about one-third of the large, rapid warming over the first half of the past century, but another third must have been a natural warming.

    What warms the hemisphere can also cool it, according to a modeling study reported 30 January in Geophysical Research Letters. Modeler Rong Zhang and his colleagues at the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey, found that oscillations in the flow of warm currents into the North Atlantic could have added substantially to the early-20th century warming, contributed to the mid-century pause in warming, and bolstered the obvious greenhouse-fueled warming of the past few decades. If what has been going up and down for at least a century goes down again, the ongoing warming that took off in the '70s could noticeably slow in the next 10 or 15 years.

    Warm and warm again.

    The world, especially the Arctic, warmed in the 1930s and '40s (warm colors), in large part due to natural variability.


    Natural variability is even stronger in places such as the Arctic (Science, 5 January, p. 36), one of two climatic “canaries in the coal mine” Gore cites in his film. In the 1930s to 1940s, the Arctic warmed to even higher temperatures than now, only to cool back down by the 1970s. Drawing on 12 climate models, Arctic researcher James Overland of the National Oceanic and Atmospheric Administration's Pacific Marine Environmental Laboratory in Seattle, Washington, sees the recent rapid Arctic warming “as a fairly strong natural variability signal on top of long-term [humanmade] change. It's very likely we could have a 5-year period of colder temperatures, and people could say, ‘Aha, we don't have global warming.’”

    Other worrying climate trends could pause or moderate as well. Part of the surge in powerful Atlantic hurricanes since 1995 is attributable to a natural cycle in the proportion of major storms, according to meteorologist Gregory Holland of the National Center for Atmospheric Research in Boulder, Colorado (Science, 10 November 2006, p. 910). So the Atlantic could quiet down a bit, he says, although perhaps not until 2020 or later. Another climate threat—the collapse of the climate-moderating currents of the Atlantic—seems to have receded already. Late in 2005, oceanographers reported measuring a sizable 30% slowdown in the so-called meridional overturning circulation only to concede late last year that their record was so noisy that they couldn't reliably detect any change after all (Science, 17 November 2006, p. 1064).

    Just how natural climate variability will interact with political divisiveness, the public's mood swings, and the cyclic economics of energy is unclear at this point. Some say, however, that the mix of bizarre weather and politics boosted by the media now ensures there's no turning back. Even if the climate craziness fades for a while, “I think national legislation is inevitable in 4 years,” says Claussen. Others, however, think such confidence may be misplaced. Science historian Naomi Oreskes of the University of California, San Diego, recalls the energy craze of the late 1970s, when soaring oil prices drove dreams of energy independence through conservation and alternative fuels. That passed as soon as prices fell. “We got excited for a while, but we didn't take the serious steps.”

  13. BRAIN, MUSIC, AND SOUND RESEARCH CENTER: Study of Music and the Mind Hits a High Note in Montreal

    1. Michael Balter

    After battling to have their field taken seriously, two musician-scientists have founded an interdisciplinary center to understand how—and perhaps why—humans make music

    Wired for sound.

    Emotions will be monitored at McGill's music hall.


    MONTREAL, CANADA—It's Saturday night in Montreal's Latin Quarter, and the flamenco dancers are in full whirl at the Casa Galicia restaurant. A tall woman with a gardenia in her hair stomps the wooden floor in staccato bursts as loud as firecrackers, while her partner yelps out a full-throated cante. At a table nearby, Isabelle Peretz sips a glass of Rioja with a wistful smile. “This music always gives me an intense frisson,” she says.

    But music is more than just a Saturday night pleasure for Peretz, a neuropsychologist at the University of Montreal (UM). Peretz has devoted her career to understanding why and how the human brain allows us to create and respond to sequences of sounds. In 2005, together with brain imaging specialist Robert Zatorre of McGill University in Montreal, she created the International Laboratory for Brain, Music, and Sound Research (BRAMS), a joint project of UM and McGill. Last fall, BRAMS received a total of $12 million in a matching grant from government and university sources. The money will allow the members of BRAMS, including Peretz, Zatorre, and nine other Montreal-based lead investigators, to explore music's mysteries. They seek to understand how humans cooperate to perform together, how children and adults learn to play music, and the relationship between music and language. “BRAMS will allow us to use music as a portal into the most complex aspects of human brain function,” says Zatorre.

    Music lover.

    BRAMS Co-Director Isabelle Peretz.


    Already, BRAMS researchers have pinpointed areas of the brain involved in the perception of pitch. And they have demonstrated that pianists' ability to remember long, complicated pieces relies on close coordination between distinct motor and auditory memory circuits in the brain, in contrast to previous assumptions that the melody was key and that the fingers would simply follow.

    Even before BRAMS was created, the Montreal group had established a world-class reputation in music research. “They are number one by a long shot,” says Jamshed Bharucha, a music researcher and the provost of Tufts University in Medford, Massachusetts, who reviewed the BRAMS grant. Josef Rauschecker, a neuroscientist at Georgetown University Medical Center in Washington, D.C., also a grant reviewer, agrees: “The two of them, plus the colleagues they have assembled, are unbeatable as a team.”

    The new laboratories on the UM campus include sound booths, a brain imaging lab, an echo-free chamber, and a performance lab sporting a $200,000 computerized Bösendorfer piano, which records the precise details of each keystroke and foot-pedal movement. One of BRAMS's most spectacular labs, however, will be located on the other side of town, at the concert hall of McGill's Schulich School of Music. Researchers plan to install wireless physiological sensors in up to 25 of the 188 seats to monitor group emotional reactions to live performances. The sensors will monitor heart rate, skin electrical responses, and even facial musculature of audience members who give informed consent, says McGill researcher Stephen McAdams, who heads this part of the project. Another 50 seats will be equipped with Palm Pilots, which subjects will be trained to use to indicate their reactions by moving a stylus “through emotional space,” McAdams says.

    These high-tech facilities will allow the BRAMS team to ramp up its research considerably. “We will be able to study musical perception and production in more natural contexts,” says Peretz. “We can move from the study of single individuals to several subjects.” The group is also designing musical instruments that can be played by subjects undergoing magnetic resonance imaging (MRI) brain scans. Zatorre says such studies “will allow us to understand why different individuals have greater or lesser facility for learning, and what brain activity patterns differentiate a virtuoso from a merely competent player.”

    For Peretz, 50, and Zatorre, 52, the creation of BRAMS is the high note in more than 25 years of research into music and the brain. But during much of that time, each of them was playing solo, and relatively few scientists were even listening. As a girl in Belgium, Peretz spent 11 years studying classical guitar. By the time she began university studies, however, “I knew that I was better at science.” She received her Ph.D. at the Free University of Brussels in 1984 and came to McGill in 1985. With her doctoral supervisor José Morais, she proposed on theoretical grounds that the brain had specialized neural pathways for producing and responding to music, even if it shared some cognitive domains, such as hearing, with language. But many researchers were skeptical, arguing that the neural circuits involved in music mostly overlapped with those used in language and other functions.

    At UM, Peretz began studying subjects suffering from congenital amusia, an inherited form of tone deafness that does not affect language or other mental functions. In the 1990s, she expanded this research to brain-damaged patients suffering from so-called acquired amusia, showing that many of them also had normal language abilities. That convinced many doubters that she was right: Music and language did seem to occupy different “modules” in the brain (Science, 1 June 2001, p. 1636). “Isabelle's work has fundamentally changed our understanding of how the brain processes music,” says archaeologist Steven Mithen of the University of Reading in the United Kingdom, author of The Singing Neanderthals.

    If there are special areas in the brain for music, where are they? That question attracted neuroimaging expert Zatorre, who, like Peretz, has had a lifelong affinity for music—he has played the organ since he was a child. Born in Buenos Aires, Argentina, Zatorre majored in both psychology and music at Boston University, where he played the organ at local churches, earning $25 for each service. He received his Ph.D. in experimental psychology at Brown University in 1981, and the same year started a postdoc at McGill, where he has been ever since. At McGill, Zatorre was in on the early days of the neuroimaging revolution, using MRI and other techniques to study the processing of both music and speech. His team was the first to demonstrate differences in brain activation patterns between musicians who have absolute pitch and those who don't, as well as the first to localize emotional responses to music in the brain. For example, Zatorre and colleagues showed that the “shivers-down-the-spine” feeling that many people get when they listen to pleasurable music correlates with activation of brain regions such as the amygdala and the orbitofrontal cortex—regions that are also associated with responses to food and sex.

    Musical mind.

    Robert Zatorre with computerized piano.


    Despite these accomplishments, until recently Peretz and Zatorre, like other music researchers, “were marginalized,” Peretz says. Bharucha agrees: “I had an uphill battle convincing people that music was important as a cognitive or brain phenomenon.” The turning point for Peretz and Zatorre came in 2000, when the New York Academy of Sciences asked the pair to organize a conference on the biological foundations of music. Soon afterward, the field began to explode. One reason, Zatorre says, was that brain researchers such as himself began collaborating with cognitive psychologists such as Peretz. “The cognition people started realizing, ‘Hey, there is a brain out there.’ And the neuroscientists and imaging people began realizing that they had something to contribute to the [cognitive] models.”

    The collaboration between Peretz and Zatorre, for example, has identified the brain region apparently defective in people with congenital amusia, which affects 5% or more of some populations. In the October 2006 issue of Brain, Peretz, Zatorre, and their co-workers showed that on MRI scans, tone-deaf individuals had less white matter in the right inferior frontal gyrus, an area just behind the right side of the forehead, compared to musically normal controls. The results were consistent with earlier work by Zatorre and others suggesting that the right frontal cortex is implicated in perceiving pitch and remembering music. In contrast, language is most often localized on the left side of the brain.

    Yet whereas work on musical learning is showing progress, “music remains a mystery,” Peretz says. “The biggest question is what it is for.” Researchers have suggested many scenarios for why musical abilities might have evolved, such as enhancement of social solidarity and increasing communication between mothers and children (Science, 12 November 2004, p. 1120). But Peretz is cautious about such speculations. Although she has long argued against claims by researchers such as Harvard University cognitive scientist Steven Pinker that music is just “auditory cheesecake” with no adaptive function, she conceded in a recent review in Cognition that most hypotheses about music's role in human evolution are inherently untestable. “I believe that music is in our genes, but belief is not science”—more evidence is needed, she says.

    Nevertheless, Zatorre says, “Pinker has served as a useful foil” for researchers who believe that music serves a unique biological role. And one reason for the explosion in this research, Zatorre adds, is that “music can serve as a probe into just about every mental function, including perception, motor performance, memory, attention, and emotion.” In addition, some BRAMS research may have practical applications. For example, patients learning to speak again after strokes or other brain injuries do so more rapidly if they sing along with another person. Studying the neural changes that accompany such therapies could boost their effectiveness, says Peretz. Zatorre says he eventually wants to improve the technology of cochlear implants so the hard of hearing can listen to music. “Current implants are designed to maximize perception of speech,” he says. “They are not good for music.”

    Although the creation of BRAMS seems likely to keep Peretz and Zatorre in the front ranks of music neuroscience research, both express some nostalgia for the musical careers they left behind. Zatorre is thrilled that the chapel in the former convent where BRAMS is housed has a working organ. “My dream is to do experiments in the morning and play in the afternoon,” he says. Peretz still has her guitar. “I made my choice,” she says, “but sometimes I still have regrets.”


    Controversial Marrow Cells Coming Into Their Own?

    1. Constance Holden

    Despite wide skepticism, Catherine Verfaillie persevered in her research and remains optimistic that her MAP cells will one day be useful in therapy

    Still a believer.

    Verfaillie in her Minnesota lab.


    Catherine Verfaillie of the University of Minnesota made a big splash in 2002 when she reported in Nature that her lab had cultivated a type of cell with some seemingly remarkable properties. Called multipotent adult progenitor (MAP) cells and derived from the stromal cells of the bone marrow, the cells seemed to be able to turn into “most, if not all” cell types, the team wrote. And when injected into a mouse embryo, they appeared to contribute to most somatic cell types, creating a so-called chimeric mouse. MAP cells were surprisingly versatile for adult cells, whose fates were generally believed to be predetermined—in fact, they looked almost as good as embryonic stem (ES) cells.

    Although Verfaillie was cautious in her claims, the press seized on MAP cells as an alternative to ES cells, which are controversial because creating them involves the destruction of an embryo. Scientists, too, were eager to try them out. But in the years since, MAP cells have failed to live up to expectations. They proved hard to grow, and the talk over coffee at meetings was that no one could replicate the Nature work.

    Last month, however, Verfaillie's cells gained fresh attention. In the January issue of the Journal of Experimental Medicine, a group at Minnesota and Stanford University reported that they had used MAP cells to rebuild the blood system in mice. The work has impressed one skeptic, Stanford blood stem cell researcher Irving Weissman, who collaborated on the new work. Weissman calls the result “remarkable.” His skepticism, he adds, “makes me a perfect collaborator, because I insisted on very rigorous criteria for the experiments.”

    He emphasizes, as does Verfaillie, that these cells are clearly not as versatile as ES cells. But despite their limitations, they could prove to be useful therapeutically. “A lot of people have lost interest in MAP cells by this point,” says Weissman. “What our paper will help do is get everybody to look at it again.” Others agree. “I'm sure it will revive interest in MAP cells,” says stem cell researcher Paul Schiller of the University of Miami, Florida.

    Going to the marrow

    Bone marrow basically contains two types of stem cells: those that give rise to blood, and the stromal stem cells that develop into bone, fat, muscle, and cartilage. MAP cells are derived from early precursors in this latter population.

    Verfaillie's team stumbled on MAP cells more or less by accident when, in trying to grow mesenchymal stem cells (a type within the heterogeneous stromal cell population), they came up with a culture system that seemed to select for even more primitive cells (Science, 21 June 2002, p. 2126). The tiny cells are strictly artifacts of lab culture, requiring at least 30 population doublings before displaying some of the characteristics of ES cells.

    Verfaillie was not prepared for the extreme reaction when she went public with her findings in 2002. “Certainly the perception on the part of everyone was these cells were going to do it”—that is, accomplish feats hitherto ascribed only to ES cells—says David Scadden, co-director of the Harvard Stem Cell Institute. But, he says, they turned out to be “a very fiddly cell to work with.” Even scientists who obtained cells directly from Verfaillie couldn't make them perform. For example, Marcus Grompe, a liver stem cell researcher at Oregon Health & Science University in Portland, says he tried very hard but failed to get MAP cells to develop into hepatocytes in mice whose livers had been destroyed.

    Verfaillie has retained her high hopes for MAP cells, although she acknowledges that, in retrospect, the original chimera result might have been wrong. “We cannot exclude currently that that is not due to fusion” rather than chimerism, she says. Fusion, in which introduced cell types merge with the locals, first came to scientists' attention shortly before the Verfaillie Nature paper appeared (Science, 15 March 2002, p. 1989) and ultimately led to disillusionment with the notion that adult cells were capable of turning into other types of cells. Now researchers believe that in many—perhaps most—reported cases of so-called adult cell plasticity, the appearance of cells switching identities was in reality misleading signals from fused cells.

    But Verfaillie and Weissman ruled out fusion in the mouse blood paper and say the findings are indisputable. The researchers took mouse MAP cells and expanded them with up to 80 population doublings. They then transplanted them into 28 mice whose bone marrow had been destroyed by radiation. Because MAP cells are slow to grow and thus would be unable to repopulate the blood system fast enough to save a mouse from dying of radiation, they also injected blood stem cells. In the study, reported online 15 January in the Journal of Experimental Medicine, the MAP cells survived and contributed to the blood systems of 21 of the 28 mice.

    Blood makers.

    MAP cells populate mouse lymph node.


    “Scientists must now understand that mouse MAP cells can make normal blood,” proclaimed co-author Weissman. That makes them a promising treatment option because the population is readily expandable, in contrast to hematopoietic stem cells, which are difficult to expand in the lab. But first, he says, scientists would have to find a way to boost the speed at which MAP cells work, perhaps by pushing them to a more advanced stage of development before they are transplanted. And of course the study must be replicated with human MAP cells.

    Verfaillie attributes their success in part to the development of improved culture conditions over the past half-dozen years, which result in more homogeneous cell populations with high levels of Oct4, the main marker they share with ES cells. Verfaillie, who has been working at both Minnesota and the University of Leuven in Belgium for the past year and is now settled at Leuven as head of its stem cell institute, has also trained a number of groups, mainly in Europe, in MAP cell cultivation.

    She says some 30 papers—about two-thirds with her as a co-author—have so far been published on MAP cells, and more are in the works. Last November, for example, Felipe Prosper's group at the University of Navarra in Pamplona, Spain, published a paper in Blood reporting that MAP cells contribute to tissues lining the walls of veins and arteries in mice.

    Verfaillie also believes the cells may hold promise for mending sick livers and other organs. Some cells, she says, may “have to commit in the dish first—the environment in vivo may not give the right signals.” She says a half-dozen papers currently in preparation or under review will present evidence that undifferentiated MAP cells can differentiate to specific cell types and be of therapeutic benefit in mice. Athersys, a biotech company in Cleveland, Ohio, is licensed to produce the cells, which are patented by the University of Minnesota.


    So far, says Verfaillie, what gives her group the most trouble is trying to make nerve cells and heart cells. Nerve cells are ectodermal tissue, and “it seems easier to make mesoderm and endoderm,” she says. And even though heart tissue is mesodermal, no one has had any luck coaxing MAP cells to function as heart cells either. Even in vitro, “we still can't make cardiac anything, which is strange,” says Verfaillie. Nonetheless, she's more optimistic about MAP cells' ultimate versatility than is Weissman, who remains skeptical despite the recent blood success.

    It's still not clear exactly how MAP cells will stack up to ES cells. ES cells are the gold standard of pluripotency—which is usually defined as the capability of generating all cell types in the adult body. There are several important markers of pluripotency, namely Oct4, Nanog, and Sox2. The ability to form benign tumors called teratomas is one of the basic tests for pluripotency, as is the generation of a chimeric mouse from injecting a single cell into a mouse blastocyst. MAP cells express Oct4, but they lack both Nanog and Sox2. Nor can they form teratomas. “We termed them ‘multipotent’ because the cells definitely do not have all the features of ES cells,” says Verfaillie.

    Verfaillie says she regrets the hype over her findings, which she sees as “in large part politically motivated”—and by no means confined to MAP cells. With these and other types of stem cells, she says, it is too soon to predict where treatments will be found.


    Stem Cell Candidates Proliferate

    1. Constance Holden

    The stem cell landscape is getting crowded. Over the past few years, scientists have reported a variety of new types of stem cells, from both animal and human sources, including fetal liver, mouse testes, bone marrow, and umbilical-cord blood. One of the most recent—and widely publicized—studies, by Anthony Atala of Wake Forest University in Winston-Salem, North Carolina, described the potential of cells isolated from the amniotic fluid to differentiate into all three germ layers in vitro and into bone and brain cells in live mice (Science, 12 January, p. 170).

    All the reports are preliminary, and none of the new cell types appears to be able to duplicate the potential of embryonic stem (ES) cells. Nonetheless, in combination, they may one day become prime players in stem cell therapy. A few examples:

    Multiple progenitors.

    MAP cells, such as these from rat, are just one type of stem cell under investigation.

    • USSCs: A group at the University of Düsseldorf in Germany has identified a population of cells from human cord blood, which they call USSCs, or unrestricted somatic stem cells. They have “many overlapping features with MAP cells,” they report, differentiating into a variety of tissues in vitro.

    • EPCs: Douglas Losordo of Tufts University School of Medicine in Boston, Massachusetts, is currently conducting a study injecting bone-marrow-derived cells called endothelial progenitor cells (EPCs) into patients with angina in hopes of creating new blood vessels to the heart.

    • MIAMI cells: A group at the University of Miami School of Medicine in Florida has reported a population of “pluripotent” cells from human bone marrow that they have dubbed marrow-isolated adult multilineage inducible (MIAMI) cells. MIAMI cells may complement MAP cells, they say, expressing the two stem cell markers, Nanog and Sox2, that MAP cells lack.

    Robert Lanza of Advanced Cell Technology Inc. in Worcester, Massachusetts, predicts that a variety of different stem cells will prove optimal for different diseases. And in some cases, such as stroke, in which both blood vessels and neurons are damaged, more than one stem cell type may be used. Marcus Grompe of the Oregon Health & Science University in Portland agrees that “there's a good chance” that new stem cell therapies will rely primarily on non-ES cells. ES cells are essential research tools. But, he says, at the current state of knowledge, “I haven't met a single person who wasn't leery of them for putting into people.”

    A Sustainable Future, If We Pay Up Front

    1. Daniel Clery

    The oil shocks of the 1970s sparked a short-lived golden age of energy R&D. Today's energy concerns haven't caused a similar bonanza, yet


    It's been a year since President George W. Bush called on researchers to help the United States shake off its “addiction” to oil and its reliance on foreign sources of energy. “By applying the talent and technology of America, this country can dramatically improve our environment, move beyond a petroleum-based economy, and make our dependence on Middle Eastern oil a thing of the past,” he said. Since then, talk of breaking the petroleum habit has intensified. Political instability in key regions that supply oil and natural gas and oil prices near an all-time high have made energy the issue of the moment, with newspapers, magazines, and TV news programs extolling the virtues of rooftop solar cells, wind turbines, biofuels, and hybrid-energy cars.

    All this is reminiscent of the late 1970s, when oil shocks prompted a surge of interest in alternatives to conventional sources of oil. But there are some significant differences. Today's concern is driven by worries about climate change in addition to fears of energy insecurity. And whereas the 1970s oil shocks prompted a gusher of new funding for energy-related R&D, this time around, research dollars—or euros—have yet to begin flowing in earnest. The science has also changed over the past quarter-century: Now geneticists and synthetic chemists are tweaking agricultural crops to make better liquid fuels; materials scientists are wringing more efficiency out of photovoltaic cells; and computer modellers are redesigning nuclear reactors from the ground up. In the following news pages, we profile some of the people who are following these new paths in energy research.

    Energy-policy experts and several recent studies agree that if we are to make any substantial change to our energy supply, huge increases in funding will be needed. But sharp increases in energy research funding seem unlikely. “There certainly should be [more funding],” says carbon sequestration expert Robert Socolow of Princeton University. Although there are some signs of new money in targeted areas of alternative-energy research both in the United States and elsewhere, “I don't see major increases because of the federal deficit,” says physicist Ernest Moniz, an energy-policy specialist at the Massachusetts Institute of Technology in Cambridge and former undersecretary at the U.S. Department of Energy (DOE). “There are tough fiscal times ahead.” Indeed, in the United States, the anticipated yearlong budget freeze could delay even some of the projects that have been tapped for increases (Science, 15 December 2006, p. 1666).

    These dim funding prospects follow a period of relative decline in energy R&D. Paul Runci of the Pacific Northwest National Laboratory in Richland, Washington, produced an analysis last year of energy R&D investment in 11 nations that together provide 95% of the energy research funding in industrialized countries. Almost all of those countries steadily increased funding until the late 1970s to mid-1980s, when levels tapered off until the mid-1990s. They have remained largely stagnant ever since (see figure). The biggest falls have been in funding of nuclear and fossil fuel R&D (see figure). In some countries, fossil fuel research is less than 10% of what it was at the peak. The exception is Japan, which kept increasing funding until this decade. Runci blames this drop on a decade of low oil prices and oversupply, changing perceptions of nuclear power following the accidents at Three Mile Island in Pennsylvania and at Chornobyl, Ukraine, and public policies that have shifted R&D responsibility from the public to the private sector.

    Industry has so far failed to take up the slack as government money has dwindled. The R&D intensity of the U.S. energy industry is a factor of 10 below the average for American business. The shortage of both public and private funding is having a serious impact on new blood coming into this area of research. “Fields saturate [with older researchers],” says Socolow. “Energy keeps not getting enough new recruits.”

    There are some signs of a revival, however. Runci's report points to energy-conservation research, which grew strongly in the United States and Japan during the late 1990s and into this decade. Runci also singles out the growth of wind energy in Europe. Concern about carbon emissions and a desire to become world leaders in renewables technology spurred the European Union (E.U.) and some national governments to invest in wind energy R&D as well as provide subsidies and tax incentives to energy companies to set up wind farms. The result has been a phenomenal growth in installed wind capacity (see figure) that is now equivalent to 50 coal-fired power stations. The E.U.'s renewables industry now has an annual turnover of $20 billion, half the world market, and employs 300,000 people.


    In his speech last year, Bush voiced new enthusiasm for novel forms of energy technology and announced the launch of his Advanced Energy Initiative. He subsequently proposed a 22% increase in DOE funding for clean energy technology in 2007, focusing on improving the efficiency of cars as well as expanding the use of biofuels and developing fuel cells, and on electricity-generating technologies including clean coal, advanced nuclear power, solar, and wind. These areas received $1.7 billion in funding during 2006, but some may get caught in the anticipated budget freeze.

    One of the most eagerly awaited DOE initiatives is a plan to build two new bioenergy research centers to carry out basic research on microbial and plant systems with the aim of converting inedible plant fibers such as cellulose into ethanol for biofuels. With $125 million over 5 years on the table for each center, universities, national labs, and other institutions are vying to host them. A decision is expected in the summer. The oil company BP has launched a similar scheme, offering $500 million over 10 years to an institution to run an Energy Biosciences Institute (see p. 790). A decision on the site—either in the United States or the United Kingdom—is expected by the time this issue is published. “We've come to realize that the intersection of biosciences and energy is a very powerful one,” says Steven Koonin, BP's chief scientist.

    The European Union is also backing new energy technologies. In the latest 7-year Framework research program—approved late last year—$1.5 billion will be spent on renewables and energy efficiency, an increase of 40% over the previous Framework program and more than 50% of all non-nuclear energy funding. In the United Kingdom, meanwhile, a government energy review completed last summer promised a National Institute for Energy Technologies, with a budget in the region of $200 million per year.

    All of these initiatives are giving energy research a much-needed shot in the arm, but a number of influential reports are calling for increases in energy R&D on an entirely different scale. The G8 meeting of world leaders at Gleneagles, U.K., in July 2005 called on the International Energy Agency (IEA) to map out scenarios for a future clean energy supply. According to the IEA report, published last year, “there is an acute need to stabilize declining budgets for energy-related R&D and then increase them.” Nicholas Stern, a chief economic adviser to the U.K. government, was more specific. In his influential review on the economics of climate change published last October, he forecast that the impact of doing nothing to counter global warming could amount to between 5% and 20% of global gross domestic product, whereas efforts costing 1% of GDP could counter the worst effects. Those efforts would include at least doubling the current amount of energy R&D funding and a fivefold increase in support for the deployment of new technologies.

    An ambitious report released in 2005 by the U.S. National Academies entitled Rising Above the Gathering Storm aimed to identify actions to help the United States compete and prosper in the 21st century. Among a number of recommendations, it advocated the creation of a new agency within DOE, akin to the Defense Advanced Research Projects Agency, to fund energy R&D. This new body, dubbed ARPA-E, should start with $300 million a year and build up to $1 billion annually over 5 or 6 years. A bill to set up ARPA-E was presented to Congress in December 2005 by Representative Bart Gordon (D-TN). Gordon, now the House science committee chair, reintroduced the bill last month. Moniz says that research he and his colleagues have conducted for reports on the future of nuclear energy and coal also suggests that a doubling of research spending is necessary.

    Ultimately, it will be politics at the highest level that will dictate the fate of energy research. If oil supplies are further threatened by geopolitics, or new evidence of human-induced climate change is sufficient to convince the skeptics, energy research may again prosper. But will it continue to do so if the price of oil later drops again? “That remains to be seen,” says Koonin.


    Steering a National Lab Into the Light

    1. Robert F. Service

    Steven Chu is on a crusade to make solar power work. His weapon is Lawrence Berkeley National Laboratory

    Drive past the guardhouse at the entrance to Lawrence Berkeley National Laboratory (LBNL), and you travel about 250 meters on Chu Road before it splits off onto other tentacles of the lab overlooking San Francisco Bay. Despite its short length, it's a fitting entrée. The road is named for 1997 physics Nobelist Steven Chu, who became the lab's sixth director 2 years ago and ever since has been working to steer its research in a new direction. His focus: a sweeping agenda aimed at making solar energy a practical, commercial, and world-changing reality.


    Chu is working to raise awareness and funds to fight climate change.


    Since taking his new job, Chu, 58, has traveled the globe making the case for solar power. Some of the refrains are familiar: retreating glaciers, more-damaging storms, and sea-level rise. Some, though, are less well known, such as how increased water evaporation from soils could jeopardize farming in the Midwestern United States. “Sustainable, carbon-neutral energy is the most important scientific challenge we face today,” Chu is fond of saying.

    On the supply side, Chu says current carbon-neutral energy sources face a host of problems. Wind, nuclear power, and biofuels are unlikely anytime soon to meet all of the needs of a civilization that in 2005 was using energy in all forms at a rate of about 16 trillion watts, with roughly 80% of it coming from fossil fuels. Solar power also has limitations, but the sun beams more energy toward Earth in an hour than all 6 billion of us use in a year. The challenge is finding ways to capture and store that energy that are cheap and efficient.
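    The arithmetic behind that hour-versus-year comparison is easy to check. The sketch below uses the article's 16-trillion-watt consumption rate; the total solar power intercepted by Earth (roughly 174 petawatts) is an assumed outside figure, not from the article, so treat the result as a back-of-envelope estimate.

```python
# Back-of-envelope check: does one hour of sunlight falling on Earth
# exceed a year of human energy consumption?

SOLAR_INPUT_W = 1.74e17  # ~174 PW intercepted by Earth (assumed value)
HUMAN_USE_W = 1.6e13     # ~16 trillion watts, the 2005 rate cited above

energy_one_hour_sun = SOLAR_INPUT_W * 3600            # joules in one hour
energy_one_year_use = HUMAN_USE_W * 365 * 24 * 3600   # joules in one year

ratio = energy_one_hour_sun / energy_one_year_use
print(f"One hour of sunlight / one year of consumption: {ratio:.2f}")
```

Under these assumptions the ratio comes out a little above 1, consistent with the article's claim that an hour of sunlight edges out a year of consumption.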

    To meet that challenge, Chu has launched three separate initiatives to yoke at least part of LBNL research to a solar agenda. The first is a proposal to team up with fellow national labs Sandia and Lawrence Livermore and build one of two $125 million BioEnergy Institutes to be funded by the Department of Energy (DOE) that, over 5 years, will propel advances in nanotechnology and synthetic biology into better ways of making biofuels. Second is to win a $500 million pot put up by the oil giant BP for a biofuels institute (see p. 790). Finally, an initiative called Helios will look beyond bioenergy to include topics such as nanotech-based photovoltaics and fuel-generating catalysts. Although none of the proposals had been funded by the time Science went to press, Chu already has verbal commitments from private sources of about $50 million and up to $70 million from the state of California. “You try to make some rain and get some funding,” Chu says.

    If that funding comes, Chu expects little trouble getting scientists to follow and little opposition within LBNL, because the programs would expand the lab's current $525 million annual budget rather than cut into existing programs. The new programs also wouldn't affect the lab's four national user facilities—a synchrotron, an electron microscopy center, a nanofabrication facility, and a supercomputing center—other than to encourage users to target their research on improving solar power generation. So far, Chu's ambitions are getting a warm reception both inside and outside the lab. “I think it's great,” says physicist Nate Lewis of the California Institute of Technology in Pasadena, adding: “If you need to lay more golden eggs, the only proven way is to buy more chickens.” “[Getting] Chu was a coup,” adds Carolyn Bertozzi, a University of California, Berkeley, chemist, who also directs the Molecular Foundry, DOE's nanofabrication user facility. “He's really been able to inspire people.”

    Asked why he decided 2 years ago to quit Stanford University and move up the road to LBNL, Chu doesn't hesitate. “I was drying up as a scientist and had nowhere to go,” he deadpans before breaking up. In reality, his lab of 13 students and postdocs was thriving. He moved there in 1987 after 9 years at AT&T Bell Labs in New Jersey, where he did the work that won him a share of a Nobel Prize: developing laser-based techniques to cool clusters of atoms to just above absolute zero. He continues to pursue that work, although he's slowly shifting his lab over to work in biophysics. In truth, he says, he was propelled to make the move by his concern over climate change. “It's quite sobering,” Chu says of the scale of the task. “This is a problem we have to address, and we have a limited amount of time to do it. What we do in the first half of this century, we will see the consequences for the next 500 to 1000 years.”

    Chu knows full well that solving this problem is certain to involve government policy and international diplomacy, and it may even depend more on building codes than science. But he seems to thrive on the multifaceted challenge, gliding effortlessly between diverse topics such as how local tax policy in China stops that country from adhering to its own clean-air regulations, whether CO2 pumped to the ocean floor will stay put, and the potential of architectural innovations to reduce the amount of energy used to heat, cool, and light buildings.

    In the end, Chu says, it will take an array of solutions to hold carbon emissions to manageable levels. And they have to start happening fast. Says Chu: “We don't have the option to fail on this one.”


    Eureka Moment Puts Sliced Solar Cells on Track

    1. Dennis Normile

    Stuck with conventional technology, a pair of Australian researchers hit upon a way to take silicon cells somewhere new


    A train ride through the Scottish countryside to historic Edinburgh would not seem an ideal occasion for flashes of scientific insight. But that setting produced a eureka moment for photovoltaics researchers Andrew Blakers and Klaus Weber. While clattering along in a rocking carriage, the two Australian National University (ANU) scientists conceived a new way of making solar cells that promises to reduce the cost of panels by as much as 75%. Instead of trying to make the wafers used in solar cells thinner to save on expensive crystalline silicon, they start with a thick wafer and divide it into thousands of long, thin slices that they dub SLIVERs.

    “The SLIVER cell is a highly original approach, the best way yet suggested of making small, efficient solar cells,” says Martin Green, a photovoltaics expert at the University of New South Wales in Sydney, Australia. A pilot plant in Adelaide is now producing SLIVER cells, and full-scale commercialization could be just 2 years away. The SLIVER cost advantage comes from dramatically reducing the use of expensive monocrystalline silicon, the material of choice for solar cells because of its relatively high efficiency in converting solar energy into electricity.

    Researchers around the world are racing to find ways to minimize the amount of silicon needed. In the traditional approach, they cut round wafers from a cylindrical silicon ingot and add dopants and coatings to the surface. To reduce silicon use, some groups are exploring exotic materials; others are trying to form thin films of monocrystalline silicon on a substrate. Blakers and Weber were trying to push the limits of conventional silicon technology.

    Blakers, 51, has worked on photovoltaics since his student days. He earned his Ph.D. and did his initial postdoctoral work at the University of New South Wales under Green, whose group holds the efficiency record for silicon solar cells. After a stint at the Max Planck Institute for Solid State Physics in Stuttgart, Germany, Blakers set up a silicon photovoltaics research lab at ANU in Canberra. When other researchers began moving beyond silicon, Blakers and his team found they were held back by funding. “We decided to explore what the possibilities were within the constraints we had,” says Weber, who joined the lab as a Ph.D. student in 1993 and is now a senior lecturer.

    The two were pondering those possibilities on that Scottish train in 2000 en route to a renewable energy conference in Edinburgh. “It was a true collaboration,” Blakers says of the moment of inspiration. It took just a couple of months of work back in the lab to produce their first SLIVER cells.

    Their technique starts with a silicon wafer 1 or 2 millimeters thick. They etch slots completely through the wafer, leaving silicon strips connected to the edge of the wafer in something that resembles a circular grill. They then dope, metallize, and coat the grill to produce photovoltaic cells, detach the strips from the wafer, and lay them on their sides for encapsulation. Each strip forms a SLIVER cell 50 to 100 millimeters long, 1 or 2 millimeters wide, and 40 to 60 micrometers thick.

    Processing wafers this way produces cells that contain just one-tenth the silicon used in normal solar cells with the same surface area. And that will cut the cost of finished modules from the current $4.50 per watt to something approaching $1 per watt, Blakers says. Because the process treats both sides of the SLIVERs, they could be used in applications to capture both direct and reflected light. Spaced SLIVERs in a glass sandwich create a transparent panel that could act as window glass while still generating electricity. And because they are thin, SLIVERs are flexible enough to be used in roll-up solar panels.
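The factor-of-ten silicon saving quoted above can be sanity-checked with a rough calculation. In the sketch below, the conventional wafer thickness and saw kerf are assumed typical values, not figures from the article; the sliver thickness is the mid-range of the 40-to-60-micrometer figure quoted above, and etch kerf is neglected.

```python
# Illustrative back-of-envelope comparison of silicon use per square
# metre of panel. Conventional wafer thickness and saw kerf are assumed
# typical values; sliver thickness comes from the figures quoted above.

MICRON = 1e-6  # metres

# Conventional cell: the whole sun-facing area is wafer silicon, plus
# the silicon lost as saw kerf when slicing wafers from the ingot.
conv_wafer = 300 * MICRON   # assumed typical wafer thickness
saw_kerf = 200 * MICRON     # assumed silicon lost per wafer to sawing
conv_si_per_m2 = conv_wafer + saw_kerf  # m^3 of silicon per m^2 of panel

# SLIVER cell: strips are laid on their sides, so the silicon "depth"
# behind each square metre is roughly the strip thickness.
sliver_thickness = 50 * MICRON          # mid-range of the 40-60 um quoted
sliver_si_per_m2 = sliver_thickness     # etch kerf neglected in this sketch

ratio = conv_si_per_m2 / sliver_si_per_m2
print(f"silicon use ratio (conventional/SLIVER): {ratio:.0f}x")
```

Under these assumed numbers the ratio comes out at the factor of ten Blakers describes; different kerf and wafer assumptions shift it a few fold either way.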

    Origin Energy, an Australian natural gas and electricity generator and supplier, licensed the SLIVER technology in 2003 and built a pilot plant in Adelaide that, company spokesperson Tony Wood says, has verified that SLIVERs can be produced reliably and in quantity. Wood says Origin is now in discussions with “a small number” of potential partners to scale up production. The commercial plant they plan to announce later this year will likely be in Europe or North America to be closer to markets and to take advantage of government incentives for alternative energy.

    Wherever SLIVERs are made, “there are important niche applications for this approach, such as in powering portable equipment,” says Green, who is working on a rival technique to reduce silicon usage. But, he adds, “I think a factor of 4 clearly would be an overestimation of any cost advantage.” He notes that handling the 2000 to 3000 SLIVERs created from each wafer poses a manufacturing challenge.

    Blakers foresees cost gains for the SLIVER technology when it moves to mass production. As for handling the SLIVERs, he says they have devised a simple scheme that works “without robots, machine vision, or pick-and-place equipment.” He promises that the proprietary process will be revealed at conferences sometime this year, and it's nothing that couldn't be imagined on a train ride through Scotland.


    How to Make Biofuels Truly Poplar

    1. Eli Kintisch

    Clint Chapple has long reveled in the details of plant biochemistry. Now he's finally getting his hands dirty

    Soft cell. Chapple's work in poplar suggests a cheaper source of ethanol.


    A master of the biochemical minutiae of plants, Clint Chapple never expected to conduct research requiring a diesel excavator and a greenhouse for growing trees.

    But that's where Chapple now finds himself as he explores whether poplar trees could replace corn as a crop for making ethanol for fuel. Over the past 15 years, Chapple, a plant biochemist at Purdue University in West Lafayette, Indiana, has redrawn the map of a key metabolic pathway in plants—work that has won him a solid reputation among academic researchers. Yet plant scientists such as him have long remained second-class citizens in the life sciences funding game. Now, however, with the government embracing biofuels, basic researchers such as Chapple are finding themselves thrust out of their academic cloisters and into the spotlight.

    “It's like they say—the right place at the right time,” says Chapple, 47. But Chapple's onetime mentor Chris Somerville, a biochemist at Stanford University in Palo Alto, California, says Chapple is being modest. In fact, Somerville says, Chapple has continually exploited unanticipated influences on his career—$70-a-barrel crude oil being only the latest. A fascination with plants' “amazing chemical repertoire” led to graduate work in biochemistry for Chapple, who arrived at Somerville's lab in 1990 with no training in genetics. But he quickly learned enough to create mutants of the model species Arabidopsis and develop a promising metabolic method of characterizing polysaccharides in the cell wall (Science, 20 August 1993, p. 1032).

    A related side project led to a discovery in the pathway that creates lignin, a crucial cell wall stiffener and, behind cellulose, the second most abundant polymer on Earth. That find, a description of the enzymatic step that leads to both straight and branched forms of lignin, helps explain why certain plants break down more easily than others. Over the next 15 years, Chapple uncovered key genetic mutations and steps in the pathway, amounting to a “huge” contribution, says chemist and colleague John Ralph of the University of Wisconsin, Madison.

    Lignin is a nuisance to ethanol makers, as it blocks enzymes from getting at the cellulose to break it down to make ethanol. So the U.S. Department of Energy wants Chapple to see how cells modified to make different kinds or amounts of lignin could help poplar compete economically with corn as a feedstock. The first step in the 3-year, $1.4 million project is for Chapple to create the DNA tools with which lignin deposition can be modified. Next, Purdue's Richard Meilan, a transgenic tree expert, will generate transgenic tree saplings, and chemical engineer Michael Ladisch will evaluate the difficulty of recovering cellulose from each. “It dovetails so nicely with the work I've done,” Chapple says.

    The applied nature of the work is a new twist for Chapple, who says he “always assumed someone else would do it.” Experts say the research is not groundbreaking in terms of basic science, but it does carry risks. Chief among them is whether the public will reject transgenic trees. Researchers are buoyed by widespread U.S. acceptance of modified food crops, now common in U.S. grocery products. But remaining European resistance and Greenpeace U.S.A.'s belief that transgenic trees are a “danger in the woods” make supporters such as biochemist Shawn Mansfield of the University of British Columbia in Vancouver, Canada, believe it could be “a hard sell.”

    One part of Chapple's answer is to find metabolic fingerprints for the transgenic varieties the team grows. That information might enable researchers to find similarly valuable trees in nature and breed them. But that points to another challenge: the “nightmare,” as Meilan puts it, of teasing apart the myriad peaks on chromatograms to nail down which part of the fingerprint corresponds to the handful of critical metabolites out of thousands that plants make.

    Colleagues say Chapple has the right makeup for the job, and he has built into the project consultations with experts on the societal and ethical impacts of the work. Despite Chapple's limited experience in working with industry partners, for example, Meilan says Chapple wisely addressed concerns of visiting company officials by emphasizing Purdue's experience with government regulators.

    Regardless of how effectively Chapple's team overcomes the obstacles to making poplar a legitimate fuel crop, one thing is clear: Plant scientists are being lavished with newfound attention and cash. Just as gratifying, says Chapple, are the shifting attitudes of his department's graduate students, who he says have usually viewed research on plants as less attractive than biomedical pursuits. “I never considered plant sciences before,” says Anton Iliuk, a first-year Purdue graduate student in biochemistry. “All of a sudden it feels very exciting.”


    Small Thinking, Electrified Froth, and the Beauty of a Fine Mess

    1. Robert Coontz

    An electrochemist seeking better building materials for power-generating devices hymns the virtues of “high-performance disorder”

    For someone who works for the U.S. Navy, Debra Rolison doesn't mind rocking the boat. Ask her about fuel cells, and she wastes no time telling you that the way most other chemists build their electrodes—stacking regular crystals into structures tens of micrometers in size—is the wave of the past. The future, Rolison predicts, will be both smaller and messier.

    “We've lived too long in the age of masonry, the glop-and-slop approach to making things like batteries and fuel-cell electrode structures,” she says. On the nanoscale at which chemistry happens, however, materials are a far cry from tidy brickwork. They're badly mixed, irregularly structured, shot through with holes. Chemists must harness that irregularity, Rolison says, to build better devices from the bottom up.

    “She makes people think,” says Henry White, an electrochemist at the University of Utah in Salt Lake City. “Even if you don't agree with her, she can really stir things up and make things happen.”

    What has stirred up Rolison, 52, for the past 36 years is chemistry. She says her passion ignited when she studied the subject in high school and realized that she had “an instinctive understanding of redox reactions.” She sailed through college in 3 years (“almost two”) and in 1980 received her Ph.D. from the University of North Carolina, Chapel Hill. Uninterested in an academic career but wary of chemical companies' reputation for shunting women scientists into management jobs, she signed on as a research chemist at the Naval Research Laboratory (NRL) in Washington, D.C. She has been there ever since. Now, as head of NRL's Advanced Electrochemical Materials Section, Rolison supervises two staff scientists, four postdocs, and usually a couple of undergraduate researchers. “We're small, but we have our fingers in a lot of pies,” she says.

    Rolison's team first broke the crust of energy research in the mid-1990s, with a Defense Advanced Research Projects Agency project to create new high-performance catalysts for methanol-powered fuel cells. Catalysts promote reactions at a fuel cell's two electrodes: the anode, which strips protons and electrons from the cell's fuel (hydrogen or a hydrogen-rich chemical), and the cathode, which splits oxygen molecules from air so the cell can form its main waste product, water. In combination, the reactions create an electrical current that can be harnessed for power. Better catalysts mean more power.

    New wave. Rolison says unconventional materials may hold the key to higher-powered fuel cells.


    In an early project, Rolison's lab scrutinized nanoscopic bits of platinum-ruthenium, a popular electrocatalyst for direct oxidation of methanol. The material was “a mess,” Rolison says: a carpet of ruthenium oxide festooned with water molecules, overlying a platinum-rich core. Yet it was a better catalyst than orderly platinum-ruthenium alloy. “Nature was creating an optimal intersection for adsorption and charge transport,” Rolison says. Since then, she has kept an eye out for cases of “high-performance disorder.”

    That fascination with messiness drew Rolison to aerogels, rigid but incredibly lightweight materials riddled with nanoscopic pores. Rolison and colleagues brew their gels so that the pores (which make up about 80% of the materials) stay connected, creating labyrinths of tunnels through which molecules can pass almost unhindered. A cubic centimeter of aerogel may contain a total surface area approaching 100 square meters—just the sort of “real estate” you need in catalytic electrodes, Rolison says. Her lab and others are working on lining the minute tunnels with electrocatalysts. If they succeed, an aerogel electrode would form a three-dimensional (3D) matrix unlike anything currently in use.

    “When this was first proposed, it took a lot of criticism as being pure science fiction,” says White, who also investigates 3D electrochemical systems. But now, he says, mainstream energy researchers are taking Rolison's ideas seriously. One is Ralph Nuzzo, a chemist who researches fuel cells at the University of Illinois, Urbana-Champaign. If anything, he warns half-jokingly, a 3D energy device risks being too effective: “You have to find a way to make it work without being a bomb.”

    Off the job, Rolison has been known to lob a few metaphorical bombs herself. Since 1990, she has campaigned forcefully on issues affecting women in science, including childcare for university researchers and efforts to close a statistical “gender gap” in the upper ranks of scientific leadership. (Both Rolison and NRL representatives stress that she undertakes these activities on her own time and without the lab's involvement.) In 2000, in an editorial in Chemical & Engineering News, she called for the U.S. government to enforce equality in university chemistry faculties by invoking Title IX, a law that withholds federal funding from schools found to discriminate against women. Her take-no-prisoners approach makes some colleagues uneasy, but Rolison says she is “unrepentantly unapologetic” about it. “I've yet to understand the squeamishness by so many scientists and administrators with regard to obeying the law,” she says.

    When it comes to the laws of nature, Rolison is more patient. It could take years, she acknowledges, for her nanoscale approach to electrochemical devices to bear fruit. “We're at the very bottom of the mountain for fuel cells,” she says—but that just means the best scenery is still ahead.

    A Fuel for Small Farms

    1. Eli Kintisch

    When she was growing up on a 12-hectare farm in Monticello, Illinois, Emily Heaton bottle-fed orphan calves and helped move cattle through chutes. Now 28, Heaton is an agronomist with Ceres, a plant and biotechnology company in Thousand Oaks, California, and she sees her job in developing new biofuel feedstocks as a way to save a dying breed: family-owned farms.

    Specifically, Heaton focuses on crops such as Miscanthus, an incredibly hardy grass that grows to a height of 4 meters and has been widely studied as a biofuel feedstock in Europe but not in the United States. A research project Heaton conducted as an undergraduate at the University of Illinois, Urbana-Champaign, in 2001 showed that the grass could outcompete current American feedstocks of choice, including switchgrass. “It was a very simple project, but it had never been tried,” she says of the effort, which the state of Illinois funded in an effort to encourage biofuels.

    The work has led to a 40-person project on biomass crops at the university and launched Heaton's career selecting breeds for Ceres to eventually sell. She hopes that work could give small farms the chance to produce crops such as Miscanthus for local biorefineries. “There's plenty need for any producer to contribute,” says Heaton. She is also trying to create new varieties of feedstocks that can survive tough conditions. “I really get to make new plants,” she says of her job. Her father, John Caveny, is also getting in on the act. He maintains a 0.6-hectare stand of Miscanthus that he uses to calculate carbon flux for farmers seeking credits for carbon sequestered in plants or soil.

    Wiring Up Europe's Coastline

    1. Daniel Clery

    Critics of wind power often point out that when the wind doesn't blow, the current doesn't flow. An Ireland-based wind-power company has come up with a novel solution to that problem: a subsea electricity grid linking offshore wind farms all around the coasts of Europe, from the Baltic and North seas to the Mediterranean in the south. With wind farms far enough apart to be in different weather systems, it will always be windy somewhere around the grid.

    Airtricity, a developer and operator of wind farms in and around Ireland and the United Kingdom, including the 520-megawatt Arklow Bank project under construction in the Irish Sea, came up with the idea of a European Offshore Supergrid in 2001. There are now many offshore wind farms built or planned along the coasts of Germany, the Netherlands, the U.K., and other countries. These generally only serve domestic markets, but the Supergrid proposes hooking wind farms to a grid that extends throughout the North Sea, into the Baltic, and then, via the English Channel, the Irish Sea and the Atlantic coast of Ireland. From there, the grid would extend south across the Bay of Biscay and then over the Iberian peninsula into the western Mediterranean.

    As a test bed for the technology, Airtricity is proposing to build a huge wind farm in the southern North Sea, to supply electricity to the U.K., the Netherlands, and Germany. Dubbed the 10GW Foundation Project, it would consist of 2000 turbines, each capable of generating 5 megawatts, and would cover 3000 square kilometers. Its 10 gigawatts would be enough to power more than 8 million homes. “Good wind resource, shallow water, high demand, countries interconnected—it all comes together in the southern North Sea,” says Chris Veal, head of the Supergrid project with Airtricity.
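The quoted figures hang together under reasonable assumptions. In the check below, the capacity factor and per-home demand are illustrative guesses, not numbers from the article:

```python
# Sanity check of the Foundation Project numbers: 2000 turbines at 5 MW
# each, and the claim that 10 GW could power more than 8 million homes.
turbines = 2000
mw_per_turbine = 5
nameplate_gw = turbines * mw_per_turbine / 1000  # 10 GW nameplate

# Assumed values (not from the article): a typical offshore wind
# capacity factor and average continuous electrical demand per home.
capacity_factor = 0.40   # fraction of nameplate delivered on average
avg_home_kw = 0.5        # average demand per home, kW

avg_output_kw = nameplate_gw * 1e6 * capacity_factor
homes = avg_output_kw / avg_home_kw
print(f"{nameplate_gw:.0f} GW nameplate -> roughly {homes/1e6:.0f} million homes")
```

With those assumptions the project's average output of 4 gigawatts supplies about 8 million households, matching the claim.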

    Airtricity is currently seeking support for the project from the three national governments as well as the European Union. Apart from providing zero-carbon sustainable power, the Supergrid has another advantage: It would provide a ready-made way of moving power around the E.U. Most electricity in Europe is generated and distributed by national power companies. There is little trade across borders—something that the E.U. is trying to encourage. “The European Commission and members of the European Parliament all thought it was a good idea. It ticks so many boxes,” Veal says. Building offshore is expensive, points out Graeme Cooper of the British Wind Energy Association, but although the economics of the Supergrid might not stack up now, the cost of energy is increasing. “There will be a point when someone will say, ‘This is a no-brainer.’”


    Hydrogen Economy? Let Sunlight Do the Work

    1. Robert F. Service

    Looking for a clean way to produce hydrogen, Daniel Nocera wants to run a fuel cell backward, powered by sunlight

    Hydrogen seems like an ideal fuel. Combine it with oxygen in a fuel cell, and it produces water and electricity, without the noxious pollutants that accompany the burning of most fossil fuels. But it has a dark side: Although there is plenty of hydrogen around, it's bound with other atoms in complex molecules, and it takes large amounts of energy to strip it free. Moreover, most hydrogen today is made from fossil fuels, releasing vast quantities of carbon dioxide in the process. Daniel Nocera is hoping to change that. The Massachusetts Institute of Technology (MIT) chemist is looking for new ways to use sunlight to split water into oxygen and hydrogen. In essence, Nocera is trying to run a fuel cell in reverse. “Why can't we reengineer the fuel cell backwards?” he asks. “Conceptually, it's very easy.”
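Thermodynamics sets a hard floor on what any such reverse-fuel-cell scheme must supply. A minimal calculation from standard textbook values (Gibbs free energy of liquid water at 25°C) gives the reversible voltage needed to split water:

```python
# Minimum (reversible) voltage to electrolyze water, from dG = n * F * E.
# Standard textbook values at 25 C: splitting one mole of liquid water
# into H2 and 1/2 O2 costs 237.1 kJ of Gibbs free energy and transfers
# two electrons per H2 molecule produced.
F = 96485.0       # Faraday constant, coulombs per mole of electrons
dG = 237.1e3      # J/mol, Gibbs energy of the water-splitting reaction
n = 2             # electrons transferred per H2 molecule

E_min = dG / (n * F)
print(f"minimum electrolysis voltage: {E_min:.2f} V")  # ~1.23 V
```

Real cells need substantially more than 1.23 volts because of kinetic losses at the electrodes, which is exactly the overpotential that better catalysts, like those Nocera seeks, are meant to shrink.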

    His quest had a colorful start. Like many fellow Grateful Dead fans with their tie-dyed T-shirts, Nocera, 49, was fascinated by colors when he was growing up—not by the colors themselves, however, but by the processes that lie behind them. “I wanted to understand the colors of materials,” Nocera says. And that sparked his interest in chemistry. In between going to some 80 Dead shows, he learned that the colors we see are driven by which frequencies of light are absorbed and reflected by materials. When that light is absorbed, it kicks electrons out of their relaxed state, sending them into a dancing frenzy. As they return to their relative rest, they give off heat. By the time Nocera was a graduate student at the California Institute of Technology in Pasadena, he was wondering how he could design systems to capture electrons excited by sunlight and use them to make fuel. “Right out of the blocks, I was interested in solar energy conversion,” Nocera says. Or, as Deadheads might put it, finding a way to hold onto the light.

    Plants do this by photosynthesis: Two massive protein complexes split water and carbon dioxide and forge new energy-storing bonds in sugar molecules. At the heart of the process is the harnessing of electrons excited by sunlight. Nocera is looking for novel catalysts that will perform some of these tasks more efficiently.

    Uphill battle. Nocera hopes to find new catalysts that harness sunlight to make hydrogen fuel.


    Five years ago, he and then-graduate student Alan Heyduk designed a ruthenium-based catalyst that uses light energy to strip protons and electrons from an acid and stitch them together to make H2 molecules. (They're still trying to find a cousin that does the same thing with water rather than an acid.) But that was the easy step. It's even more difficult to grab lone oxygens liberated by splitting water and link them together to form O2—a step that would be needed to complete the water-splitting reactions and maximize the efficiency of the process. “That requires a certain organization,” says Heyduk, now a chemistry professor at the University of California, Irvine. And it's one of the reasons there's been so little progress in recent decades in devising catalysts that turn out O2.

    But there could soon be a new glimmer of progress. Nocera says his group is close to publishing results with a new ruthenium-based catalyst that absorbs light and uses the energy to stitch oxygen atoms together to make O2. The new catalyst isn't very efficient yet. But it works the way he expected it to, Nocera says, which gives him confidence that they're beginning to understand how to control the motion and bonding of different atoms with their new catalysts. Although Heyduk says he hasn't heard the details of the new catalyst yet, “any system where they can turn water to O2 is a huge advance because of the difficulty of the problem.”

    Nocera acknowledges that even this and other advances his group has made are baby steps compared to what's needed for an industrial version of the technology. One problem is that Nocera's catalysts thus far contain ruthenium, a rare and expensive metal. But Nocera is hoping to find cheaper materials that work as well. He recently launched a new project with MIT colleagues to synthesize a multitude of novel metal oxide compounds for testing as possible water-splitting catalysts.

    A handful of other groups around the world are engaged in related efforts. One approach pursued by researchers at the National Renewable Energy Laboratory in Golden, Colorado, for example, uses semiconductors to absorb sunlight and create electrical charges that are then used to split water. Although this approach is currently more efficient than Nocera's catalysts, the semiconductors needed are still too costly to be commercially viable. At this point, Nocera argues, all such strategies are worth pursuing. “I'm not sure what the winner will be that is able to make energy without adding extra CO2 to the atmosphere,” Nocera says. “A failing of energy R&D for the last 30 years in the United States has been that it has been treated as an engineering problem, with a little ‘r’ component and a big ‘D.’ There needs to be an ‘R’ bigger than the ‘D.’ There are whole new areas of science and engineering that need to be discovered to solve this problem.”

    Still, Nocera is convinced that the broad community of researchers now being inspired to find a carbon-neutral source of energy will succeed. “I'll guarantee it,” he says. “I think science can deliver a cheap and efficient solution. I believe it deeply.”


    Guiding an Oil Tanker Into Renewable Waters

    1. Daniel Clery

    Steven Koonin gave up academic life in California for the harsh realities of the energy industry, and he's having a whale of a time

    Going green. BP wants some of its future fuels to come from a field, not a well.


    LONDON—Steven Koonin's career took a surprising turn in 2004. After nearly 3 decades as an academic theoretical physicist, including 9 years as provost of the California Institute of Technology (Caltech) in Pasadena, he pulled up his southern-California roots and moved to London to become chief scientist of the oil company BP. But his new role is not all about oil. BP was the first major oil company to acknowledge publicly that people may be causing climate change. Now, committed to move “beyond petroleum,” it is complementing oil exploration with investment in solar and wind power, clean coal technology, and biofuels. Koonin's job is to help allocate BP's $500-million-a-year research funding, plot its technology strategy, and generally evangelize about the energy challenges facing the world. “Sometimes I feel like I've got the most wonderful job in the world,” Koonin says.

    Alive to the challenge. Koonin wants to “do biofuels right.”


    Speaking in BP's smart headquarters in St. James's Square, Koonin, 55, reflects on a career that has also taken him into climate research and science advice. “I like finding out about the way the world works, and … I like learning new things.” He's a quick study: Colleagues and energy-policy experts are impressed by his knowledge of what is essentially a new field to him. “He's risen to the challenge and has enormous influence in the company,” says physicist Ernest Moniz of the Massachusetts Institute of Technology (MIT) in Cambridge, a former Department of Energy undersecretary who also sits on an advisory panel for BP. “He's well past the hump on the learning curve,” says solar physicist Philip Goode of the New Jersey Institute of Technology in Newark.

    Koonin was born and raised in New York City, educated at Caltech and MIT, and joined the Caltech faculty in 1975 specializing in nuclear and many-body theory. Computers were developing rapidly at the time, and Koonin found “lots of things to apply computing methods to.” At one point, he says, a representative from IBM offered him a new device called a personal computer to see if he could find uses for it. In the mid-1980s, he got involved in science advice to government, eventually joining JASON, an independent group of scientists that advises the government on science and technology.

    Some work for JASON led to the Earthshine project, an effort to monitor Earth's reflectance by measuring how much earthlight falls on the moon's dark face. “Earthshine helps us to understand the role of clouds in climate,” says Goode, Koonin's collaborator.

    After he became Caltech's provost in 1995, Koonin oversaw a major overhaul of the institute's biological sciences and expansion of neurosciences, as well as its involvement in projects such as the Thirty Meter Telescope. But in 2003 when a colleague told him BP was looking for a chief scientist, he decided to take the plunge. “I wanted to find out how the private sector works,” he says, and energy “was going to be interesting.”

    Koonin says he spent his first year and a half in the job learning about energy, a process that changed his views. “I was more skeptical about climate change a few years ago. Now I've come round more toward the IPCC view.” (The Intergovernmental Panel on Climate Change has concluded that temperatures are rising in part as a result of human activity.) Like most energy pundits, he sees no silver bullet that will save the planet from climate change, but “some ammo has a bigger caliber than others.” He advocates large-scale efforts in carbon sequestration at fossil fuel-burning plants, as well as a new generation of nuclear power stations. And he says it would be “irresponsible” not to investigate other ways to deal with global warming, such as geoengineering: increasing Earth's reflectivity by pumping material into the upper atmosphere.

    One area claiming much of his attention is biofuels. Current biofuel efforts, he argues, are strapped onto agricultural food production and are “not optimal.” Koonin wants to galvanize geneticists, biotechnologists, agricultural scientists, engineers, and others to “do biofuels right.” BP has put up $500 million over 10 years to create an Energy Biosciences Institute (EBI). Five universities are vying for this prize, and the winner should be announced by the time this issue is published.

    This initiative has won plaudits. “Koonin thought hard about how to structure the EBI, and it will have a lot of impact,” says carbon sequestration expert Robert Socolow of Princeton University. “BP has made a commitment to go big in energy biosciences. I doubt this would have happened without Steve Koonin,” says Moniz.

    Koonin is pleased with the buzz the EBI has caused. “Plant geneticists are talking to chemists and engineers. … Researchers are coming alive to the challenge,” he says.


    Treading the Nuclear Fuel Cycle Minefield

    1. John Bohannon

    Tariq Rauf has the unenviable job of making IAEA's international fuel bank work. And the clock is ticking

    VIENNA, AUSTRIA—Nuclear weapons capability could spread to as many as 30 more countries in the coming decades if the trade in nuclear fuel continues on its present course, according to Mohamed ElBaradei, director general of the International Atomic Energy Agency (IAEA). But this frightening scenario might be avoided if the “haves” could agree on a better scheme for sharing fuel production with the “have-nots.” A number of proposals have now been put forward to allow countries to use nuclear energy without acquiring centrifuges of their own for enriching uranium.

    Among these is the establishment of an IAEA-controlled international fuel bank from which all countries could draw. That plan got a boost last year from the U.S.-based Nuclear Threat Initiative (NTI), an independent group backed by U.S. billionaire Warren Buffett. The NTI pledged $50 million to set up the bank, as long as IAEA secures another $100 million, or the equivalent in nuclear fuel, by September 2008.

    So far, no one has put money, or fuel, on the table, and the whole idea remains intensely controversial. “Whether the U.S. would actually place its [nuclear] material under full IAEA control remains to be seen,” says Frank von Hippel, a nuclear policy expert at Princeton University. Matthew Bunn, a nonproliferation specialist at Harvard University, says the bank has “a better-than-even chance” of being set up. Others, however, are not happy about the terms being offered. “Forgoing uranium enrichment in order to obtain security of supply is not an acceptable option for many non-nuclear countries,” says José Goldemberg, a nuclear fuel cycle expert at the University of São Paulo, Brazil.

    At the center of this storm sits Tariq Rauf, the 55-year-old head of IAEA's Verification and Security Policy Coordination section and coordinator of the IAEA fuel-bank project. Science met with Rauf, a Canadian with Pakistani parents, in his office at IAEA headquarters here in the Austrian capital.

    Hot seat. Rauf needs more backers for the fuel bank or his funding disappears.


    Q: Why is a fuel bank needed?

    This whole thing started in the fall of 2003 when our director general [Mohamed ElBaradei] drew attention to the fact that nuclear enrichment and reprocessing technology are in too many hands. Today, there are eight to 10 countries with the capability to enrich uranium and about the same number that can reprocess spent fuel to make plutonium. The question is whether fuel production will be restricted to these countries or whether new ones will enter the market. The issue is that the same technology can be used both for nuclear energy and a nuclear weapons program.

    Q: Why would a nuclear hopeful nation want to enroll?

    For one thing, enrichment and reprocessing are very expensive activities. Setting up an enrichment plant isn't economical unless you have eight to 10 nuclear power reactors. Some countries do not need so many. This brings up the conundrum: Do you make or do you buy [nuclear fuel]? So the [fuel bank] idea is to have a system whereby [countries] first go to the market to buy fuel, and if they are unable to because of political reasons, then they would come to these assurance-of-supply mechanisms. It's like if you have bought a ticket from an airline and that airline company goes belly-up, another airline will honor that ticket.

    Q: Besides the political differences, are there technical challenges?

    The challenge is to have it be multinational without a transfer of technology. For example, if you had six countries taking part, the enrichment technology might be coming from the Europeans, and [they] would run the technology. The other countries are part of the management and operational side.

    Q: So scientists and technicians from every country would not be involved?

    They could be involved, but they wouldn't all be sitting in the [enrichment] cascade halls. None of these multinational schemes envisions the expertise of enrichment or reprocessing being transferred to countries that don't have these technologies already. It's as if you bought shares in a company like Toyota. You're interested in the product, which in this case is the enriched uranium coming out. You really don't need to know how the production line works.

    Q: What role is IAEA likely to play?

    One of the criticisms is that this is a grand plan from the IAEA to expand and be a supercontrolling agency. … But we don't want to set up an empire. If we do set up an IAEA fuel bank, we would likely contract it out to industry.

    Q: What might a nuclear renaissance mean for nuclear science?

    Enrollment in nuclear science at universities has been declining, which has left a smaller and smaller pool of nuclear-educated people available. That has also made life difficult for [the IAEA] because we need inspectors and other staff with nuclear expertise. Many of us hope that a nuclear renaissance will mean that nuclear will no longer be associated with being unsafe, and that this will encourage students. The world certainly needs more people with a nuclear science education.

  26. NORWAY

    A Nuclear Demonstration Project?

    1. Daniel Clery

    Egil Lillestøl is a man with a rather unusual mission: He wants his homeland of Norway to take the lead in developing a new form of nuclear power. Norway is Europe's largest petroleum exporter, from its North Sea oil and gas fields, and Lillestøl, a physicist at the University of Bergen, believes the country needs to do something about its carbon emissions. Norway has little experience with nuclear power but has one of the world's largest reserves of thorium. Lillestøl says Norway should pioneer a new, inherently safe form of nuclear reactor called an energy amplifier that runs on thorium. “It would be a good thing to have other [options] to stand on,” Lillestøl says.

    Carlo Rubbia, a Nobelist and former director-general of Europe's particle physics lab CERN, championed the idea of the energy amplifier in the 1990s, and CERN researchers developed a design and tested some of the key ideas. A conventional fission reactor holds enough fissile material for a nuclear chain reaction to take place; neutron-absorbing rods ensure that the reaction doesn't run out of control, although this always remains a risk. The energy amplifier doesn't have enough fissile material to sustain a chain reaction. Instead, an accelerator fires high-energy particles into the fuel, prompting a cascade of fission reactions and producing heat. The amount of heat is proportional to the intensity of the beam, and the accelerator can be designed so that the amplifier can never overheat. Although the amount of waste produced is expected to be low, particle accelerators aren't cheap, and one with the necessary power has never been built.

    Lillestøl wants Norway to pioneer this form of energy by funding and hosting a prototype—at a cost of about €550 million—and has made it a personal crusade to win over the Norwegian public and government. Lillestøl says he makes two or three presentations a week. Although the government is wary of nuclear power, after a debate in the national assembly, the energy minister called for an in-depth study. Norway is in a unique position to undertake such an enterprise because it has been squirreling away oil revenue and has now amassed a fund of some $250 billion.

    CERN's Jean-Pierre Revol, who worked on the energy amplifier at CERN, says that Lillestøl has made “a lot of political progress” in Norway. Renewed interest in nuclear power is generating curiosity about this technology, Revol says: “If it starts to fly, everyone will want to be part of it.”

  27. Photovoltaics in Focus

    1. John Bohannon

    NEGEV DESERT, ISRAEL—What looks like an upside-down umbrella made of mirrors is the future of renewable energy—at least according to its creator, David Faiman, a physicist who directs the Ben-Gurion National Solar Energy Center here. Photovoltaic cells have been around for decades, but they've never been competitive with fossil fuels. Faiman claims to have found a way to slash the price. “This technology is a real contender as a solution to the world's energy problem,” says physicist Robert McConnell of the National Renewable Energy Laboratory in Golden, Colorado.

    The secret ingredient is perched at the focus of his 10-ton reflector: a square grid 10 centimeters across called a concentrator photovoltaic (CPV) cell. Instead of spreading solar panels across a broad area to capture photons, Faiman uses a reflector to concentrate the light 1000 times onto a small target. Traditional silicon-based solar cells can't handle the heat, but a gallium arsenide-based cell developed by a team at the Fraunhofer Institute for Solar Energy Systems in Freiburg, Germany, “actually works far more efficiently at higher temperatures,” says Faiman. By using a concentrating reflector, the system makes best use of its most expensive component, the CPV cell, which Faiman estimates can reach 40% efficiency at converting sunlight to electricity. With this system, Faiman believes he can build a power plant for less than the magic number of $1000 per kilowatt of electrical capacity. “Getting the price that low is feasible, but only on a large scale,” says McConnell, “and there's a long way to go from this stage.”

    Large is exactly the scale Faiman has in mind: spreading 20,000 CPV cells over an area of 12 square kilometers to generate 1 gigawatt. With mass-produced CPV cells, Faiman estimates the cost at $1 billion. “Considering the savings, the system can pay for itself within 2 decades,” he says. The team is hoping to make it happen sooner by increasing the efficiency of the CPV cell, for example by adding extra layers of solar cells that capture a broader range of wavelengths.
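    The plant-scale figures quoted above are internally consistent, as a quick back-of-the-envelope check shows (all inputs are the article's own numbers; the derived quantities are simple arithmetic, not additional claims):

    ```python
    # Sanity check of the quoted plant-scale figures: $1 billion,
    # 1 gigawatt, 20,000 CPV cells, 12 square kilometers.

    plant_cost_usd = 1e9      # estimated cost with mass-produced CPV cells
    plant_capacity_w = 1e9    # 1 gigawatt
    n_cells = 20_000
    land_km2 = 12

    cost_per_kw = plant_cost_usd / (plant_capacity_w / 1e3)
    power_per_cell_kw = plant_capacity_w / n_cells / 1e3
    land_per_cell_m2 = land_km2 * 1e6 / n_cells

    print(cost_per_kw)        # 1000.0 -> exactly the "magic number" of $1000/kW
    print(power_per_cell_kw)  # 50.0 kW of capacity per CPV unit
    print(land_per_cell_m2)   # 600.0 square meters of desert per unit
    ```

    The $1 billion-per-gigawatt estimate lands exactly on the $1000-per-kilowatt threshold Faiman cites.
    
    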


    Rethinking Mother Nature's Choices

    1. Robert F. Service

    Jay Keasling believes ethanol is a poor biofuel. So he's going to get microbes to make something better

    There are good reasons gasoline has been king of the road for more than a century: It packs lots of energy for its volume, and it's stable, transportable, and noncorrosive. Alternative fuels, like ethanol, have a hard act to follow. Unlike gasoline, which is a collection of medium-length hydrocarbons, ethanol is a short-chain alcohol, sporting a less energetic carbon-oxygen bond alongside some of the power-packing carbon-hydrogen bonds of gasoline. It delivers about 30% less energy by volume than gasoline. It's still useful as a gasoline additive for cars, but it can't do it all. “We're not going to be putting much ethanol in planes,” says Jay Keasling, a chemical engineer at the University of California, Berkeley, and the Lawrence Berkeley National Laboratory. “We probably wouldn't choose [ethanol] as a fuel if nature didn't give it to us.”
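    The "about 30% less energy by volume" figure can be reproduced from typical handbook energy densities (the two values below are common textbook approximations, assumed here rather than taken from the article):

    ```python
    # Rough check of ethanol's volumetric energy deficit versus gasoline,
    # using approximate handbook values: gasoline ~34 MJ/L, ethanol ~24 MJ/L.

    gasoline_mj_per_l = 34.0
    ethanol_mj_per_l = 24.0

    deficit = 1 - ethanol_mj_per_l / gasoline_mj_per_l
    print(f"{deficit:.0%}")   # 29% -> consistent with the ~30% quoted above
    ```

    The shortfall traces back to the chemistry the article describes: ethanol's oxygen-bearing alcohol group simply stores less energy per liter than gasoline's all-hydrocarbon chains.
    
    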

    Interior designer.

    Keasling is betting that synthetic biology will allow him to make microbes that turn out transportation fuel.


    But, says Keasling, “we don't have to accept what nature has given us.” Keasling is trying to force nature to reconsider her options, and his tool of choice is synthetic biology, in which researchers add entire biosynthetic pathways to the genomes of organisms, or strip them out, to get the organisms to produce better drugs, materials, or—perhaps, one day—fuels. Keasling is one of the discipline's leading advocates and practitioners. And he's betting that yeast and other microbes fitted with novel pathways could produce long, oil-like hydrocarbons, or shorter versions called alkanes that could be stitched together through simple chemical processing.

    No one has been able to do this yet, but Keasling and his students are working on the problem. “Jay has a good vision,” says Chris Somerville, a plant biochemist at Stanford University in Palo Alto, California, and co-founder of LS9, a synthetic biology startup in San Carlos, California. “In about 15 years, we will be making biofuels other than ethanol, and synthetic biology will be what makes that possible,” he predicts.

    Brazil derives about one-third of its transportation fuel from ethanol made from sugar cane. But in the cooler U.S. climate, corn is king. Keasling, 42, grew up on his family's 200-hectare corn and soybean farm in Harvard, Nebraska, and he still looks like he belongs there, with his close-cropped hair and barrel-like chest. He guffaws when asked whether he considered staying to become the fifth generation of his family to work the farm. “No. The stork dropped me in the wrong place,” he quips. Keasling slipped away to the University of Nebraska, Lincoln, then on to a Ph.D. at the University of Michigan and a postdoc with Stanford University Nobel laureate Arthur Kornberg. “It's really hard to make a profit” growing corn, Keasling says, explaining that the price per bushel has remained largely unchanged since the late 1940s. That's prompted farmers to continually try to improve their yields, which in turn has produced a glut of corn. That glut, together with high oil prices, has suddenly made corn ethanol more economically viable. “You get off the plane in Nebraska, and it smells like a brewery,” he says.

    In his effort to beat out ethanol, Keasling won't be starting from scratch. He is already well known for producing an antimalarial drug called artemisinin using synthetic biology. The molecule, a complex double-ring structure, is normally derived from a plant. But doing so is expensive. So, over the past 6 years, Keasling's group has inserted more than a dozen genes into Escherichia coli and yeast, setting up a molecular assembly line that allows the bugs to churn out the drug. The strategy not only works, but improvements over the years have also boosted the artemisinin output 10-million-fold. Keasling hopes further improvements will make the drug cheap enough to be widely affordable in developing countries, potentially saving millions of lives. Keasling's feats with artemisinin earned him the title of Discover magazine's Scientist of the Year in 2006.

    Now, Keasling wants to go one better, swapping out some of the ring-building enzymes he introduced into microbes in favor of others that can construct linear hydrocarbons. “We built this platform to produce this hydrocarbon [artemisinin],” Keasling says. “We can remove a few of the genes to take out artemisinin and put in a different hydrocarbon. It's that simple.”

    At least on paper. Even if the microbes can produce such fuels, Keasling acknowledges that won't be the end of the story. Despite the impressive improvements in getting bugs to churn out an antimalarial compound, fermentations still typically yield only a couple of dozen grams of it per liter. That could be a big problem for a transportation fuel that has to be not only cheap but also wildly abundant. “This is massive,” Keasling says. “This would require a scale-up of fermentation never seen before.”

    As a result, Keasling argues that synthetic biology must also push forward on other routes to alternative fuels. One such idea is to reengineer plants' intertwined networks of lignin and cellulose to make it easier for microbes to break them down and convert them to ethanol. Another is to boost the ability of microbes to make and tolerate ethanol. “This is an exciting time to be in the energy research area,” Keasling says. “The biology is just getting to the point where it has something to offer.”


    Former Marine Seeks a Model EMPRESS

    1. Eli Kintisch

    To test nuclear reactor designs, computational physicist Rich Martineau is creating a sophisticated new simulation from the ground up

    New build.

    Martineau wants to “start at the bottom” of nuclear modeling.


    IDAHO FALLS, IDAHO—Whether he's chewing tobacco or driving his battered '78 Chevy pickup, Rich Martineau stands out in a crowd. The computational physicist aggressively protects his status as an outsider, keeping the uninvited out of his cubicle at the Idaho National Laboratory (INL) in Idaho Falls with a black retractable shade. And the tagline on e-mails from the former U.S. Marine and president of a local gun club reads: “Rome did not create a great empire by having meetings. They did it by killing all those who opposed them.”

    But in September, Martineau arranged and hosted a decidedly communal gathering, a 2-day, invitation-only conclave on modeling nuclear reactors. Among the 44 participants were experts in applied math, supercomputing, and nuclear engineering, as well as top modeling researchers from the aerospace and nuclear weapons fields. “There was excitement in the air,” says Columbia University computational mathematician David Keyes, who adds that the quality of assembled academic firepower “surprised” him. “This Marine image he cultivates is tempered by the realization that you need consensus.”

    What prompted Martineau, 47, to soften his independent nature is his desire to build a sophisticated digital model of what goes on inside a nuclear reactor. Such a model would be “definitely needed” to jump-start the U.S. nuclear energy industry and would be a major improvement over the much older models currently used by nuclear engineers, says Vincent Chan of General Atomics in San Diego, California.

    The goal of the new reactor model Martineau is developing at INL is to allow engineers, government regulators, or plant operators to test novel design concepts using supercomputers instead of costly prototypes. The older models mostly depict antiquated water-cooled, low-temperature uranium reactors, whereas Martineau wants to help industry simulate more innovative designs, including gas-cooled reactors that work at higher temperatures and utilize different fuels such as recycled nuclear waste. “You have to know where your uncertainties come from so that you know [which design decision] matters,” says INL nuclear research chief Phillip Finck. More elaborate virtual reactors could also simulate accident scenarios in order to train workers—what INL modeler Glen Hansen calls the “Xbox concept” after the popular video game system.

    Existing reactor computer models haven't been overhauled much since the heyday of the U.S. nuclear enterprise in the 1970s and 1980s. Back then, computational physicists devised simulation programs, including one dubbed RELAP, that tried to depict conditions within nuclear reactors under normal and accident scenarios. “Nuclear engineering was, like aerospace, at the cutting edge [in modeling] at the time,” says Keyes. But after what Martineau calls “Jane Fonda and Three Mile Island,” government research funds dried up; Martineau's work in the 1990s modeled flames and floods, not nuclear reactors. As a result, whereas airplane builders now have fluid dynamics models so robust that they can build whole jets on computers and use fewer wind-tunnel tests, nuclear engineers still depend on crude, 25-year-old computer programs.

    Three years ago, Martineau proposed developing a new computational model for reactors to INL manager Kathryn McCarthy. With scant money available from Washington, D.C., McCarthy helped persuade INL chiefs to provide some initial funds for the effort. As a wry sign of thanks, Martineau has dubbed the emerging model Enhanced Multi-Physics Reactor Simulation Suite (EMPRESS). “Kathy had the confidence in me when no one else did,” he says of his boss. “After 15 years of stagnating, I exploded into this thing.”

    Martineau spent the summer of 2004 at Argonne National Laboratory learning about the powerful parallel computing techniques the new reactor model would need. He also began assembling a team. Along with two INL modelers with backgrounds in nuclear engineering, the effort has attracted Dana Knoll and Hansen, both accomplished computational physicists recruited from the strong modeling corps at Los Alamos National Laboratory, a nuclear weapons lab.

    The team's goal, as Martineau sees it, is to “start at the bottom,” using fundamental physics principles to overhaul the outdated models. RELAP, for example, simulates the heat and neutron behavior of a reactor but only in one dimension and for idealized vessel shapes. Martineau wants EMPRESS to work in three dimensions, incorporating the real-life geometry of a gas-cooled nuclear power plant and avoiding the fudge factors RELAP uses to approximate hard-to-model areas such as gas-vessel boundaries.
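    To make the 1-D-versus-3-D distinction concrete, the sketch below shows the kind of radically simplified one-dimensional heat model that legacy-era codes are built around: a single temperature profile along an idealized rod, advanced by an explicit finite-difference step. This is purely illustrative and is not RELAP's actual numerics; the grid size, diffusivity, and boundary conditions are all invented for the example.

    ```python
    import numpy as np

    # Illustrative only: a minimal 1-D explicit heat-diffusion model, the
    # sort of one-dimensional, idealized-geometry simplification that
    # EMPRESS aims to replace with full 3-D physics.

    def step_1d_heat(T, alpha, dx, dt):
        """Advance a 1-D temperature profile one explicit Euler step;
        endpoints act as fixed-temperature boundaries."""
        Tn = T.copy()
        Tn[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        return Tn

    # A rod with a hot spot at its center and cool, fixed ends.
    T = np.zeros(51)
    T[25] = 100.0
    alpha, dx = 1e-4, 0.01
    dt = 0.4 * dx**2 / alpha   # below the explicit stability limit 0.5*dx^2/alpha

    for _ in range(500):
        T = step_1d_heat(T, alpha, dx, dt)
    # Heat spreads symmetrically and the peak decays toward the boundaries.
    ```

    A 3-D code like the one Martineau envisions solves the same physics but over the real geometry of the reactor vessel, which is precisely what lets it drop the fudge factors a 1-D model needs at boundaries.
    
    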

    Lab officials are realistic about the scope of the project and its prospects of winning federal funding, especially as it's unclear how the new Democrat-controlled Congress will be disposed toward advanced nuclear energy work. “We don't have infinite amount of time and infinite amount of resources to chase down every single thing to the molecular scale,” says INL administrator Kemal Pasamehmetoglu.

    Still, Martineau's sanguine about the future of EMPRESS. He once dreamt of flying jets in the Marines and now sees his duty as helping expand nuclear power to reduce U.S. dependency on foreign oil. “God, country, and Corps,” says Martineau of his allegiances. “I've always wanted to do this.”


    Catalyzing the Emergence of a Practical Biorefinery

    1. Adrian Cho

    Most are betting on biology to convert carbohydrates into biofuels. Jim Dumesic thinks catalysis is the key

    The United States is counting on biofuels to reduce reliance on imported petroleum and to cut carbon emissions from vehicles. But most cars won't run on corn oil, so scientists must find ways to convert plant matter into practical fuels. Much hope—and hype—centers on harnessing microbes and enzymes to convert biomass to ethanol. But James Dumesic, a chemical engineer at the University of Wisconsin, Madison, is blazing another trail. An expert in catalysis, Dumesic is searching for a philosopher's stone to turn sugar water into fuels and higher value chemicals.


    With student Chris Barrett and others, Dumesic seeks ways to refine carbohydrates.


    Dumesic, 57, is one of a growing number of catalysis experts who are using their skills to convert so-called biorenewables, such as sugars and other carbohydrates, into hydrogen, liquid fuels, and precursors for plastics. Those efforts may be crucial for transforming biorefining from a grand ambition to an economically viable reality.

    Like a petroleum refinery, a biorefinery will have to crank out a variety of fuels and chemicals to maximize profit, says Todd Werpy, a chemist at Pacific Northwest National Laboratory (PNNL) in Richland, Washington, and “catalysis is going to be critical for both fuels and chemicals in the long run.” For his part, Dumesic says he's providing industry with options “so that when carbohydrates become more readily available, we can take them different ways.”

    As raw materials for fuels, petroleum and carbohydrates lie on opposite ends of a chemical spectrum. The hydrocarbons in petroleum are molecules with few functional groups—add-ons such as an oxygen bonded to a hydrogen, a hydroxyl group—that provide ready sites for reactions. So the components of petroleum are relatively stable and require little modification to produce gasoline and other fuels. In contrast, carbohydrates bristle with functional groups that make the compounds less stable and less capable of withstanding the high temperatures used to process petrochemicals. Those functional groups must be stripped away to transform the carbohydrates into energy-packed hydrocarbons.

    To pare down carbohydrates, Dumesic employs heterogeneous catalysis, in which the catalyst resides on the surface of a solid support such as powdered aluminum oxide. Ironically, the techniques of heterogeneous catalysis were honed on petroleum chemistry, says Dumesic. “Over the past 50 years, people have learned a lot about how to make, characterize, and test catalysts, and all of that carries over directly” to carbohydrates.

    In 2002, Dumesic and colleagues used platinum to catalyze the production of hydrogen gas from a solution of carbohydrate in water. Dubbed aqueous phase reforming (APR), the low-temperature process requires less energy than traditional steam reforming, in which water vapor interacts with methane. The advance links hydrogen technologies and biorenewables, says James Jackson, a chemist at Michigan State University in East Lansing. “There is no green source of hydrogen because it comes primarily from steam reforming of natural gas,” he says. “This would be making it out of biomass sources.”
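    The chemistry behind both routes can be summarized by overall reactions and checked by atom counting. The glycerol equation below is the commonly quoted net stoichiometry for aqueous phase reforming (reforming plus water-gas shift), offered as an illustration rather than as a description of any proprietary process:

    ```python
    # Atom-balance check for the overall reforming reactions discussed above.

    ATOMS = {
        "C3H8O3": {"C": 3, "H": 8, "O": 3},   # glycerol
        "H2O":    {"H": 2, "O": 1},
        "CO2":    {"C": 1, "O": 2},
        "H2":     {"H": 2},
        "CH4":    {"C": 1, "H": 4},
    }

    def atom_totals(side):
        """Sum atoms over a list of (coefficient, species) pairs."""
        totals = {}
        for coeff, species in side:
            for atom, n in ATOMS[species].items():
                totals[atom] = totals.get(atom, 0) + coeff * n
        return totals

    # APR of glycerol (net, with shift): C3H8O3 + 3 H2O -> 3 CO2 + 7 H2
    assert atom_totals([(1, "C3H8O3"), (3, "H2O")]) == \
           atom_totals([(3, "CO2"), (7, "H2")])

    # Steam reforming of methane (net, with shift): CH4 + 2 H2O -> CO2 + 4 H2
    assert atom_totals([(1, "CH4"), (2, "H2O")]) == \
           atom_totals([(1, "CO2"), (4, "H2")])

    print("both reactions balance")
    ```

    Note that both routes still start from a carbon-bearing feedstock; Jackson's point is that APR's carbon comes from recently grown biomass rather than fossil natural gas.
    
    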

    To commercialize the APR technology, Dumesic and collaborator Randy Cortright spun off a company called Virent Energy Systems. The 4-year-old start-up has developed a unit that converts glycerol—a cheap byproduct of biodiesel production—into hydrogen and methane and burns the mixture to drive a 10-kilowatt generator. Virent sold the device to the local utility company, and for the past year it has pumped electricity into the grid.

    Virent hasn't turned a profit yet, however. The company is still lining up investors, such as Cargill and Honda, and working from grants, including $2 million from the U.S. departments of Agriculture and Energy. “Don't congratulate me, because we don't have money from product sales coming through the door,” Cortright says.

    When fed larger sugar molecules, APR also tended to produce methane and other hydrocarbons called alkanes. So in 2004, Dumesic and colleagues changed the catalyst to maximize the production of alkanes instead of hydrogen, potentially opening a new route to liquid fuels. The researchers also added a step to link shorter alkanes into longer ones like those in diesel fuel.

    Most recently, Dumesic and his team have found a way to convert the sugar fructose into a compound known as hydroxymethylfurfural. HMF might serve as a feedstock for making fuels, but it could have even greater potential as a “platform chemical” to produce polymers and other higher-value substances, Dumesic says. PNNL's Werpy also sees the allure in chemicals. “We're focusing much more on chemicals than on fuels because of the bigger margins and opportunities for pulling the field along,” he says.

    The catalysis of biorenewables is still in its infancy, which makes the field exciting, Dumesic says. “I imagine that there was a period of time in the early days of petroleum chemistry when new things happened all the time, and I kind of have that feeling now as we move into this other area,” he says. And this much seems sure: No matter how technologies for biofuels and biorefining evolve, catalysis is sure to be an important part of the mix.