News this Week

Science  14 Oct 2011:
Vol. 334, Issue 6053, p. 160
  1. Around the World

    1 - Rockville, Maryland
    Task Force: Prostate Cancer Test Does More Harm Than Good
    2 - Paris
    Probing Dark and Light
    3 - London
    U.K. Pledges £20 Million To Finish Off Guinea Worm
    4 - New Delhi
    Center to Battle Future Food Crises
    5 - Malabo, Equatorial Guinea
    Dictator's New Try for UNESCO Award Rebuffed
    6 - Pyongyang
    North Korean University Hosts International Science Conference
    7 - Cotonou, Benin
    Backup Insecticide Shows Promise Against Malaria

    Rockville, Maryland

    Task Force: Prostate Cancer Test Does More Harm Than Good

    A test that millions of U.S. men take in the hope of nabbing cancer early—the prostate-specific antigen (PSA) test—got a bad (and highly controversial) grade this week. Based on recent data, the U.S. Preventive Services Task Force concluded that more harm than good comes from asking healthy men to get a PSA test—and it shouldn't be routinely offered to patients. While a positive result can signal life-threatening cancer, it can also be a false alarm.

    Positive PSA results can lead not just to bleeding and infections from biopsies, but to “increased risk of incontinence and impotence from prostatectomy and radiation therapy,” and even “a small risk of death,” says Roger Chou, lead author of a review paper in the Annals of Internal Medicine that backed the task force. After The Cancer Letter leaked a draft of the Chou review on 6 October, the Annals posted it online. The review noted that a large percentage of men in one trial had false positive results, but benefits were small or undetectable. The task force invited comment on its draft on 11 October.


    Paris

    Probing Dark and Light


    The European Space Agency (ESA) will launch two missions, one to study the sun and one to probe the mysterious dark energy that is speeding the universe's expansion, in slots slated for 2017 and 2019. The two missions, dubbed Solar Orbiter and Euclid, were selected on 4 October and will now move forward to construction. Three missions were in the running for the two slots; the runner-up, PLATO, an observatory to study planet formation and the conditions for life, will be retained as a candidate for future launches.

    Solar Orbiter (pictured) will go closer to the sun than any previous mission, about 42 million kilometers away at its closest point, and will study how the sun interacts with its environment. Euclid is a space telescope that will map the large-scale structures of the universe with extreme accuracy, trying to elucidate why the universe is expanding at an ever-increasing rate.


    London

    U.K. Pledges £20 Million To Finish Off Guinea Worm

    Guinea gone. Jimmy Carter observes worm treatment.


    Twenty-five years after health workers started a campaign to rid the world of the guinea worm, cases have been reduced by over 99%. But the 1800 or so cases that still occur annually are the hardest to address. At a press conference on 5 October, U.K. international development minister Stephen O'Brien announced that the government will donate about £20 million ($31 million) over 4 years to finish the eradication effort, led by the Carter Center in Atlanta—provided other donors come forward with the remaining £40 million needed for the campaign.

    The guinea worm is spread when people ingest its larvae through contaminated drinking water; the larvae incubate inside the human host and worms up to 1 meter long emerge painfully through the skin a year later. Once abundant across Africa and South Asia, the parasite is now confined to Mali, Ethiopia, Chad, and newly independent South Sudan, which accounts for 98% of cases.

    Behavioral changes such as filtering drinking water and discouraging people with an emerging worm from walking into ponds and lakes have reduced cases from 3.5 million in 1986 to 1797 in 2010. The Carter Center's current goal is worldwide eradication—defined as three consecutive years of no reported cases—by 2015.

    New Delhi

    Center to Battle Future Food Crises

    Hoping to kick-start another green revolution, the Indian government on 5 October announced the creation of a research center to develop wheat and maize varieties that thrive in warmer temperatures and on degraded land. Launched in partnership with the International Maize and Wheat Improvement Center (CIMMYT) in Mexico, the Borlaug Institute for South Asia will employ 300 researchers at three sites in India.

    The center's establishment is a “momentous event in the history of global food security,” claims CIMMYT Director General Thomas A. Lumpkin. South Asia's population is expected to swell from 1.6 billion today to 2.4 billion by 2050. By that time, CIMMYT predicts, almost a quarter of South Asia's wheat yield could be wiped out by global warming.

    The center honors the 1970 Nobel Peace Prize laureate Norman Borlaug, a renowned wheat breeder who helped lead the first green revolution in the 1960s and avert widespread famine in India. The $125 million center will take root near agricultural universities in the states of Punjab, Bihar, and Madhya Pradesh and is expected to open in 2 years.

    Malabo, Equatorial Guinea

    Dictator's New Try for UNESCO Award Rebuffed



    The U.N. Educational, Scientific and Cultural Organization (UNESCO) has again deferred a decision on a proposal to establish an award for the life sciences funded by and named after Teodoro Obiang, the dictator of Equatorial Guinea. The prize was put on ice indefinitely last year after a global outcry by human rights activists and scientists protesting Obiang's poor human rights record. But it was up for discussion again at UNESCO's 58-member Executive Board meeting in Paris on 4 October, and this time Obiang had the support of the African Union, which he currently chairs. In an uncharacteristically outspoken speech, UNESCO's Director-General Irina Bokova called on Obiang to stop pushing for the award, which she said could put UNESCO at “war with the scientific community.” The issue will be decided at the board's next meeting in April.


    Pyongyang

    North Korean University Hosts International Science Conference

    A novel experiment in higher education in North Korea celebrated its second anniversary last week with a fete featuring a Nobel laureate, a former astronaut, and a member of the U.K. House of Lords. The first international conference of Pyongyang University of Science and Technology (PUST) in the North Korean capital was an academic milestone. And as it turns out, PUST is in a unique position: Other universities in Pyongyang are said to have canceled classes for several months to free students to work on city-beautifying projects to commemorate the 100th anniversary of the birth of the country's founder, Kim Il Sung, next April.

    Since opening its doors 2 years ago (Science, 25 September 2009, p. 1610), PUST has enrolled fewer students than expected and the campus is still working to provide unfettered Internet access. Still, to the two dozen scholars who trekked to Pyongyang last week, PUST's existence is grounds for optimism, says attendee Norman Neureiter, a senior policy adviser at AAAS (Science's publisher). “For several Americans and Europeans to be in a room and speak directly to some 250 North Korean students is an unprecedented event,” he says.

    Cotonou, Benin

    Backup Insecticide Shows Promise Against Malaria

    Indoor spraying of a compound called bendiocarb may provide a backup now that malaria mosquitoes are becoming resistant to mainstay insecticides, a large study in Benin suggests. But scientists say more alternatives are urgently needed—if only because of bendiocarb's toxicity.

    Bendiocarb is a candidate replacement for pyrethroids, compounds whose widespread use in indoor spraying and bednets is causing resistance in Anopheles mosquitoes, the vector for malaria, especially in West Africa. After government teams carried out two rounds of spraying in 2008 and 2009 in an area in Benin where 350,000 people live, Anopheles bites fell by more than 90%, and traps didn't yield a single infected mosquito, researchers reported last week in The American Journal of Tropical Medicine and Hygiene.

    Net gain. A treated mosquito net drying in Benin.


    Bendiocarb, which inhibits a brain enzyme called acetylcholinesterase, is among a dozen insecticides approved by the World Health Organization for malaria control, but safety concerns led manufacturers to voluntarily withdraw it from the U.S. market in 1999. Beninese teams did not spray in low-lying areas prone to flooding, where they feared toxic run-off into local waters.

  2. Random Sample

    Glimpse of a Frozen World


    British explorer Robert Falcon Scott's arduous, ill-fated trek across Antarctica to the South Pole is a famous saga. Less well known is the story of the last photographs he took, which have had a lengthy journey of their own. Last week, after nearly a century of being exhibited, fought over, auctioned off, and quietly tucked away in a photographic archive, many of Scott's final photographs have been restored to the public eye with the publication of The Lost Photographs of Captain Scott, a collaboration between the photographs' current owner, antiquarian Richard Kossow, and polar historian David Wilson, the great-nephew of one of Scott's team members, Edward Wilson.

    Scott, Wilson, and three other team members reached the South Pole in January 1912 only to find that Norwegian explorer Roald Amundsen had arrived there first. Dejected, they began the journey back across the Ross Ice Shelf. They didn't make it: By the end of March 1912, all five men were dead. Their bodies and records—including Scott's final rolls of film—were found in November of that year.

    By the Numbers

    21 — Researchers awarded $50,000 grants in the first round of the National Science Foundation's (NSF's) new Innovation Corps Program. NSF hopes to help 100 investigators a year assess the commercial potential of their research, starting with an entrepreneur boot camp this week at Stanford University.

    17.4% — Early estimate of the “success rate” for reviewed research grant proposals at the National Institutes of Health for the year that ended 30 September. Last year's rate was 21%.

    Survival of the Palimpsest


    The Greek genius Archimedes was a prolific inventor and writer: He designed screw pumps and heat rays, approximated pi using a method that resembles modern integral calculus, and wrote mathematical treatises on spheres, cylinders, and levers. But a large body of his work, known as the Archimedes Palimpsest, until recently remained tantalizingly just out of reach.
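    His bounds on pi came from trapping a circle between inscribed and circumscribed regular polygons and repeatedly doubling their sides, stopping at 96-gons. A minimal sketch of that doubling recurrence, written here in its modern harmonic- and geometric-mean form with variable names of our choosing:

    ```python
    import math

    # Bound the circumference of a circle of diameter 1 (i.e., pi) between
    # the perimeters of circumscribed and inscribed regular polygons,
    # doubling the number of sides as Archimedes did.
    a = 2 * math.sqrt(3)  # circumscribed hexagon perimeter (upper bound)
    b = 3.0               # inscribed hexagon perimeter (lower bound)
    n = 6
    while n < 96:                  # Archimedes stopped at 96-gons
        a = 2 * a * b / (a + b)    # harmonic mean: the 2n-gon upper bound
        b = math.sqrt(a * b)       # geometric mean: the 2n-gon lower bound
        n *= 2
    print(f"{b:.4f} < pi < {a:.4f}")   # 3.1410 < pi < 3.1427
    ```

    Four doublings reproduce bounds essentially identical to Archimedes' famous fractions 223/71 and 22/7.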


    In 1906, classical scholar Johan Heiberg discovered that beneath the prayers of a 1229 Greek prayer book lay a much older text by Archimedes. Heiberg transcribed what he could, but much of the writing was indecipherable.

    Enter modern technology. In 1998, a private collector bought the manuscript and lent it to the Walters Art Museum in Baltimore, Maryland, where scholars began to unravel its secrets. Researchers led by William Noel, the curator of manuscripts at the museum, used multispectral analysis and other techniques to delve through layers of prayer-book writing and even 20th-century forgeries to reveal the Archimedes texts, imaging the pages through 14 different wavelengths of light ranging from infrared to ultraviolet. They also took the text to the Stanford Synchrotron Radiation Lightsource for x-ray fluorescence, which helped reveal much of the iron-laden ink of the “under” text.
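    To give a flavor of the multispectral idea, here is a generic sketch, not the Walters team's actual pipeline: registered images of one page, taken at many wavelengths, are stacked and decomposed so that inks with different spectral signatures separate into different components. The 14-band stack below is simulated stand-in data.

    ```python
    import numpy as np

    # Stack of registered images of one page, one per wavelength band.
    # Simulated here; a real workflow would load 14 registered photographs.
    rng = np.random.default_rng(0)
    n_bands, h, w = 14, 64, 64
    bands = rng.random((n_bands, h, w))

    X = bands.reshape(n_bands, -1)           # treat each pixel as a 14-vector
    X = X - X.mean(axis=1, keepdims=True)    # center each band
    cov = X @ X.T / X.shape[1]               # 14x14 between-band covariance
    vals, vecs = np.linalg.eigh(cov)         # principal components of the bands
    components = (vecs.T @ X).reshape(n_bands, h, w)
    # Undertext that is faint in every single band can stand out in one of
    # these components because its ink responds differently across wavelengths.
    ```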

    The prayer book had been written over seven treatises by Archimedes, including two found only in the Palimpsest: The Stomachion, the earliest known study in a branch of mathematics called combinatorics; and The Method of Mechanical Theorems, in which Archimedes “was essentially calculating with infinity,” Noel says.

    The Walters Art Museum's exhibit Lost and Found: The Secrets of Archimedes will display the results of this decade-long project from 16 October through 1 January 2012.

  3. Peer Review

    Beyond the Data

    1. Jeffrey Mervis

    The latest attempt to clarify how NSF assesses grant proposals for possible impacts beyond the expected scientific results has not ended a long-running debate.

    The National Science Foundation (NSF) uses two criteria to judge the 55,000 grant proposals it receives each year. One is straightforward enough: intellectual merit. But the other, known as “broader impacts,” is so confusing that a cottage industry has sprung up to help scientists figure it out.

    For only $79—marked down from $197, according to one recent e-mail—a company in Florida offers a CD that teaches applicants how “to successfully identify, distill, and communicate your project's broader impacts to NSF reviewers, improving your chances of funding.” For those scientists on a tight budget, there are plenty of free tips in the open literature.

    The phrase is intended to help NSF determine whether the cutting-edge science being proposed is also addressing an important societal issue. It was adopted in 1997 to rebut criticism that the basic-research agency cared more about the interests of the academic community it serves than the needs of the taxpayers who ultimately foot the bill for its programs.

    The National Science Board, NSF's presidentially appointed oversight body, would like to put that cottage industry out of business by explaining once and for all what NSF means by broader impacts. In June, a board task force took a stab at drafting a new set of principles for reviewers and applicants to follow. Instead of clearing the air, however, the draft guidelines generated a fresh wave of protest. (See page 170 for excerpts from a few of the 265 e-mailed comments. NSF also gathered feedback in meetings with and surveys of stakeholders.)

    The task force has taken that criticism to heart and is busy revising the guidelines. “It's our attempt to be frighteningly clear,” task force member Alan Leshner (CEO of AAAS, which publishes Science) explained at a task force meeting last month. “But if we're not, we need to know that.”

    An elusive definition

    NSF has been anything but clear over the years about the meaning of broader impacts. Officials steadfastly refused to provide a precise definition, for what they believed to be a good reason: Such a statement might stifle innovative thinking about the myriad ways research can benefit society.

    “So many people have interpreted the broader-impacts criterion as requiring something involving K-12 education,” says Cora Marrett, deputy NSF director and former head of NSF's education directorate. “But that's not our position at all.” Pinning down the scope of broader impacts could actually be counterproductive, she adds. “Efforts to find activities that are universal may be totally inappropriate for a particular field, or project, or individual PI [principal investigator],” Marrett says, “because people may not have the special expertise needed.”

    Working to improve STEM (science, technology, engineering, and mathematics) education at all levels is certainly one way to demonstrate a commitment to broader impacts. But so are activities aimed at attracting and retaining more women and minorities and, more generally, training the next generation of scientists. Strengthening the nation's scientific infrastructure through increased collaborations and the sharing of equipment and facilities would qualify, as would many types of public outreach. Over the years, NSF has also looked favorably upon researchers who hope to commercialize their discoveries with the goal of boosting the economy, improving national security, protecting the environment, and enhancing the general well-being of society.

    Since the second criterion was adopted, however, NSF officials have complained that significant portions of the community are either ignoring or not taking it seriously. In 2002, a frustrated NSF Director Rita Colwell tried to crack the whip. “NSF will return without review proposals that do not separately address both merit review criteria within the Project Summary,” she declared in a bulletin to the community.

    Confusion persisted, however. Merit review at NSF is a complex process, involving panels made up of thousands of volunteer reviewers guided by hundreds of NSF program officers. And scientists say that no two panels follow exactly the same definition of broader impacts.

    So in 2007, NSF tried a new tack. It described five “representative activities” that would qualify as having a broader impact. Four of them were familiar: promoting teaching, training, and learning at all levels; broadening participation of groups underrepresented in science, notably women, non-Asian minorities, persons with disabilities, and those at institutions outside the scientific mainstream; enhancing the research and education infrastructure through increased collaborations; and disseminating knowledge of science and technology to the general public. The fifth activity, listed under the heading Benefits to Society, serves as a catch-all category for activities such as advising government agencies, working with industry, and putting data into a format that nonscientists can understand. Last year, Congress waded into the fray, asking NSF to explain what it means by broader impacts and how it plans to ensure compliance.

    No-go on national goals

    The science board's June draft tried to answer that question. It began by laying out four principles underpinning merit review, the most notable being a statement that “NSF projects should help to advance a broad set of important national goals.” That assertion is followed by a list of nine examples of activities that would qualify as demonstrating broader impacts. The list began with “increased economic competitiveness” and included “increased partnerships between academia and industry” and “increased national security.” Another principle declares that “broader impacts may be achieved through the research itself, through activities that are directly related to specific research projects, or through activities supported by the project but ancillary to the research.”

    The list of national goals drew sharp and bimodal criticism. One segment of the community complained vociferously that the new language would undermine efforts to promote NSF's historical commitment to diversity, improving instruction, and raising public literacy. They feared that those goals, previously the centerpiece of the broader-impacts criterion, would become subservient to the hunt for an economic payoff. Another group, less vocal but equally critical, told the task force that it had lost sight of the importance of basic research. Some even worried that asking applicants to describe how they planned to satisfy the broader-impacts criterion might corrupt the merit-review process itself.

    Recognizing that its first draft had fallen short, the task force scrapped it and began working on a new version. Although the task force has not made public a copy of its latest draft, John Bruer, president of the James S. McDonnell Foundation and chair of the task force, agreed to talk with Science about its key provisions.

    A major change is a reconfiguration of the section that lays out the overriding principles for merit review. Instead of four principles, with the list of national goals featured prominently, there are three governing ideas, none as controversial as the goals list. The first declares that all NSF projects “should be of the highest quality.” The second says that, in the aggregate, projects should contribute more broadly to advancing societal goals. The third states that any assessment of a project's broader impacts should be “scaled” to the size of the activity.

    “The original list of national goals … was actually intended to describe a set of outcomes,” Bruer says. “Listing them as a guiding principle created some concerns within the community that we have addressed in the new version.”

    The emphasis on top-quality research in the first principle, Bruer says, should assuage those who fear that the broader-impacts criterion “would dilute NSF's commitment to excellence in research.” The second principle, he adds, explains “how broader impacts might be achieved. We want to make clear that it's not seen simply as an additional activity on a grant but rather that it is an essential component of the research activities.”

    Bruer said it may turn out that broader impacts are best measured “in the aggregate.” Some types of NSF-funded activities and programs, such as large centers or collaborations with industry, “are explicit about how broader impacts will be evaluated.” But that's not the case for standard grants to individuals. “Principal investigators have tried to respond in good faith and devote appropriate resources to these activities,” he says. “But it may be more appropriate to measure them at the portfolio level of an NSF directorate, or perhaps across an entire university department or institution.”

    Bruer didn't rule out soliciting another round of public comments. But he hopes that the full board will embrace the task force's report at its next meeting, in December, and he says that posting another draft “might push back that time frame.”

  4. The Community Weighs In on Broader Impacts

    “The new criteria shift from a strong, clear expression of a commitment to preparing and engaging a diverse domestic scientific workforce to the downgrading of diversity to be one of nine possible methods of demonstrating broad impact. … I have no doubt that this change will dilute the impact of NSF on access and representation in science and engineering.”


    “The broader impacts should carry weight only as an extra benefit, an additional positive feature of a proposal. A person should be allowed to say that there are no broader impacts envisioned and still get funding if the science is visionary and outstanding.”


    “Making the stated [and laudable] national goals an explicit funding criterion amounts to apologizing for scientific research rather than leading the fight in support of it. … Proposals for funding of basic science will only address these criteria if the proposer is insincere but convincing and the responsible program director is insincere in reading that part of the proposal.”


    “We recommend that the third national goal be removed from the list and included as a separate principle [to wit]: Broadening participation is a key to achieving the national goals, and investigators must include mechanisms in their broader impacts.”


    “The criteria as modified are troubling. Those supported by the NSF must be held accountable to the society that supports them, not just to those that look like the majority of scientists [white and male] awarded grants from mostly research-intensive universities.”


    “Forcing PIs to devote time, energy, and resources toward outreach activities, one NSF grant at a time, in an uncoordinated fashion with other efforts, is highly wasteful of resources. Many PIs are not well-qualified, well-suited, or highly motivated for K-12 outreach, public outreach, et cetera.”


    “It is very difficult to do first-rate science, and very time-consuming. It is therefore unreasonable that NSF demand its PIs spend time and energy on activities peripheral to the research they are proposing as a condition for receiving support.”


    “I have always valued the older construal of the broader impacts criterion, i.e., the value of outreach, the inclusion of different educational institutions, and the purposeful attention to supporting and bringing in women and members of underrepresented groups. I believe that important focus is hidden, and even lost, in the proposed guidelines.”


    “Much of the research in my field [computer sciences] has significant impact on other fields, on national security, and on economic competitiveness. Yet my field has a terrible record for the inclusion of women and minorities, and a terrible record for K-12 impact. To include [those goals] as broader impacts gives researchers in my field an unconscionably cheap way to satisfy this criterion.”


    “It is not clear to me how a merit-review panel could decide how one PI's goals and aspirations for broader impacts are more likely to succeed than another's. … This uncertainty contrasts sharply with the clarity with which intellectual merit can be evaluated.”


  5. Predicting Climate Change

    Vital Details of Global Warming Are Eluding Forecasters

    1. Richard A. Kerr

    Decision-makers need to know how to prepare for inevitable climate change, but climate researchers are still struggling to sharpen their fuzzy picture of what the future holds.

    Seattle Public Utilities officials had a question for meteorologist Clifford Mass. They were planning to install a quarter-billion dollars' worth of storm-drain pipes that would serve the city for up to 75 years. “Their question was, what diameter should the pipe be? How will the intensity of extreme precipitation change?” Mass says. If global warming means that the past century's rain records are no guide to how heavy future rains will be, he was asked, what could climate modeling say about adapting to future climate change? “I told them I couldn't give them an answer,” says the University of Washington (UW), Seattle, researcher.

    Climate researchers are quite comfortable with their projections for the world under a strengthening greenhouse, at least on the broadest scales. Relying heavily on climate modeling, they find that on average the globe will continue warming, more at high northern latitudes than elsewhere. Precipitation will tend to increase at high latitudes and decrease at low latitudes.

    But ask researchers what's in store for the Seattle area, the Pacific Northwest, or even the western half of the United States, and they'll often demur. As Mass notes, “there's tremendous uncertainty here,” and he's not just talking about the Pacific Northwest. Switching from global models to models focusing on a single region creates a more detailed forecast, but it also “piles uncertainty on top of uncertainty,” says meteorologist David Battisti of UW Seattle.

    First of all, there are the uncertainties inherent in the regional model itself. Then there are the global model's uncertainties at the regional scale, which it feeds into the regional model. As the saying goes, if the global model gives you garbage, regional modeling will only give you more detailed garbage. And still more uncertainties are created as data are transferred from the global to the regional model.
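    A back-of-the-envelope way to see the stacking, with invented numbers: if the three error sources are roughly independent, their variances add, so the downscaled product ends up more uncertain than the global input, not less.

    ```python
    import math

    # Hypothetical error standard deviations for the three sources named above.
    sigma_global   = 1.0   # global model's error at the regional scale
    sigma_regional = 0.6   # error added by the regional model itself
    sigma_transfer = 0.4   # error added handing data across the model boundary

    # Independent errors combine in quadrature.
    sigma_total = math.sqrt(sigma_global**2 + sigma_regional**2 + sigma_transfer**2)
    print(round(sigma_total, 2))   # 1.23: more detail, but also more uncertainty
    ```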

    Although uncertainties abound, “uncertainty tends to be downplayed in a lot of [regional] modeling for adaptation,” says global modeler Christopher Bretherton of UW Seattle. But help is on the way. Regional modelers are well into their first extensive comparison of global-regional model combinations to sort out the uncertainties, although that won't help Seattle's storm-drain builders.

    Most humble origins

    Policymakers have long asked for regional forecasts to help them adapt to climate change, some of which is now unavoidable. Even immediate, rather drastic action to curb emissions of greenhouse gases would probably not limit global warming to 2°C, generally considered the threshold above which “dangerous” effects set in. And nothing at all can be done to head off the warming expected in the next several decades; that change is already locked in.

    Sharp but true? Feeding a global climate model's prediction for midcentury (top) into a regional model gives more details (bottom), but modelers aren't sure how accurate the details are.


    So scientists have been doing what they can for decision-makers. Early on, it wasn't much. A U.S. government assessment released in 2000, Climate Change Impacts on the United States, relied on the most rudimentary regional forecasting technique (Science, 23 June 2000, p. 2113). Expert committee members divided the country into eight regions and then considered what two of their best global climate models had to say about each region over the next century. The two models were somewhat consistent in the far southwest, where the report's authors found it was likely that warmer and drier conditions would eliminate alpine ecosystems and shorten the ski season.

    But elsewhere, there was far less consistency. Over the eastern two-thirds of the contiguous 48 states, for example, the two models couldn't agree on how much moisture soils would hold in the summer. Kansas corn would either suffer severe droughts more frequently, as one model had it, or enjoy even more moisture than it currently does, as the other indicated. But at least the uncertainties were plain for all to see.

    The uncertainties of regional projections nearly faded from view in the next U.S. effort, Global Climate Change Impacts in the United States. The 2009 study drew on not two but 15 global models melded into single projections. In a technique called statistical downscaling, its authors assumed that local changes would be proportional to changes on the larger scales. And they adjusted regional projections of future climate according to how well model simulations of past climate matched actual climate.
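    In its simplest form, sometimes called the delta method, that proportionality assumption reduces to adding the model's large-scale change onto local observations, so the model's historical bias drops out. A toy sketch with invented numbers (the assessment's actual procedure was more elaborate):

    ```python
    import numpy as np

    observed_past = np.array([13.8, 14.0, 13.7])   # local station temps, deg C
    model_past    = np.array([14.2, 14.5, 14.1])   # model hindcast, same gridcell
    model_future  = np.array([16.0, 16.4, 15.9])   # model projection, same gridcell

    delta = model_future - model_past        # large-scale modeled change
    local_future = observed_past + delta     # local projection; bias cancels
    print(local_future)                      # [15.6 15.9 15.5]
    ```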

    Statistical downscaling yielded a broad warming across the lower 48 states with less warming across the southeast and up the West Coast. Precipitation was mostly down, especially in the southwest. But discussion of uncertainties in the modeling fell largely to a footnote (number 110), in which the authors cite a half-dozen papers to support their assertion that statistical downscaling techniques are “well-documented” and thoroughly corroborated.

    The other sort of downscaling, known as dynamical downscaling or regional modeling, has yet to be fully incorporated into a U.S. national assessment. But an example of state-of-the-art regional modeling appeared 30 June in Environmental Research Letters. To investigate what will happen in the U.S. wine industry, regional modeler Noah Diffenbaugh of Purdue University in West Lafayette, Indiana, and his colleagues embedded a detailed model that spanned the lower 48 states in a climate model that spanned the globe. The global model's relatively fuzzy simulation of evolving climate from 1950 to 2039—calculated at points about 150 kilometers apart—then fed into the embedded regional model, which calculated a sharper picture of climate change at points only 25 kilometers apart.
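    At the heart of that handoff is an interpolation step: the coarse global fields are regridded onto the finer regional mesh, whose values then force the regional model at its boundaries. A generic sketch (grids and field values are hypothetical, not from the Diffenbaugh study):

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # A coarse "global" field on a ~150-km grid, filled here with random values.
    coarse_y = np.arange(0, 1500, 150.0)               # km
    coarse_x = np.arange(0, 1500, 150.0)
    coarse_field = np.random.default_rng(0).random((10, 10))

    # Regrid onto a ~25-km "regional" mesh covering the same area.
    interp = RegularGridInterpolator((coarse_y, coarse_x), coarse_field)
    fine_y, fine_x = np.meshgrid(np.arange(0, 1350, 25.0),
                                 np.arange(0, 1350, 25.0), indexing="ij")
    points = np.stack([fine_y.ravel(), fine_x.ravel()], axis=-1)
    fine_field = interp(points).reshape(fine_y.shape)  # 54x54 regional input
    ```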

    Closely analyzing the regional model's temperature projections on the West Coast, the group found that the projected warming would decrease the area suitable for production of premium wine grapes by 30% to 50% in parts of central and northern California. The loss in Washington state's Columbia Valley would be more than 30%. But adaptation to the warming, such as the introduction of heat-tolerant varieties of grapes, could sharply reduce the losses in California and turn the Washington loss into a 150% gain.

    Not so fast

    A rapidly growing community of regional modelers is turning out increasingly detailed projections of future climate, but many researchers, mostly outside the downscaling community, have serious reservations. “Many regional modelers don't do an adequate job of quantifying issues of uncertainty,” says Bretherton, who is chairing a National Academy of Sciences study committee on a national strategy for advancing climate modeling. “We're not confident predicting the very things people are most interested in being predicted,” such as changes in precipitation.

    Regional models produce strikingly detailed maps of changed climate, but they might be far off base. “The problem is that precision is often mistaken for accuracy,” Bretherton says. Battisti just doesn't see the point of downscaling. “I would never use one of these products,” he says.

    The problems start with the global models, as critics see it. Regional models must fill in the detail in the fuzzy picture of climate provided by global models, notes atmospheric scientist Edward Sarachik, professor emeritus at UW Seattle. But if the fuzzy picture of the region is wrong, the details will be wrong as well. And global models aren't very good at painting regional pictures, he says. A glaring example, according to Sarachik, is the way global models place the cooler waters of the tropical Pacific farther west than they are in reality. Such ocean temperature differences drive weather and climate shifts in specific regions halfway around the world, but with the cold water in the wrong place, the global models drive climate change in the wrong regions.

    Gregory Tripoli's complaint about the global models is that they can't create the medium-size weather systems that they should be sending into any embedded regional model. Tripoli, a meteorologist and modeler at the University of Wisconsin, Madison, cites the case of summertime weather disturbances that churn down off the Rocky Mountains and account for 80% of the Midwest's summer rainfall. If a regional model forecasting for Wisconsin doesn't extend to the Rockies, Wisconsin won't get the major weather events that add up to climate. And some atmospheric disturbances travel from as far away as Thailand to wreak havoc in the Midwest, he says, so they could never be included in the regional model.

    A tougher nut. Predicting the details of precipitation using a regional model (bottom) fed by a global model (top) is even more uncertain than projecting regional temperature change.


    Even the things the global models get right have a hard time getting into regional models, critics say. “There are a lot of problems matching regional and global models,” Tripoli says. In one problem area, global and regional models usually have different ways of accounting for atmospheric processes such as individual cloud development that neither model can simulate directly, creating further clashes. Even the different philosophies involved in building global models and regional models can lead to mismatches that create phantom atmospheric circulations, Tripoli says. “It's not straightforward you're going to get anything realistic,” he says.

    Redeeming regional modeling

    “You could say all the global and regional models are wrong; some people do say that,” notes regional modeler Filippo Giorgi of the Abdus Salam International Centre for Theoretical Physics in Trieste, Italy. “My personal opinion is we do know something now. A few reports ago, it was really very, very difficult to say anything about regional climate change.”

    But Giorgi says that in recent years he has been seeing increasingly consistent regional projections coming from combinations of many different models and from successive generations of models. “This means the projections are more and more reliable,” he says. “I would be confident saying the Mediterranean area will see a general decrease in precipitation in the next decades. I've seen this in several generations of models, and we understand the processes underlying this phenomenon. This is fairly reliable information, qualitatively. Saying whether the decrease will be 10% or 50% is a different issue.”

    The skill of regional climate forecasting also varies from region to region and with what is being forecast. “Temperature is much, much easier” than precipitation, Giorgi notes. Precipitation depends on processes like atmospheric convection that operate on scales too small for any model to render in detail. Trouble simulating convection also means that higher-latitude climate is easier to project than that of the tropics, where convection dominates.

    Regional modeling does have a clear advantage in areas with complex terrain such as mountainous regions, notes UW's Mass, who does regional forecasting of both weather and climate. In the Pacific Northwest, the mountains running parallel to the coast direct onshore winds upward, predictably wringing rain and snow from the air without much difficult-to-simulate convection.

    The downscaling of climate projections should be getting a boost as the Coordinated Regional Climate Downscaling Experiment (CORDEX) gets up to speed. Begun in 2009, CORDEX “is really the first time we'll get a handle on all these uncertainties,” Giorgi says. Various groups will take on each of the world's continent-size regions. Multiple global models will be matched with multiple regional models and run multiple times to tease out the uncertainties in each. “It's a landmark for the regional climate modeling community,” Giorgi says.
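    The bookkeeping behind such an experiment is a factorial matrix: every global model is paired with every regional model, and each pairing is run several times, so spread across rows, columns, and repeats can be compared to attribute the uncertainty. A toy sketch with invented model names:

    ```python
    from itertools import product

    global_models   = ["GCM-A", "GCM-B", "GCM-C"]   # hypothetical global models
    regional_models = ["RCM-1", "RCM-2"]            # hypothetical regional models
    n_runs = 3                                      # repeats per pairing

    experiment_matrix = [
        (gcm, rcm, run)
        for gcm, rcm in product(global_models, regional_models)
        for run in range(n_runs)
    ]
    print(len(experiment_matrix), "runs")  # 18 runs; variance along each axis
                                           # points to a source of uncertainty
    ```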
