News this Week

Science  06 Feb 2009:
Vol. 323, Issue 5915, pp. 696
  1. U.S. BUDGET

    Agencies Sweat the Details of Spending Billions More on Science

    1. Jeffrey Mervis*
    1. With reporting by Eli Kintisch and Eliot Marshall.

    It is a problem U.S. science agencies are delighted to have: how to spend billions of dollars on basic research over the next 18 months in a way that does not cause headaches down the road.

    Although tax breaks and aid to beleaguered states are getting most of the media attention, President Barack Obama's plan to stimulate the U.S. economy includes an unprecedented infusion of money into science. It reflects bipartisan support for the idea that research not only puts people to work but also increases the chances of creating new industries that will sustain economic growth. Most of the money would be spent on either bread-and-butter research grants to individuals and small groups or larger awards to rejuvenate labs, buy equipment, and build or renovate facilities at universities and federal laboratories.

    Agency officials say there are more than enough worthy ideas for putting the money to immediate use. But the trick will be to avoid a crash when the stimulus money dries up. For infrastructure projects, that means not letting capacity outstrip the government's ability to support the researchers who will occupy the space. The challenge is even greater for grants, requiring a delicate balancing of veteran and first-time grantees, consideration of the impact on underrepresented groups, and avoiding a bolus of new grants that expire at the same time and trigger a flood of application renewals. Any mistakes in managing either pot, say scientists from both the previous and current Administrations, could destabilize the overall scientific enterprise.

    “I do think that money of the magnitude being proposed can be spent on useful things,” says John Marburger, who headed the White House Office of Science and Technology Policy for the entire tenure of President George W. Bush. “But it's short-term money. The great danger is creating facilities that no one can afford to operate.”

    Harold Varmus, who advised the Obama campaign and who was named co-chair of the President's Council of Advisers on Science and Technology in December, makes the same point about research grants. “Not everybody understands that grants create an obligation,” says Varmus, a former U.S. National Institutes of Health (NIH) director. “So the base is crucial. Obama talked repeatedly during the campaign about gradual and consistent funding for science. Maybe part of this [stimulus] should go into the base.”

    A leading light.

    The stimulus bill could speed up construction of Brookhaven's $900 million National Synchrotron Light Source II.


    Although the package is still being debated, the eventual figures for science promise to be huge. The $819 billion package approved last week by the House of Representatives contains some $20 billion for research-related activities, including a $6 billion dollop for the Department of Education to distribute to states for all manner of repairs and renovations at colleges and universities. The plan now before the Senate offers a slightly different mix—more for research and less for infrastructure at NIH, less for both pots at the U.S. National Science Foundation (NSF), and less for Department of Energy (DOE) facilities. Obama, who has promised that three-fourths of the new, one-time spending will occur by the fall of 2010, has urged legislators to work out their differences in the next few weeks so that the money can start flowing.

    NSF, NIH, and DOE officials are taking different tacks in preparing for the windfall. NSF plans to do more of the same within both research and infrastructure. NIH hopes to add new twists to its research portfolio. And DOE is expected to accelerate construction, letting the future take care of itself. Here are highlights from each agency.

    NSF. The House bill would give NSF $2 billion for new research grants and $900 million for three different types of infrastructure programs, including reviving and expanding one for university projects. The Senate is offering less in both categories—$1 billion and $350 million.

    Rather than asking for new proposals, Director Arden L. Bement says that program managers will dip into the pool of existing applications submitted since 1 October 2008, the start of the 2009 fiscal year, and fund those rated most highly by reviewers. “We have enough in our backlog to spend that [$2 billion] right now,” he says. The expansion will also allow NSF “to take care of young investigators,” he adds.

    Money for a multiyear grant (typically 3 years) would be committed up front, at the time of the award, rather than parceled out year by year. That will put pressure on investigators “to spend it quickly rather than putting it into the bank,” says Bement. NSF estimates that the House package translates into 3000 additional awards that would employ 12,750 scientists, postdocs, and graduate and undergraduate students. Then there's the impact on the economy of what's bought with the grant, everything from reagents to high-end instrumentation.

    NSF also plans to stay the course on infrastructure spending. If the $200 million proposed by the House makes it through the legislative gauntlet, Bement says, he will dust off and update the last solicitation from a smaller academic modernization program, identical in design, that lapsed in 1997.

    NIH. Acting NIH Director Raynard Kington says the agency has drawn up three ways to spend the grants' portion of the windfall, which is $1.5 billion in the House version and $2.7 billion in the Senate. (Senator Arlen Specter (R-PA) has proposed bumping the Senate number to $9.2 billion.) The first would be a new call for proposals involving “topics in which there have been scientific or technical challenges” that might yield quick results with a blast of cash. Researchers would apply for accelerated review for awards of up to $500,000 a year for 2 years. These challenge grants would fit well with NIH's emphasis on new investigators and high-risk science, Kington says.

    NIH would spend some of the stimulus money on standard investigator grants (R01s) that scored well in peer review last year but didn't get funded. The catch: The money would last only 2 years instead of the usual 4 years. Finally, Kington says, NIH might also add to the size of the awards made to investigators whose requested budgets were cut or who can identify “related research areas that might be meritorious.” Again, however, the supplement would probably be for only a couple of years.

    All of these strategies are aimed at avoiding what Kington calls the “hard lesson” of the 5-year doubling that left grantees with what Marburger describes as “a gargantuan appetite” that NIH hasn't been able to fill. “There will be great pressure [to use the stimulus money] to fund more R01s just as we currently fund them,” Kington says. “But we have to be careful that we do not do anything imprudent, and that includes setting ourselves up to have big commitments 2 years out” that can't be met.

    Growth industry?

    Congress may revive and expand an NSF program that funded research facilities like these greenhouses at the University of California, Davis.


    The House bill would also give NIH $1.5 billion to spend on repairs and improvements of extramural research facilities and for “shared instrumentation.” (The Senate would provide $300 million.) NIH is all set to handle awards like these and can do it quickly, Kington says. Both houses have proposed $500 million for renovations to the NIH campus.

    DOE. The big gap between what the House and Senate have proposed for DOE's Office of Science—$2 billion versus $430 million—is forcing DOE officials to hold their fire. Acting office head Patricia Dehmer would say only that her staff has done “detailed homework” on how to implement the House's orders to “support improvements to DOE laboratories and scientific facilities.”

    Her former boss, Raymond Orbach, says he assumes legislators “will split the difference.” The American Physical Society has drawn up a list of $1.6 billion in infrastructure or equipment improvements that could be jump-started within 4 months, and Orbach says he's not afraid of accruing long-term mortgages. “The actual running cost of a facility is only 10% of its construction,” says Orbach. “These investments will create platforms that don't exist anywhere else in the world. That's an opportunity you can't afford to pass up.”

    The wait is agonizing for individual project directors. Steven Dierker, who manages the just-approved National Synchrotron Light Source II at Brookhaven National Laboratory in Upton, New York, says adding $150 million to DOE's request for $103 million this year would put hundreds more people to work, sooner. It would also save money, he adds, by shortening the 7-year construction schedule for the $912 million facility.


    New Ph.D.s to Teach Harvard Undergrads

    1. Susan Gaidos*
    1. Susan Gaidos writes for Science Careers.

    Harvard University plans to hire up to 20 recent Ph.D.s to teach undergraduate courses in a move that officials say will improve instruction and help students facing a tough job market.

    The new College Fellows Program was announced last week in an e-mail to faculty and will go into effect this fall. Fellows will be paid $48,000 with full benefits to work in some 20 academic departments throughout the Faculty of Arts and Sciences. The program is open to anyone who has received a Ph.D. since 2005. The awards are for 1 year, with a second year possible, and the money will come from the university's instructional budget.

    “A large part of the goal was to support graduates and mentor excellent teaching among recent Ph.D.s, and [another] was to meet essential teaching needs in Harvard College,” says Allan Brandt, dean of Harvard's Graduate School of Arts and Sciences. “We wanted to develop strong teaching for our Harvard College students and to make sure our teaching needs were met.”

    Each fellow will be assigned a faculty mentor, and teaching-focused seminars are planned. The program, Brandt says, “is designed for people who have a deep interest in university teaching.” Fellows will be expected to carry 70% of the teaching load of a faculty member, leaving them some time to pursue their research. “At this career stage,” says Brandt, “it's very important that they have some protected time to continue their research endeavors.”

    James Hanken, Alexander Agassiz professor of zoology, says the program offers new Ph.D. recipients “a sort of a temporary hold” in a tough job market. “If it's a teaching postdoc that doesn't consume all of your waking hours and leaves you time to do some research, I think it can be a good deal,” he says.


    California Researchers Chilled by Sudden Freeze on Bond Funds

    1. Greg Miller
    On hold.

    A sea-floor-mapping study (left) and coastal prairie restoration project are among the victims of California's bond fund freeze.


    For many California researchers, the bad news came just before the holidays: As a result of the state's deteriorating financial situation, the Department of Finance was freezing billions of dollars in funding tied to the sale of state bonds. These funds had been slated for a wide variety of projects statewide, everything from supporting schools and public housing to building libraries and fixing freeways. But the freeze also pulled the plug—at least temporarily—on thousands of environmental research and conservation projects. In the ensuing weeks, scientists, graduate students, and nonprofit organizations have faced a mad scramble to replace what most had assumed was a secure source of funding.

    When the decision was announced on 19 December, Rikk Kvitek, a marine ecologist at California State University, Monterey Bay, got the call right before heading to his lab holiday party. “I had to walk in and say ‘Merry Christmas, and by the way, there's no work tomorrow because virtually all of our funding is tied up in this.’” Kvitek is a principal investigator on a $20 million state-funded sea-floor-mapping project that was just hitting full stride. “For the last 8 years, I've been really pushing the state to map all the state waters because we knew that it would really transform coastal marine research and how resource management was done,” he says. The high-resolution digital maps are being used, among other things, to help select sites for the state's network of marine protected areas. Because of its scale, the project has become the primary focus of Kvitek's work, and he says for the first time in 20 years he had all of his funding eggs in one basket.

    Like others, Kvitek was instructed to stop all work immediately on bond-funded projects. The next day, a 180-foot ship surveying the northern California coast pulled into port and has been out of commission ever since. Kvitek has managed to find temporary funding from a foundation to keep his 15 students and staff members working on data analysis, but he says that money will last only 2 or 3 months.

    California's governor and legislature are at loggerheads about how to narrow a projected $40 billion budget deficit, and the state's credit rating has tanked, scuttling its ability to sell bonds to raise capital. In more typical times, the state loans money to approved projects and subsequently recoups the money by selling bonds. But now that the bonds aren't selling, the state decided that the loans had to stop. Environmental projects are feeling the pinch in part because of the public support they've received in recent years as California voters have passed propositions authorizing the state to sell bonds to fund projects to study and manage the state's natural resources. (The state's bond-funded stem cell institute has not yet felt the pain as acutely but is girding itself; see sidebar.)

    “This came totally out of the blue,” says Susan Williams, the director of the Bodega Marine Laboratory in Bodega Bay, administered by the University of California (UC), Davis. Many of the frozen projects were intended to help support California's coastal economy, Williams says. “These are not pie-in-the-sky projects,” she says, citing ocean-monitoring projects used by fisheries managers and the U.S. Coast Guard, efforts to combat invasive species, and research on the impact of climate change.

    Some projects have been put on hold at a critical juncture. One example, Williams says, is a coastal prairie-restoration project run by the Bodega marine lab. This postcard-pretty habitat along the Sonoma County coast is being taken over by invasive grasses. The eradication plan called for a series of mowing and herbicide treatments followed by plantings of native species. The first treatment was completed last fall, but the funds are now frozen for the second round of grass removal, which should be happening now. “Unless the funding is restored and we can do the second removal, the seed bank of these weeds will germinate,” causing “an even bigger problem” than existed before the program started, Williams says.

    The restoration project was also paying tuition, fees, and a stipend for Tawny Mata, an ecology graduate student at UC Davis. The funding freeze “has pretty much left me up in the air about how I'll finish my Ph.D.,” Mata says. A teaching assistantship is paying her bills this quarter, but beyond that Mata isn't sure how she'll manage. “I've been contacting any professor I've ever worked with to see if they have any money lying around.” She's not alone: A recent e-mail survey found that at least 24 out of about 160 students in her program had lost at least some funding.

    Across UC Davis, 60 projects received stop-work orders, says Jan Hopmans, chair of the university's Department of Land, Air and Water Resources—20 in his department alone. “Many of these grants are for a few hundred thousand to a few million dollars,” Hopmans says. “We have 50 employees just in my department for whom we have in principle no funding at this time.” Researchers, students, and technicians have been reassigned to projects with other sources of funding where possible, Hopmans says, but so far 13 people have received layoff notices.

    Nonprofit groups are also feeling the pain. “A lot of our grantees are relatively small organizations, and some of them will go out of business if this goes on too long,” says Samuel Schuchat, executive officer of the Coastal Conservancy, the state agency charged with administering bond-funded grants for coastal research and conservation. One such program, the Invasive Spartina Project, an effort to eradicate invasive Spartina cordgrass from the San Francisco Bay, would be especially painful to lose, say Schuchat and others. The state has already invested nearly $10 million in the project, which has reduced the area covered by the grass by 90% since 2006 and is on course to eradicate it by 2012, says Peggy Olofson, the project's director. Olofson has cobbled together money to run a scaled-down operation this year, but beyond that the future is uncertain.

    How long the bond funds will remain frozen is unclear, but all eyes are on Sacramento, where the governor and state legislators are wrangling over how to close the budget gap—a necessary first step toward restoring the state's credit rating and its ability to sell bonds. Only then will those affected by the freeze be able to start thinking of a thaw.


    Stem Cell Institute Looks for New Ways to Raise Cash

    1. Greg Miller

    In 2004, California voters authorized the state to raise $3 billion for stem cell research through the sale of bonds. Now, the state's finances are in disarray and its bonds aren't selling, forcing the California Institute for Regenerative Medicine (CIRM) to look for new ways to raise cash. At a meeting last week in Burlingame, California, the institute's leaders discussed alternatives and provisionally approved funding for $58 million in new training grants.

    Although the state's recent decision to freeze funding on projects tied to bond sales has had a chilling impact on many researchers in the state (see main text), the freeze has not had a big immediate impact on CIRM, says Vice President for Operations John Robson. As of 1 January, the institute had $158 million on hand, thanks to proceeds from bond sales that took place before the recent economic nosedive. That should be enough to maintain operations and honor previous funding commitments for several months, Robson says: “We feel like we have enough money to carry us to about November.”

    But then what? CIRM estimates it would need to raise close to $136 million just to continue funding ongoing commitments through the end of 2010; it would need a total of $377 million to also fund projects that have already been given preliminary approval from its board and anticipated new programs. At the meeting, CIRM Chairman Robert Klein proposed raising money by selling private placement bonds to wealthy investors interested in the potential societal benefits of stem cell research.


    Life Scientists Cautious About Dual-Use Research, Study Finds

    1. Yudhijit Bhattacharjee

    Some life scientists are changing the way they do business because of security concerns, according to a U.S. survey released this week.

    Researchers and policymakers in the United States have been hotly debating the need for new government regulations to prevent the misuse of life sciences research by terrorists and other bad actors. Even without such regulations, according to the survey, a few scientists are avoiding “dual use” research projects with the potential for harm; some are shying away from international collaborations; others are excluding foreign graduate students and postdocs from certain lines of work and censoring themselves while talking about their research.

    In all, 15% of the nearly 2000 life scientists who responded to the survey, conducted in late 2007 by the National Research Council and AAAS (publisher of Science), reported having changed their behavior in one or more of those ways. “It is a surprisingly high number,” says study chair Ronald Atlas, a microbiologist at the University of Louisville in Kentucky. He finds it worrisome that security concerns may be impinging on the traditional openness of research in the life sciences. “What's not clear is whether the community is overreacting or if this is an appropriate response,” Atlas says.


    Some researchers are avoiding certain projects because of security concerns.


    The finding is also an implicit endorsement of the popular argument among academics for letting scientists police themselves on dual-use research rather than imposing government-mandated rules. The National Science Advisory Board for Biosecurity endorsed that self-governance approach in recommendations to the government in 2007, but federal officials have not yet decided what the policy should be.

    Richard Ebright, a chemist at Rutgers University, New Brunswick, who has argued in favor of tougher regulations, says he finds the survey results “hard to believe,” given that previous studies have shown that most scientists in the community aren't even aware of dual-use concerns. Ebright suspects that the survey, which was e-mailed to 10,000 life scientists who are members of AAAS, attracted an overwhelming proportion of responses from individuals who would “prefer not to see [government] regulations.” Atlas agrees that the survey may have captured “a biased group that had been thinking about this topic” and says that the findings “would require further verification from broader surveys.”

    The study authors say the survey results point to the need for clearer guidelines on what kinds of research might have the potential for dual use. “It's possible that some life scientists are being overcautious because there is no good definition of dual-use research,” Atlas says. Panelist Robert Cook-Deegan, a biosecurity expert at Duke University in Durham, North Carolina, says biosafety committees at some institutions are already working with their scientists to help evaluate the dual-use potential of research projects and respond accordingly.

    As an example, he cites a project led by Mark Denison of Vanderbilt University in Nashville, Tennessee, and Ralph Baric of the University of North Carolina, Chapel Hill, that set out to make a SARS-like virus using synthetic biology techniques. The researchers “thought about dual use with their biosafety committees all along, and we did a half-day workshop before their publication to talk about what should not be included in the final publication and why,” Cook-Deegan says. The paper was published in the 16 December 2008 issue of the Proceedings of the National Academy of Sciences, with minor modifications to the language and no data withheld. “It's a really nice example of scientists taking dual use seriously,” he says.


    Some Neglected Diseases Are More Neglected Than Others

    1. Martin Enserink

    In a bird's-eye view of who's fighting diseases among the poor, Ireland and the United States stand out for largess; Germany and Japan seem skimpy. As for targets, the battle against dengue is relatively well-funded, whereas pneumonia, meningitis, and diarrhea get mere table crumbs. The “big three”—HIV/AIDS, malaria, and tuberculosis—together soak up some 80% of the money.

    Those are some conclusions from the first global study tracking spending on research and development for “neglected diseases.” The authors, led by Mary Moran of the George Institute for International Health in Sydney, Australia, hope that the study will help grantmakers decide where best to put their money and spur countries into action.

    With the help of an expert panel, the team first decided which diseases currently suffered scientific neglect; to qualify, they had to occur disproportionately in low- or middle-income countries, and there had to be a need for new vaccines, drugs, or diagnostics that the market fails to address. The team then painstakingly tried to trace every grant or investment made in 2007 by governments, institutes, charities, and companies to address those needs. The paper, published in PLoS Medicine this week, documents a total of $2.5 billion spent on 30 diseases.


    Government agencies pick up the biggest part of the tab, with the U.S. National Institutes of Health accounting for 42% of the spending. (The Bill and Melinda Gates Foundation comes in second at 18%.) Per capita, however, Ireland is the biggest public spender.

    Comparing that spending with the estimated global burden of each disease, expressed in disability-adjusted life years (DALYs), a measure that aims to capture both illness and death, shows that R&D funders have flocked to the big three, while other diseases with an even bigger burden get little R&D attention. The accuracy of the DALY estimates, which the World Health Organization last updated in 2004, is under constant debate. Moran explains that funders may have other reasons to pick a disease, such as the expectation that investments will pay off quickly; that's why the team mentioned DALYs only briefly in its report. Still, it's clear that some diseases are underfunded, she says.

    Indeed, the data show that the big three no longer deserve the term “neglected,” says David Molyneux of the Liverpool School of Tropical Medicine in the United Kingdom, who calls the paper “impressive.” Molyneux says policymakers must “shift resources” to respiratory infections, diarrhea, and other underfunded diseases. Some can be controlled without new research, by scaling up current, and often cheap, methods. (See Three Q's on p. 695.)

    The team, which has funding from the Bill and Melinda Gates Foundation to do the survey annually through 2013, is already working on next year's edition. Next time, Moran hopes to include the contributions of four significant funders—India, China, and pharma giants Merck and Wyeth—that failed to submit data this year. But she worries that the economic downturn may lead some funders to spend less where more is needed.


    Invisibility Umbrella Would Let Future Harry Potters See the Light

    1. Adrian Cho

    Two years ago, physicists blurred the distinction between science and fiction by producing a shell-like “invisibility cloak” that made both itself and an object inside it undetectable—albeit when viewed with microwaves of a specific frequency. Now, a team from Hong Kong has gone one better with a theoretical scheme for an “invisibility umbrella” that can make both itself and an object placed beside it disappear. The previous cloak ferried incoming light around the enclosed object, rendering it blind to the outside world. In contrast, the umbrella leaves the hidden thing out in the open, so it can “see” its surroundings.


    An object scatters light waves traveling from left to right (top). With its embedded antiobject, the invisibility umbrella counteracts the scattering, rendering both things invisible.


    It's still a far cry from the magic garment that enables Harry Potter to sneak around Hogwarts Castle unseen and spy on others, but the concept draws rave reviews from other researchers. “It's an absolutely brilliant idea,” says physicist Ulf Leonhardt of the University of St. Andrews in the U.K., one of the pioneers of cloaking theory.

    In devising the scheme, Yun Lai, Che Ting Chan, and colleagues at the Hong Kong University of Science and Technology in Kowloon melded two earlier approaches. In 2005, Andrea Alu and Nader Engheta, electrical engineers at the University of Pennsylvania, predicted that researchers could make an object nearly invisible by coating it with a tailored layer of “metamaterial”—an assemblage of metallic rods and C-shaped rings that interacts with electromagnetic radiation in novel ways—counteracting the thing's tendency to scatter light.

    In contrast, in May 2006, Leonhardt and, independently, John Pendry of Imperial College London imagined stretching space so that the paths of light rays bend around an object. They then calculated how they might mimic that impossible stretching by sculpting the properties of a shell of metamaterial. Building on Pendry's implementation of such “transformation optics,” experimenters David Schurig and David Smith at Duke University in Durham, North Carolina, produced a microwave cloak just 5 months later (Science, 20 October 2006, p. 403).

    Chan and colleagues struck a middle ground. In theory and simulations, they first used transformation optics to perfectly cancel the scattering from a cylindrical post of ordinary material by coating it in metamaterial. The researchers then realized that they could make an object near the now-invisible post disappear, too.

    The trick, they report in a paper to be published in Physical Review Letters, is to embed a matching “anti-object”—the metamaterial equivalent of a voodoo doll—in the outer layer of the post. The scattering from the embedded anti-object exactly cancels the scattering from the object, Chan says, “so it looks as if there is nothing there.” Because the hidden object remains outside the post or umbrella, it can detect light from its surroundings.

    The scheme does have limitations, Pendry notes. The umbrella works for only a single frequency, has to be specifically tailored to the object to be hidden, and won't completely hide something that absorbs light. Still, Pendry says, “that's carping on my part—it's really a neat idea.”

    Making an invisibility umbrella even for microwaves may be challenging. It requires “left-handed” metamaterials that bend light in an especially strange way and are difficult to make. Still, Leonhardt says, “it's clear that someone will do this in the future.” Given current progress, don't be surprised if some wizard of an experimenter does it sooner rather than later.


    From the Science Policy Blog

    As the global economy continued to melt down, scientists saw some bright spots. But the news wasn't all good, and some of it was downright odd. Here are some highlights from Science's science policy blog, ScienceInsider:

    U.S. scientists saw more dollar signs this week when the U.S. Senate debated billions for federal research as part of a massive economic stimulus package. ScienceInsider has a breakdown of who would get what. Still, some argue that money alone won't be sufficient to help young students who want to pursue scientific careers; an overhaul of the U.S. research system is needed. Things weren't as rosy north of the border: On 27 January, the Canadian prime minister unveiled a budget that makes some big cuts to the country's main source of research grants. Given the prospects for flush funding in the United States, Canadian scientists worry about a brain drain. Scientists in France are even more riled up, thanks to an incendiary speech by President Nicolas Sarkozy, which lambasted the country's research system as “infantilizing and paralyzing.”

    Also making headlines: two chemical elements. Officials at Los Alamos National Laboratory are scratching their heads over a mysterious case of beryllium contamination. Adding to the intrigue is the source of the harmful chemical: a part of the lab known as Technical Area 41. Meanwhile, off dry land, scientists have started dumping 6 tons of iron into the Southern Ocean as part of a controversial geoengineering project that aims to increase the ocean's ability to absorb CO2.

    And finally, in a bit of bizarre news, an unnamed senior official at the U.S. National Science Foundation apparently viewed pornography on a work computer over 2 years. The incident has caught the attention of Senator Charles Grassley (R-IA), who now questions whether the agency deserves the $3 billion the U.S. House of Representatives has proposed giving it as part of the economic stimulus package.

    For the full postings and more, visit ScienceInsider.


    Looking for a Little Luck

    1. Leslie Roberts

    After another year of setbacks—and some real gains—the global polio eradication initiative continues with more support than ever.

    GENEVA, SWITZERLAND—Soon after she took office 2 years ago, Margaret Chan called an urgent meeting. The time had come, said the new director-general of the World Health Organization (WHO), to take a hard look at the global polio-eradication initiative, which by then was 6 years past deadline, a couple of billion dollars over budget, and facing increasing questions about its feasibility from scientists and tapped-out donors. She wanted no more grand promises of when the virus would be vanquished from the planet. Instead, Chan and the “major stakeholders”—the partner organizations, donors, and countries—launched an “intensified” 2-year program, setting measurable milestones by which to judge progress. The leaders of the global initiative, a collaborative effort based at WHO, were to report back in February 2009, at which time the world could reassess its massive investment in the biggest global health program ever.

    That moment of reckoning is here, and the initiative has met only one of the milestones set 2 years ago. Global polio cases in 2008 totaled 1643, actually higher than the 1315 recorded in 2007, and the virus remains entrenched in the last four countries where, for reasons both social and biological, it refuses to budge.

    Still, no one is talking about pulling the plug. If anything, the beleaguered program has garnered more support and more money than it did several years ago. Just last month, the Bill and Melinda Gates Foundation, Rotary International, and the U.K. and German governments pledged $635 million for polio eradication. “Polio has to succeed” is the widely voiced sentiment among Chan and other global health leaders, not only because of the huge investment—20-plus years and nearly $6 billion—but also because of the unsettling realization that there is no palatable way out (Science, 20 April 2007, p. 362). Stopping now, so close to the finish, would mean losing the spectacular gains of the past 20 years—a defeat that would certainly be the death knell for other potential global eradication projects, like those for malaria or measles, says Peter Wright, an infectious-disease expert at Dartmouth College who advises the eradication initiative. And that is a decision no one is yet ready to make.

    That leaves the Global Polio Eradication Initiative (GPEI)—a collaboration led by WHO, Rotary International, UNICEF, and the U.S. Centers for Disease Control and Prevention—trying every trick in the book to beat the virus into submission. GPEI has a new 5-year plan that calls for reaching those unmet milestones—and far more. The program is investing heavily in research on improved vaccines that earlier program leaders swore would never be necessary; the virus would be gone by the time one could be developed. In a major departure, they are rethinking whether the world can ever safely stop vaccinating against polio, the fundamental assumption on which polio eradication was sold some 20 years ago (see sidebar, p. 705). And perhaps most of all, they are hoping for a little luck.



    For a long time, it looked like the polio warriors had the virus licked. Soon after the initiative was launched in 1988, with the confident prediction that polio would be gone by 2000, the program began dispatching the virus from more than 100 countries in quick succession, using the tried-and-true approach that had already eradicated polio in the Western Hemisphere: supplementing routine polio immunizations with huge countrywide campaigns several times a year to deliver drops of Albert Sabin's oral polio vaccine (OPV) to every child under age 5.

    By 2000, global cases had fallen 99% from 350,000 to 791, reaching an all-time low of 483 in 2001. In the process, one of the three wild poliovirus serotypes, type 2, was eradicated almost inadvertently—providing a proof of concept that the ambitious plan was indeed feasible. By 2006, type 1 and type 3 virus were cornered in just four “endemic” countries—India, Nigeria, Afghanistan, and Pakistan—where transmission has never been interrupted (Science, 26 March 2004, p. 1960).

    But there the initiative stalled, with the four endemic countries periodically erupting and reinfecting other polio-free countries and the global case count hovering between about 1000 and 2000.

    As skepticism mounted among scientists and weary donors, a few advocated throwing in the towel on eradication and concentrating instead on keeping the virus in check (Science, 12 May 2006, p. 832). Meanwhile, Bruce Aylward, the peripatetic and unfailingly optimistic M.D./MPH who has led the effort since 1998, kept insisting that success was just around the corner—just another year away.

    That was the context in which Chan launched the intensified program, pouring in more money and resources to determine once and for all whether eradication could be achieved. The answer, everyone agreed, depended on progress in the four endemic countries.

    India seesaws

    In India, more than in any other country, the polio fighters were banking on a win in 2007–08. The plan was to deal a “mortal blow” to poliovirus type 1, considered the worst player because it causes more paralytic disease and spreads faster than the other remaining wild virus, type 3. Once type 1 was dispatched, mopping up type 3 in India would be easy, they predicted, and would show the world the program was on track.

    “If we can do it in India, the toughest place in the world, we can do it anywhere,” says Aylward. Early on, polio experts realized India was different; instead of the three to four doses of OPV that had sufficed to stop poliovirus transmission elsewhere, in some parts of northern India, children needed eight or 10 doses to be protected—and still some became paralyzed. In the other endemic countries, GPEI says the problem is “a failure to vaccinate”; in India, by contrast, the problem is compounded by “a failure of the vaccine.” The country's huge population, explosive birthrate, and crowded, squalid conditions combine to create an ideal environment for the virus, which is transmitted by feces.


    Nigeria again has a runaway polio epidemic.


    To tackle this “pernicious transmission,” as Aylward calls it, in early 2005 GPEI helped rush into use a new, more immunogenic version of OPV—a monovalent form that focused all its firepower on just type 1 (mOPV1). Later that year, mOPV3 was introduced (Science, 14 January 2005, p. 190). With the new vaccines in hand, northern India launched its sequential strategy, vowing to wipe out type 1 by the end of 2008.

    Volunteers flooded the country with oral polio drops, upping rounds from every 2 months to every 4 weeks and focusing on the toughest districts in the crowded, impoverished states of Uttar Pradesh, outside Delhi, where circulation was most intense, and Bihar, some 800 kilometers east.

    The results were stunning. Type 1 cases across the country dropped from 648 in 2006 to 73 in 2008. Most remarkable, Uttar Pradesh, which Aylward calls the wellspring of polio in the country—“every virus in India since 1999 has been linked to that area,” he says—went 12 months without a case. “It is really a hallmark [achievement],” says Samuel Katz, a polio expert at Duke University in Durham, North Carolina, who directs GPEI's newly reconstituted research advisory committee.

    But in early 2008, India was blindsided by a walloping epidemic of type 3 polio. Although the program had continued to use occasional rounds of mOPV3 and the trivalent formulation, tOPV, to forestall just such an event, “we didn't get the balance right,” concedes David Heymann, who oversees the polio program as WHO's assistant director-general for infectious diseases.

    Then in June 2008, type 1 came back to Uttar Pradesh. Genetic analyses showed it wasn't a local Uttar Pradesh virus, with its distinct genetic signature—instead, it was “imported” from Bihar. To scientists, the distinction was important—it meant transmission in Uttar Pradesh had indeed stopped for the first time ever—but that still left the country battling an epidemic on two fronts, with cases in 2008 down from 2007 but still, at 556, alarmingly high.

    At a November 2008 meeting, the India Expert Advisory Group, which oversees the country's effort, vowed to continue the fight into 2009, again focusing on type 1 but adding more doses of mOPV3 to keep that serotype in check. As a contingency plan, WHO and partners are testing a higher potency mOPV1, and the country is exploring whether adding doses of inactivated polio vaccine can help boost immunity in young children.

    All bets are still on India to be the first of the four endemic countries to stop transmission of the wild virus. India is the “key to donor confidence,” says Heymann. “We need a victory.”

    Nightmare in Nigeria

    In contrast, few expected much progress in northern Nigeria, where opposition, apathy, political instability, and corruption have stymied the program for years. But even realists didn't necessarily expect an outbreak of the magnitude that struck in 2008, in which cases in some areas were 10 times higher than in 2007. “Polio in Nigeria remains a nightmare,” says Oyewale Tomori of Redeemer's University near Lagos, head of the country's expert polio advisory group.

    In May 2008, WHO issued a blunt warning that Nigeria posed a risk to the rest of the world, threatening to derail the entire global effort. It came close to doing that in 2003–04, when suspicions about vaccine safety led several Muslim states in northern Nigeria to stop all polio vaccination for up to a year (Science, 2 July 2004, p. 24). As a result, virus from Nigeria reinfected 20 previously polio-free countries, as far away as Indonesia.

    Rumors about vaccine contamination are no longer the major impediment to eradication; instead, Nigeria's problems are largely “operational,” say GPEI officials, citing a lack of political will and the government's failure to provide even the most rudimentary health services. Others more bluntly refer to “gross incompetence” and say graft and corruption figure heavily. As Tomori explains, in the past, vaccinators might have been promised 40 Nigerian nairas a day for their work, but by the time government officials skimmed off their share, each may have received about 4. Those problems have now been fixed, say GPEI officials.

    At the epicenter of the epidemic in Kano state, 68% of all children have received fewer than three doses of OPV, and up to 30% are “zero-dose.” With 791 cases in 2008, Nigeria accounted for almost 50% of the global total. That number is especially frustrating to polio experts because stopping transmission in Nigeria should be a cinch compared with India. Studies by Nicholas Grassly and colleagues at Imperial College London have shown that vaccine efficacy is high there; that means that transmission of the virus should stop when population immunity reaches roughly 80%.
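    The link Grassly's group draws between vaccine efficacy and that roughly 80% figure follows from the standard herd-immunity threshold. In outline (the article does not give the formula, and the reproduction number used here is an illustrative value, not one reported in the story):

    ```latex
    % Transmission stops once the immune fraction of the population
    % exceeds the herd-immunity threshold p_c:
    %
    %   p_c = 1 - \frac{1}{R_0}
    %
    % where R_0 is the basic reproduction number (average number of
    % secondary infections caused by one case in a fully susceptible
    % population). Taking R_0 \approx 5 as an illustrative value gives
    %
    %   p_c = 1 - \tfrac{1}{5} = 0.80,
    %
    % i.e., roughly 80% population immunity.
    ```

    Where vaccine efficacy is high, as in Nigeria, fewer doses per child are needed to push immunity past that threshold; in northern India, poor per-dose efficacy makes the same threshold far harder to reach.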

    Now, as in 2003, WHO and world leaders are trying to shame Nigeria into action. Last May, the World Health Assembly passed a resolution singling out Nigeria and calling on the country to quickly stop its runaway outbreak.

    The public humiliation may be having the desired effect. In July, President Umaru Yar'Adua vowed to redouble the effort. The ineffective head of the national polio program has been replaced, the third such change in 3 years. “This time it is different; the president is on board,” asserts Aylward. Tomori is more circumspect, saying he will wait to see whether this high-level commitment actually translates to action on the ground. Meanwhile, in 2008, poliovirus from Nigeria spread to seven West African countries. Other war-torn countries, including Chad and Sudan, are still grappling with epidemics sparked by earlier “importations” from Nigeria.

    Mixed bag.

    Despite significant advances, polio cases in 2008 remained high in the four endemic countries.


    Perils in Afghanistan and Pakistan

    In the other two hot spots, Afghanistan and Pakistan, violence, political turmoil, religious opposition, and the fierce autonomy of local leaders render eradication all but impossible. Large swaths of both countries are “no-go” zones where WHO and other United Nations personnel are not allowed to operate. National polio teams can still get in but are justifiably leery of doing so. Even in “accessible” areas, a “climate of fear” prevails, and vaccination teams may report going more often than they actually do, says epidemiologist Rudi Tangermann, who oversees efforts in the two countries from Geneva.

    In mid-2007, GPEI thought it had made significant headway; a “third party” had brokered an agreement with the Taliban to let polio vaccinators work unimpeded. Despite that agreement, in March 2008, two polio workers and their driver were killed by a suicide bomber in southern Afghanistan, where they were traveling to prepare for a vaccination campaign.

    Surveillance and monitoring are compromised as well. “We are peering in from the outside,” concedes Aylward. The countries constitute one epidemiologic block, with two transmission zones where the wild virus travels freely across the border. One is in Pakistan's rugged and inhospitable North-West Frontier Province and the federally administered tribal areas, where the Taliban and Al Qaeda are resurgent. The second extends from southern Afghanistan, near Kandahar, through Baluchistan, and then stretches all the way to northern Sindh in central Pakistan.

    Of the two, Pakistan is the bigger worry, says Tangermann. The number of cases rose in Afghanistan in 2008, but nowhere near as high as they did in Pakistan, where type 1 exploded and the virus spread into previously polio-free areas. In Afghanistan, President Hamid Karzai has pledged his support for eradication; Pakistan, on the other hand, “must become more committed under its new government,” says Heymann.

    Fundamentalist leaders in Pakistan have issued fatwahs saying the vaccine is unsafe and threatening vaccinators. “Refusals” have risen considerably. In February 2007, a Pakistani doctor and his driver were killed by a remote bomb while they were returning from a village where they were trying to persuade parents to let their children be vaccinated.

    Equally unsettling, the Geneva team suspects that the program in Pakistan is weaker than they imagined and that the viral foe may be tougher. Earlier reports that vaccination teams reached 95% of the target children seem to have been fabricated, says Tangermann. And recent studies by Grassly and colleagues at Imperial suggest that viral transmission is much more efficient in Pakistan than previously believed, closer to that of India than that of Nigeria. “Pakistan is the only place we really have questions about what we are dealing with,” says Aylward.

    To get a better fix on the biology, WHO and Pakistani partners are planning studies to measure antibodies to the virus in children in Karachi, Peshawar, and Lahore. “We have to see how effective the vaccine is and how well the program is working,” says Aylward.

    On the political front, Aylward has been trying to work his magic. At a high-level meeting last December in Islamabad, he and other partners got assurances from Minister of Health Mir Aijaz Hussain Jakhrani that Pakistan would make eradication a priority.

    For now, says Heymann, the most the program can hope to achieve there is to show it can stop transmission in conflict-free regions, like Punjab, where despite repeated campaigns, circulation remains intense. For the other areas, they wait. “We may be quite slow in areas with security problems,” he says.

    Vote of confidence

    Despite the upsurge of cases in 2008, Aylward insists that the world is much closer to eradication than it was a year ago. Chan has declared polio eradication WHO's “top operational priority,” saying in a speech in June, “The credibility of not just WHO but of many other health initiatives is on the line.” She is organizing an independent review to figure out what went wrong in each country and what the program could do better.

    The global oversight body, the Advisory Committee on Polio Eradication, is on board as well; in December, it endorsed GPEI's strategic plan for 2009 to 2013. Although Aylward is leery of firm deadlines, the plan calls for interrupting type 1 transmission in India by the end of 2009 and type 3 the following year. They hope to wipe out both types in Afghanistan and Pakistan in 2010, but Nigeria might take a year longer. All that depends, of course, on donors keeping their checkbooks open and countries putting their mind and muscle behind eradication.

    The donors have stepped up. With the $635 million infusion from Gates and others, funding looks better than it has in years. The $255 million Gates grant is the largest single donation since Rotary International kicked off the effort in 1988 with $240 million.

    Aylward keeps up his unrelenting schedule, visiting endemic and reinfected countries to spur or prod them into action. For the toughest spots, the big guns go too, such as Chan, Heymann, and the newest advocate, Bill Gates, who is also championing malaria eradication and who visited India in December 2008 and Nigeria just last week.

    For now, the global health community seems willing to give the eradication initiative more time. There are still skeptics who say it will never be finished. But most agree with Wright. “It's terribly hard,” he says. “All the models suggest it is not a good idea to give up on the program.”

    “We won't let up,” said Aylward in an interview from the noisy Brazzaville airport in Congo en route to Islamabad. “I will personally push it over the line if I have to. We still have very long sleeves and lots of tricks up them if we need them.”


    Rethinking the Polio Endgame

    1. Leslie Roberts

    One of the toughest conundrums for the long-running Global Polio Eradication Initiative has been whether and when it would be safe to stop vaccinating once the virus is deemed gone. Now, the thinking is undergoing a major shift.

    The first big complication came in 1999 when scientists realized that the weakened virus used in the live oral polio vaccine (OPV) could revert to its neurovirulent form in rare cases and spark an epidemic. Thus was born the “OPV paradox”: OPV was necessary to eradicate the virus, but as long as OPV was in use, eradication could never be achieved. As a solution, World Health Organization (WHO) scientists proposed a plan: After the world was certified polio-free, all countries would stop using OPV simultaneously, as if at the stroke of midnight.

    Some scientists dismissed the idea as folly and instead advocated universal use of the inactivated polio vaccine (IPV), already widely used in developed countries. That would be the only way to ensure the world was really safe, they argued, and the only way to prevent a gross inequity in which poor countries bore all the risk of polio.

    For years, WHO maintained that such a switch wasn't feasible: IPV was too expensive for poor countries, had to be injected, and had not been proven effective in tropical settings. Now, experts such as Roland Sutter, who heads WHO research in Geneva, Switzerland, concede that IPV does have a role after all. “The world will be a much safer place if more countries use IPV,” he says.

    To make that possible, WHO is now dusting off some earlier studies and investing heavily in new research into a cheaper, more effective version of IPV. WHO is looking at “dose sparing” strategies that could bring down the cost. In Cuba and Oman, it is testing the efficacy of using one-fifth the normal dose, delivered intradermally with an injection gun instead of intramuscularly with a needle. Other projects are trying to “stretch” the antigen with new adjuvants.

    Another big push is for what is called a “Sabin IPV.” One of the drawbacks of the standard Salk IPV is that production starts with the dangerous wild virus, which is then killed. To reduce the chances of an accidental release, IPV is manufactured only in facilities that operate under strict bio-containment procedures and only in countries that maintain a very high population immunity against polio. Both requirements rule out transferring the technology to developing-country manufacturers, which would bring down the cost.

    A Sabin IPV would use the less infectious attenuated strain from the oral vaccine as its seed stock, providing “a margin of safety” should an accident occur, says Sutter. Several clinical trials of a Sabin IPV are ongoing; if all goes well, it could be introduced within 5 to 8 years.

    Ultimately, says Bruce Aylward, who runs WHO's eradication initiative, “I want something much better than Sabin IPV.” Several groups are working on manipulating the virus to make safer seed strains that could be handled under less stringent safeguards. It's still early days, but there are several promising leads, including a virus that can't survive at body temperature.

    Sutter and Aylward say each country will decide whether to continue vaccinating. But if countries do choose to continue, they want to have in place a “cost neutral” vaccine that delivers the same protection as OPV at the same price—without the risk.


    Agreeing to Disagree

    1. Elizabeth Pennisi

    Two friends debate the relative importance of kinship in the evolution of complex social systems in insects.

    Heave ho.

    Weaver ants work together to build nests for colonies with millions of members.


    When Bert Hölldobler and Edward O. Wilson wrote their first book, The Ants, it was a labor of love. Passionate about the insects they studied, they produced a Pulitzer Prize-winning work that brought worldwide attention to the sophisticated social world of ants. But when they sat down to write the sequel more than a dozen years later, the honeymoon was over.

    For these two biologists, ants, bees, wasps, and termites hold a special place in nature, having achieved the ultimate in altruistic behavior. In these so-called eusocial species, one or a few females produce young that other individuals in the colony care for in lieu of producing their own offspring. Twenty years ago, Wilson, based at Harvard University, and Hölldobler, now based at Arizona State University, Tempe, attributed this seemingly self-sacrificing behavior to the unusually close kinship among colony members.

    Yet, over the past 4 years, the duo has struggled to reconcile new information indicating that some social insects are exceptions to the rules that guided their thinking when they wrote The Ants. Although their new book, The Superorganism, tackles the evolution of eusociality, the two researchers are at odds over how these social systems got started.

    “We agree on everything except what happens on that one magic step,” Wilson says. He no longer argues that kinship is the underlying driving force, suggesting instead that what tips the scales in favor of eusociality is a particular set of environmental conditions that leads to the guarding of the young in nests close to dependable food sources. Over the past 2 years, he has pushed hard to get the field to see his new perspective. And Hölldobler has pushed back. “There is nothing wrong with kin selection,” Hölldobler insists. “It's a very important concept to understanding the early evolution of eusociality—how it begins.”

    Others wonder what the fuss is about. “It's a tempest in a teapot,” says Hudson Kern Reeve, an evolutionary biologist at Cornell University. “It's distracting us from the really interesting questions.” But Wilson disagrees: “If we can't understand the beginning of eusociality, our whole view of it is wobbly.”

    Living large

    Eusociality represents the ultimate in successful living, say Hölldobler and Wilson. Summed together, the 20,000 species of ants, bees, wasps, and termites represent two-thirds of the insect biomass, even though they account for just 2% of the insect species. Ants in tropical rainforests outweigh all the vertebrate inhabitants combined. (All ants and termites are eusocial; some bees and wasps are.)

    Their colonies, nests, and hives actually represent “superorganisms,” a concept introduced 80 years ago by William Morton Wheeler and reemphasized by Hölldobler and Wilson in their new book. Many organisms form groups, but in the superorganism, the group operates with a minimum of internal strife; most group members work toward the common good with little apparent mind to individual self-interest.

    That attitude had Charles Darwin flummoxed. He called social insects a “special difficulty” capable of toppling his theory of evolution. Natural selection should favor individuals that always put themselves—and the propagation of their own genes—first. Yet that didn't seem to be happening with worker bees or soldier ants, which often have no reproductive capability at all and instead strive to keep their queen and her offspring well-fed and protected. Darwin suspected that family ties might explain this apparent paradox—with selection operating at the family and not the individual level.

    In the 1960s, William Hamilton, a British biologist building on work by a fellow countryman, geneticist J. B. S. Haldane, put forth a very convincing theory, often called kin selection, that explained eusociality: Such altruism could evolve when a queen propagated enough of her workers' genes to make it worth the workers' while to help her rather than reproduce independently.
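    Hamilton's condition, which the article describes only in words, is conventionally written as a simple inequality:

    ```latex
    % Hamilton's rule: an altruistic act is favored by selection when
    %
    %   rb > c
    %
    % where r is the genetic relatedness between actor and recipient,
    % b is the reproductive benefit the act confers on the recipient, and
    % c is the reproductive cost to the actor.
    ```

    The higher the relatedness r, the smaller the benefit-to-cost ratio needed for helping to pay off genetically, which is why the unusually close kinship inside haplodiploid colonies seemed, for decades, to explain sterile workers.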

    In the 1970s, Wilson became one of kin selection's greatest proponents. Textbooks embraced the idea, and theoreticians worked hard to refine their understanding of how kin selection worked. “It became dogma,” says Wilson. Yet when Wilson first started to put together the new book's chapter discussing the origin of eusociality, he ran into trouble. “In the last few years, the picture had changed considerably,” he says. Since the publication of The Ants, field biologists had discovered many variations on the eusociality theme. Eusocial termites, for example, do not have the so-called haplodiploid genetic system that was critical to Hamilton's explanation of kin selection, and some species that do have those genetics haven't evolved eusociality.

    Moreover, it became clear “there are counterforces that oppose close-pedigree kinship in the early evolution of social insects,” Wilson points out. Genetic diversity leads to greater resistance to disease and other destructive forces, and outbreeding does occur in these systems. He cites the example of primitive wasps, in which individuals in the same colony tend to become less related as time passes. He reasons that if close relatedness were so important, then these groups should continue to be made up of close kin. “Yet counterforces against the favoring of close kin in the origin of eusociality have not been taken into account by the theoreticians,” he criticizes.

    In 2004, Wilson wrote a manuscript on his new view and sent it to a half-dozen colleagues, working with them one on one to refine his ideas. At the time, Hölldobler was on board, and together they co-authored a perspective in the 20 September 2005 Proceedings of the National Academy of Sciences. They pointed out that three forces were at work in shaping an insect society: group, individual, and “collateral” kin selection (involving relatives other than offspring). Group selection occurs when one cooperating hive or nest out-competes another one; individual selection involves the survival and reproductive output of a particular ant, wasp, or bee; and kin selection has an impact when relatives other than offspring help spread shared genes.

    In that paper, Wilson and Hölldobler analyzed the conditions under which each type of selection tended to enhance or undermine cooperation, and how strongly. They concluded that the degree of relatedness primarily affects how quickly a social system changes but not whether cooperation is favored in the first place. “Eusociality … can, in theory at least, be initiated by group selection in either the presence or absence of close relatedness,” they wrote. But it could not arise without group selection.

    What was important was not relatedness, they argued, but the evolution of a gene variant that caused offspring to linger at the nest rather than disperse and an environment that favored cooperation. Researchers have yet to identify such an allele, but several researchers are looking, particularly in species that sometimes cooperate and sometimes live solitary lives, depending on the circumstances. “The preadaptation for eusociality will always [involve] a family and in that family, the individuals will be closely related,” notes Wilson. “But that's very different from saying that close relatedness brought them to take that last step [into eusociality].”

    As evidence that kinship is not critical, they cited examples of species that appear to be on the first steps toward eusociality, such as xylocopine carpenter bees. These bees form temporary semisocial groups that begin when one member of a pair bullies the other into becoming a worker while the first becomes the queen. Workers that are related to the queen tend to leave, but nonrelatives tend to stick around, suggesting that kinship has little to do with the cohesiveness of the group. Likewise, primitively eusocial wasps will set up group living arrangements with either kin or nonkin.

    Conflict resolution.

    Hölldobler (top, left) and Wilson debated how to cover the origins of eusociality in their new book, which discusses social insects including termites. In dampwood termites (bottom), workers and nymphs feast on a queen killed when two colonies converged.


    Wilson and Hölldobler also argued that because eusociality is quite rare—these altruistic societies have emerged only about a dozen times and exist in just 15 out of the 2600 known insect and arthropod families—it takes just the right combination of factors to set the stage for it. For insects, the appropriate setting involves maternal behavior in which the female builds a nest, gathers food, and feeds the young. In that situation, it helps to have someone guarding the young while the queen is foraging.

    Moreover, if the environment has ample food but a lot of competition, “that kind of pressure would make a group superior over an individual,” says Wilson. Consider primitive termites that nest in dead trees in California. Many king-queen pairs settle into a tree, but eventually one colony will take over the entire trunk, depending on how strong a cooperative unit that pair establishes. “It's solid group selection,” says Wilson.

    The rift

    Wilson continued to expand on these ideas. In the January 2008 issue of BioScience, he reiterated that collateral kin selection does not play a significant role and reemphasized that the road to eusociality entails setting up cooperative housekeeping and foraging from a local, dependable food source.

    The numbers support his view, he says. No eusocial species exist among the 70,000 species of parasitoid wasps and their relatives that travel from prey to prey to lay their eggs, whereas eusociality arose seven times among the 55,000 aculeate (stinging) wasps that keep paralyzed prey in nests. Among the 9000 species of aphids and thrips, only the few that induce plant hosts to form galls—a nest of sorts—have become eusocial, even though most of them form groups. And out of 10,000 decapod crustaceans, only snapping shrimp, which live in sponges, are eusocial, he reported.

    In these settings, species with the flexibility to live in groups can win out. Wilson thinks today's eusocial species likely had ancestors similar to a Japanese stem-nesting xylocopine bee, Ceratina flavipes: Most of the time females make do on their own, but every once in a while, they pair up and divide the labor, setting the stage for a “group” to outdo individuals and for the tendency to form groups to be favored.

    And, he argues, it may not take many genes to get a group to form. Studies of bacteria and fungi show that a single gene can have a profound effect on dispersal. A loss of the dispersal instinct positions individuals to become helpers.

    About this time, Wilson began collaborating closely with David Sloan Wilson of Binghamton University in New York. After organizing a discussion group with about 20 others, they published an expanded explanation of these ideas in the December 2007 issue of The Quarterly Review of Biology and in the September-October 2008 issue of American Scientist. Kin-selection theory, they argue, is ineffective. “The theory that traditionalists use leads them anywhere they want to go” and fails to make useful predictions, Wilson asserts. “To make [a theory] really stand [up], you have to show that that's the only result that can come from your theory, and they haven't done that.”

    Meanwhile, Hölldobler became convinced that kin selection really is critical for the origin of eusociality after all. He became increasingly unhappy with what Wilson and he had written in 2005, concluding that they had not made a clear enough distinction between cooperation and eusociality and that their evidence included supposedly primitively eusocial species that were really more advanced than that. “You don't need to be closely related for the evolution of cooperative groups, but eusociality is very special,” he notes.

    “Reeve and I set the record straight,” he says, in the 5 June 2007 issue of the Proceedings of the National Academy of Sciences. They emphasized that because, mathematically, group and kin selection were equivalent, the argument that group rather than kin selection lay at the heart of the origin of eusociality was spurious. “The fuss about kin selection versus group selection is quite pointless,” says Hölldobler.
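    For readers outside the field, the kin-selection theory at issue here is usually summarized by Hamilton's rule, a standard formulation that is background to this debate rather than something drawn from the article itself. It states that a gene for altruistic helping is favored when

    ```latex
    % Hamilton's rule: an altruism-coding trait spreads when
    %   r = genetic relatedness of the actor to the beneficiary,
    %   b = reproductive benefit to the beneficiary,
    %   c = reproductive cost to the actor.
    \[ r\,b > c \]
    ```

    The Reeve-Hölldobler argument is that models framed in terms of groups and models framed in terms of this inequality are mathematically interchangeable descriptions of the same selection process.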

    He sees the evolution of eusociality as a two-step process activated by ecological conditions and operating on closely related individuals. First, a subsocial species must work out an arrangement in which offspring that are capable of reproducing nonetheless help the mother raise siblings. Next, that arrangement becomes cast in stone, with sterile workers.

    In primitive ants, workers are almost the size of the queen, still have functional ovaries, and can mate, but as long as the mother remains fertile, they usually refrain from laying eggs. Evolution to eusociality often stalls at this first primitive stage, Hölldobler notes. Colonies can never get bigger than what the queen and worker nestmates can police because individuals might otherwise “cheat” the system and reproduce on their own.

    But under certain circumstances, bigger, more harmonious groups outdo rivals and evolve full-fledged eusocial networks, in which millions of sterile specialized individuals work together as a superorganism. In these cases, competition between groups favors cooperation within a group, and competition between individuals in the group dissipates. “At that point, it's not important to have high relatedness,” says Hölldobler. And, as he and Wilson agree, it can be healthier for a colony to have genetic diversity.

    Promising beginnings.

    At their nests, a female paper wasp (top) and a female sweat bee set up shop where early offspring may become workers that help raise the next generation.


    Hölldobler's view appears to be the majority position. “On this one issue, it is largely Wilson on one side and most of the rest of us on the other,” says David Queller, an evolutionary biologist at Rice University in Houston, Texas. “Like many researchers, I have become convinced that variation in costs and benefits is probably more important than variation in relatedness favored by Hamilton, but relatedness is still an essential part of it, and it all fits into Hamilton's kin-selection paradigm.”

    Yet there are dissenters other than Wilson and Wilson. “Relatedness is not needed to explain the origin of worker behavior,” says James Hunt, a biologist who studies wasps at North Carolina State University in Raleigh. “A mountain of data [has shown] that neither high relatedness nor asymmetry of relatedness among nestmates is necessary for the origin of sociality.”

    This rift slowed progress on The Superorganism. Originally, Wilson was supposed to write the chapter on the genetics and evolution of eusociality, but after 5 years it was still not done, with the two authors at odds about how to proceed. Finally, in July 2007, Wilson handed his pen over to Hölldobler, who labored over a version that Wilson eventually said he could live with as long as it included references to Wilson's perspective. The chapter “represents the best compromise that Bert and I could reach,” says Wilson. And now, on the book-signing circuit for their new book, Wilson and Hölldobler downplay their points of disagreement. “We were worried that that chapter would distract the reader from the wonderful natural history,” says Hölldobler. So they beg off questions related to the controversy.

    And they remain fast friends. In the end, says Wilson, “we agreed to disagree.”


    On the Origin of Art and Symbolism

    1. Michael Balter

    How and when was the artistic gift born? In the second essay in Science's series in honor of the Year of Darwin, Michael Balter discusses the evolution of the human ability to develop mental images and convey them through abstract means such as drawing and sculpting.


    Since their discovery by French spelunkers in 1994, the magnificent lions, horses, and rhinos that seem to leap from the walls of Chauvet Cave in southern France have reigned as the world's oldest cave paintings. Expertly composed in red ochre and black charcoal, the vivid drawings demonstrate that the artistic gift stretches back more than 30,000 years. These paintings are almost sure to be mentioned in any article or paper about the earliest art. But what do they really tell us about the origins of artistic expression?

    The prehistoric humans who decorated Chauvet's walls by torchlight arrived at the cave with their artistic genius already in full flower. And so, most researchers agree that the origins of art cannot simply be pegged to the latest discovery of ancient paintings or sculpture. Some of the earliest art likely perished over the ages; much remains to be found; and archaeologists don't always agree on how to interpret what is unearthed. As a result, instead of chasing after art's first appearance, many researchers seek to understand its symbolic roots. After all, art is an aesthetic expression of something more fundamental: the cognitive ability to construct symbols that communicate meaning, whether they be the words that make up our languages, the musical sounds that convey emotion, or the dramatic paintings that, 30,000 years after their creation, caused the discoverers of the Chauvet Cave to break down in tears.


    While sites like Chauvet might be vivid examples of what some researchers still consider a “creative explosion” that began when modern humans colonized Europe about 40,000 years ago, an increasing number of prehistorians are tracing our symbolic roots much further back in time—and in some cases, to species ancestral to Homo sapiens. Like modern humans themselves, symbolic behavior seems to have its origins in Africa. Recent excavations have turned up elaborate stone tools, beads, and ochre dating back 100,000 years or more, although researchers are still debating which of these finds really demonstrate symbolic expression. But there's widespread agreement that the building blocks of symbolism preceded full-blown art. “When we talk about beads and art, we are actually talking about material technologies for symbolic expression that certainly postdate the origins of symbolic thought and communication, potentially by a very wide margin,” says archaeologist Dietrich Stout of University College London.

    The evolution of symbolism was once thought to have been as rapid as “flicking on a light switch,” as archaeologist Clive Gamble of the Royal Holloway, University of London, put it some years ago. But given new evidence that symbolic behavior appears long before cave paintings, Gamble now says that his much-cited comment needs to be modified: “It's a dimmer switch now, a stuttering candle.”

    As they more precisely pinpoint when symbolic behavior began, scientists are hoping they might one day crack the toughest question of all: What was its evolutionary advantage to humans? Did symbols, as many researchers suspect, serve as a social glue that helped tribes of early humans to survive and reproduce?

    Venus, phallus, or pebble?

    “I don't know much about Art, but I know what I like,” quipped the humorist and art critic Gelett Burgess back in 1906. For archaeologists, distinguishing art from nonart is still quite a challenge. Take the 6-centimeter-long piece of quartzite known as the Venus of Tan-Tan. Found in Morocco in 1999 next to a rich trove of stone tools estimated to be between 300,000 and 500,000 years old, it resembles a human figure with stubby arms and legs. Robert Bednarik, an independent archaeologist based in Caulfield South, Australia, insists that an ancient human deliberately modified the stone to make it look more like a person. If so, this objet d'art is so old that it was created not by our own species, which first appears in Africa nearly 200,000 years ago, but by one of our ancestors, perhaps the large-brained H. heidelbergensis, thought by some anthropologists to be the common ancestor of modern humans and Neandertals. That would mean that art is an extremely ancient part of the Homo repertoire. “Ignoring the few specimens we have of very early paleoart, explaining them away, or rejecting them out of hand does not serve this discipline well,” Bednarik wrote in a 2003 analysis of the Venus of Tan-Tan in Current Anthropology.

    Symmetry in stone.

    Some stone tools require a mental image to create.


    Yet many archaeologists are skeptical, arguing that the stone's resemblance to a human figure might be coincidence. Indeed, the debate over the Tan-Tan “figurine” is reminiscent of a similar controversy over a smaller stone discovered in 1981 at the site of Berekhat Ram in the Israeli-occupied Golan Heights. To some archaeologists, this 250,000-year-old object resembles a woman, but others argue that it was shaped by natural forces, and, in any case, looks more like a penguin or a phallus. Even after an exhaustive microscopic study concluded that the Berekhat Ram object had indeed been etched with a tool to emphasize what some consider its “head” and “arms,” many researchers have rejected it as a work of art. For some, proof of symbolic behavior requires evidence that the symbols had a commonly understood meaning and were shared within groups of people. For example, the hundreds of bone and stone “Venus figurines” found at sites across Eurasia beginning about 30,000 years ago were skillfully carved and follow a common motif. They are widely regarded not only as symbolic expression but also as full-fledged art.

    A roaring start.

    Researchers agree that Chauvet Cave's magnificent paintings, including these lions, are full-blown art.


    Thus many researchers are reluctant to accept rare, one-off discoveries like the Tan-Tan or Berekhat Ram objects as signs of symbolic behavior. “You can imagine [an ancient human] recognizing a resemblance but [the object] still hav[ing] no symbolic meaning at all,” says Philip Chase, an anthropologist at the University of Pennsylvania. Thomas Wynn, an anthropologist at the University of Colorado, Colorado Springs, agrees: “If it's a one-off, I don't think it counts. It's not sending a message to anyone.”

    Tools of the imagination

    Given how difficult it is to detect the earliest symbolic messages in the archaeological record, some researchers look instead for proxy behaviors that might have required similar cognitive abilities, such as toolmaking. Charles Darwin himself saw an evolutionary parallel between toolmaking and language, probably the most sophisticated form of symbolic behavior. “To chip a flint into the rudest tool,” Darwin wrote in The Descent of Man, demands a “perfect hand” as well adapted to that task as the “vocal organs” are to speaking.

    To many researchers, making sophisticated tools and using symbols both require the capacity to hold an abstract concept in one's head—and, in the case of the tool, to “impose” a predetermined form on raw material based on an abstract mental template. That kind of ability was probably not needed to make the earliest known tools, say Wynn and other researchers. These implements, which date back 2.6 million years, consist mostly of rocks that have been split in two and then sharpened to make simple chopping and scraping implements.

    Then, about 1.7 million years ago, large, teardrop-shaped tools called Acheulean hand axes appeared in Africa. Likely created by H. erectus and probably used to cut plants and butcher animals, these hand-held tools vary greatly in shape, and archaeologists have debated whether creating the earliest ones required an abstract mental template. But by about 500,000 years ago, ancient humans were creating more symmetrical Late Acheulean tools, which Wynn and many others argue are clear examples of an imposed form based on a mental template. Some have even argued that these skillfully crafted hand axes had symbolic meanings, for example to display prestige or even attract members of the opposite sex.

    Symbolic start.

    Some scientists argue that this 77,000-year-old engraved ochre shows symbolic capacity.


    The half-million-year mark also heralded the arrival of H. heidelbergensis, which had a much larger brain than H. erectus. Not long afterward, our African ancestors began to create a wide variety of finely crafted blades and projectile points, which allowed them to exploit their environment in more sophisticated ways, and so presumably enhance their survival and reproduction. Archaeologists refer to these tools as Middle Stone Age technology and agree that they did require mental templates. “The tools tell us that the hominid world was changing,” says Wynn.

    As one moves forward in time, humans appear able to imagine and create even more elaborate tools, sharpening their evolutionary edge in the battle for survival. By 260,000 years ago, for example, ancient humans at Twin Rivers in what is now Zambia could envision a complex finished tool and put it together in steps from different components. They left behind finely made blades and other tools that had been modified—usually by blunting or “backing” one edge—to be hafted onto handles, presumably made of wood. These so-called backed tools have been widely regarded as evidence of symbolic behavior when found at much younger sites. “This flexibility in stone tool manufacture [indicates] symbolic capabilities,” says archaeologist Sarah Wurz of the Iziko Museums of Cape Town in South Africa.

    Similar cognitive abilities were possibly required to make the famous 400,000-year-old wooden spears from Schöningen, Germany. One recent study concludes that these spears' creators—probably members of H. heidelbergensis—carried out at least eight preplanned steps spanning several days, including chopping tree branches with hand axes and shaping the spears with stone flakes.

    The idea that sophisticated toolmaking and symbolic thought require similar cognitive skills also gets some support from a surprising quarter: brain-imaging studies. Stout's team ran positron emission tomography scans on three archaeologists—all skillful stone knappers—as they made pre-Acheulean and Late Acheulean tools. Both methods turned on visual and motor areas of the brain. But only Late Acheulean knapping turned on circuits also linked to language, the team reported last year.

    Color me red

    At Twin Rivers, it's not just the tools that hint at incipient symbolic behavior. Early humans there also left behind at least 300 lumps of ochre and other pigments in a rainbow of colors: yellow, red, pink, brown, purple, and blue-black, some of which were gathered far from the site. Excavator Lawrence Barham of the University of Liverpool in the United Kingdom thinks they used the ochre to paint their bodies, though there's little hard evidence for this. Most archaeologists agree that body painting, as well as the wearing of personal ornaments such as bead necklaces, was a key way that early humans symbolically communicated social identity such as membership in a particular group, much as people today declare social allegiances and individual personalities by their clothing and jewelry.

    Yet while the Twin Rivers evidence is suggestive, it's hard to be sure how the ochre was actually used. There's little sign that it was ground into powder, as needed for decoration, says Ian Watts, an independent ochre expert in Athens. And even ground ochre could have had utilitarian uses, says archaeologist Lyn Wadley of the University of Witwatersrand in Johannesburg, South Africa. Modern-day experiments have shown that ground ochre can be used to tan animal hides, help stone tools adhere to bone or wooden handles, and even protect skin against mosquito bites.

    “We simply don't know how ancient people used ochre 300,000 years ago,” Wadley says. And since at that date the ochre users were not modern humans but our archaic ancestors, some experts are leery of assigning them symbolic savvy.

    Yet many archaeologists are willing to grant that our species, H. sapiens, was creating and using certain kinds of symbols by 75,000 years ago and perhaps much earlier. At sites such as Blombos Cave on South Africa's southern Cape, people left sophisticated tools, including elaborately crafted bone points, as well as perforated beads made from snail shells and pieces of red ochre engraved with what appear to be abstract designs. At this single site, a number of what many archaeologists consider diagnostic elements of symbolic behavior came together. And in work now in press, the Blombos team reports finding engraved ochre in levels dating back to 100,000 years ago (Science, 30 January, p. 569).

    Eye of the beholder.

    Archaeologists debate whether this modified stone was meant to represent a woman.


    There are other hints that the modern humans who ventured out of Africa around this time might also have engaged in symbolic behavior. At the Skhul rock shelter in Israel, humans left 100,000-year-old shell beads considered by some to be personal ornaments (Science, 23 June 2006, p. 1731). At the 92,000-year-old Qafzeh Cave site nearby, modern humans apparently strongly preferred the color red: Excavators have studied 71 pieces of bright red ochre associated with human burials. Some researchers argue that this represents an early case of “color symbolism,” citing the universal importance of red in historical cultures worldwide and the apparently great lengths to which early humans went to gather red ochre. “There is very strong circumstantial evidence for the very great antiquity of the color red as a symbolic category,” says anthropologist Sally McBrearty of the University of Connecticut, Storrs.

    These finds of colorful ochre, fancy tools, and beads have convinced many researchers that the building blocks of symbolism had emerged by at least 100,000 years ago and possibly much earlier. But why? What selective advantages did using symbols confer on our ancestors? To some scientists, the question is a no-brainer, especially when it is focused on the most sophisticated form of symbolic communication: language. The ability to communicate detailed, concrete information as well as abstract concepts allowed early humans to cooperate and plan for the future in ways unique to our species, thus enhancing their survival during rough times and boosting their reproductive success in good times. “What aspects of human social organization and adaptation wouldn't benefit from the evolution of language?” asked Terrence Deacon, a biological anthropologist at the University of California, Berkeley, in his influential book The Symbolic Species: The Coevolution of Language and the Brain. Deacon went on to list just some of the advantages: organizing hunts, sharing food, teaching toolmaking, sharing past experiences, and raising children. Indeed, many researchers have argued that symbolic communication is what held groups of early humans together as they explored new environments and endured climatic shifts.

    As for art and other nonlinguistic forms of symbolic behavior, they may also have been key to cementing these bonds, by expressing meanings that are difficult or impossible to put into words. In that way, artistic expression, including music, may have helped ensure the survival of the fittest. This may also explain why great art has such emotional force, because the most effective symbols are those that convey their messages the most powerfully—something the artists at Chauvet Cave seem to have understood very well.

    Additional links:

    The Blombos Cave Project

    Chauvet Cave, French Government Site in English

    Homo heidelbergensis

    Robert Bednarik's Web site



    Moonlighting in Mitochondria

    1. Martin G. Myers Jr.*
    1. Division of Metabolism, Endocrinology and Diabetes, Department of Internal Medicine, and Department of Molecular and Integrative Physiology, University of Michigan, Ann Arbor, MI 48109, USA. E-mail: mgmyers{at}

    Molecules known as signal transducers and activators of transcription (STATs) regulate gene expression in the nucleus in response to cell surface receptors that are activated by cytokines. On page 793 of this issue, Wegrzyn et al. (1) reveal that the isoform Stat3 also functions in another organelle—the mitochondria—to control cell respiration and metabolism. This finding not only reveals a new role for Stat3 but also implies a potential role in linking cellular signaling pathways to energy production.

    Stat3 proteins represent the canonical mediators of signals elicited by type I cytokine receptors at the cell surface (2). For instance, the adipocytokine leptin activates Stat3 in hypothalamic neurons to promote the expression of the catabolic neuropeptide proopiomelanocortin, thereby regulating whole-body energy intake and metabolism (3). The binding of a cytokine to its receptor triggers an intracellular cascade of events, beginning with the activation of an enzyme, Jak kinase, which is associated with the receptor's cytoplasmic domain. The activated receptor-Jak complex then recruits and phosphorylates a tyrosine residue in cognate STAT proteins. This modification causes the STAT protein to relocate to the nucleus, where, as a dimer, it binds to specific DNA sequences and promotes gene expression (see the figure). Thus, the well-understood job of STAT proteins is to transmit a transcriptional signal from the cell surface to the nucleus. The phosphorylation of some STAT proteins on a specific serine residue may also contribute to their regulation (2).

    Dual deployment.

    The activation of a cytokine receptor at the cell surface promotes the tyrosine phosphorylation (Tyr-P) of Stat3, which dimerizes and moves to the nucleus to control gene expression. Serine phosphorylation (Ser-P) of Stat3 appears to be required for its action in mitochondria, where it promotes increased oxidative phosphorylation. Because many stimuli promote the serine phosphorylation of Stat3, many signaling pathways could regulate mitochondrial respiration via Stat3.


    Wegrzyn et al. have now identified another crucial role for Stat3, the isoform that responds to cytokines of the interleukin-6 and −10 families (including leptin). These cytokines act in the immune system and many other organ systems to regulate diverse cellular processes, including differentiation, proliferation, and apoptosis (2). Noting that GRIM-19, a mitochondrial protein, interacts with Stat3 and inhibits Stat3 transcriptional activity (4–7), the authors investigated the potential mitochondrial location of Stat3, revealing that a fraction of cellular Stat3 resides within the mitochondria of mouse myocytes and hepatocytes. Here, Stat3 associates with GRIM-19-containing complexes I and II, which are components of the electron transport chain that generates energy by oxidative phosphorylation.

    Wegrzyn et al. determined that in a subset of mouse B lymphocytes devoid of Stat3, oxidative phosphorylation was reduced, a defect attributable to the diminished activity of complexes I and II. That the number of mitochondria and their content of the proteins that constitute complexes I and II were not altered in Stat3-null cells suggests that Stat3 regulates the activity, as opposed to the absolute amount (as would be expected for transcriptional regulation), of complexes I and II.

    The expression of Stat3 in these otherwise Stat3-null cells restored oxidative phosphorylation, and this rescue of mitochondrial function did not require the DNA binding domain, the dimerization motif, or the tyrosine phosphorylation site that controls Stat3 nuclear localization and transcriptional activity, consistent with a transcription-independent role for Stat3. By contrast, the conserved serine phosphorylation site on Stat3 was important: Expression of Stat3 with a mutation that prevented phosphorylation of this serine did not produce the rescue effect, whereas a phosphomimetic mutant at this site did. It remains to be determined whether Stat3 is unique among STAT proteins in localizing to and regulating mitochondria.

    The nontranscriptional function of Stat3 in mitochondria raises many questions about its precise role in the organelle. For instance, although the results of Wegrzyn et al. reveal the absolute requirement for Stat3 to maintain normal mitochondrial function, they do not speak directly to the potential role for Stat3 in the physiologic regulation of cellular respiration. Presumably, specific signaling pathways, such as those that regulate the serine phosphorylation of Stat3, modulate Stat3-dependent control of cellular respiration (see the figure). Whether the control of mitochondrial localization, complex I/II association, or some other step might underlie such regulation remains unclear, however.

    Although it is tempting to speculate that cytokines use Stat3 to coordinately regulate transcription and respiration, the inhibition of Stat3-dependent transcription by GRIM-19 suggests that the opposite—Stat3 performs each job at the expense of the other—is just as likely. Also, because many cytokine-independent intracellular signaling proteins (such as protein kinase C isoforms and mitogen-activated protein kinases) promote the serine phosphorylation of Stat3 (2), cytokines may not be the only, or even the major, controllers of Stat3-modulated oxidative phosphorylation.

    Although many questions will take substantial research to work out, the newly discovered mitochondrial function for Stat3 has the potential to dramatically expand the way we think about the roles of STAT proteins, as well as how canonical cellular signaling pathways may control cell energetics.



    Heavy Metals or Punk Rocks?

    1. Robert J. Bodnar*
    1. Department of Geosciences, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, USA. E-mail: rjb{at}

    At the recent World Copper Congress in Santiago, Chile, Rio Tinto's chief executive for copper, Bret Clayton, reported that copper consumption is expected to double over the next two decades (1); demand for other metals is expected to parallel this trend. These projected metal needs cannot be satisfied with known ore bodies. To locate new deposits, minerals exploration programs require robust genetic models for the formation of economic accumulations of metals. On page 764 of this issue, Wilkinson et al. (2) elucidate one of the least understood aspects of ore formation: the concentration of metals in hydrothermal solutions that deposited the ores.

    Fluid inclusions provide the best available tool for determining the physical and chemical conditions during ore formation (3). These microscopic samples (∼5 to 50 μm in diameter) of the ore-forming fluid are trapped in ore and non-ore minerals during mineralization; they thus record the temperature, pressure, and composition of the ore-forming fluid. Advanced microanalytical techniques allow individual fluid inclusions to be analyzed to determine ore metal concentrations (4–6).

    Wilkinson et al. now report unusually high metal contents of hydrothermal fluids from two ore districts containing sediment-hosted zinc-lead deposits, based on microanalysis of fluid inclusions. Other recent studies of fluid inclusions from copper (7) and gold (8) deposits also found much higher metal concentrations than would have been predicted on the basis of experimental data or theoretical models. If confirmed by further studies, these results have important implications for both the duration of the ore-forming process and the amounts of ore fluids needed to generate world-class ore deposits. This information is of crucial importance for understanding ore genesis, given that the duration of ore-forming systems is one of the major unknowns related to the formation of mineral deposits (9).

    Average continental crust contains about 70 ppm zinc (Zn) and 12.5 ppm lead (Pb). In contrast, average ore grades in Mississippi Valley-type (MVT) Zn-Pb deposits similar to those studied by Wilkinson et al. typically are about 6% Zn and 2% Pb by weight, representing enrichment factors of about 850 and 1600, respectively. Thus, metals must be scavenged from a large volume of rock with average crustal metal values and concentrated into a much smaller rock volume to generate economic deposits. For example, large to giant MVT deposits contain on the order of 10⁶ to 2 × 10⁷ metric tons combined Zn + Pb (10), with an average Zn:Pb ratio of ∼3.
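The enrichment factors quoted above follow directly from the crustal abundances and ore grades given in the text; a quick back-of-the-envelope check (variable names are mine, values are the article's):

```python
# Enrichment factor = metal concentration in ore / concentration in average crust.
crust_zn_ppm = 70.0       # average continental crust, Zn
crust_pb_ppm = 12.5       # average continental crust, Pb
ore_zn_ppm = 0.06 * 1e6   # 6% Zn by weight = 60,000 ppm
ore_pb_ppm = 0.02 * 1e6   # 2% Pb by weight = 20,000 ppm

enrich_zn = ore_zn_ppm / crust_zn_ppm
enrich_pb = ore_pb_ppm / crust_pb_ppm
print(f"Zn enrichment ≈ {enrich_zn:.0f}, Pb enrichment ≈ {enrich_pb:.0f}")
```

Rounding gives roughly 860 for Zn and exactly 1600 for Pb, matching the "about 850 and 1600" in the text.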

    Garven (11) modeled the fluid-flow history associated with formation of a large MVT Zn-Pb deposit in Pine Point, Canada, and concluded that the total hydrothermal fluid discharge through the mineralized area was 5 × 10⁶ m³ year⁻¹. Garven assumed that 5 mg of zinc precipitated per kilogram of solution that flowed through these rocks and concluded that it would take from 0.5 to 5.0 million years to form the deposits, with Darcy flow rates (12) of 1 to 5 m year⁻¹. Similar durations for ore formation (0.3 million years) have been estimated for the MVT deposits in the Upper Mississippi Valley district of the United States (13).

    Flow rates and duration of the ore-forming process reported by Garven (11) require total hydrothermal fluid volumes ranging from 2500 to 25,000 km³ over the lifetime of the ore-forming system. Similar volumes of fluid would be required to form other large to giant MVT deposits if each kilogram of fluid only precipitates a few milligrams of metal. However, if the metal content of the ore-forming fluid is considerably higher, as suggested by Wilkinson et al., then both the amount of fluid required and the duration of the ore-forming event would be reduced by orders of magnitude (see the figure). For example, if each kilogram of hydrothermal fluid deposited 10³ mg of Zn (orange dot in the figure), then Pine Point and similar deposits could have formed in about 10⁴ years from a few cubic kilometers of hydrothermal fluid, compared to the millions of years and hundreds of cubic kilometers of fluid required assuming that each kilogram of hydrothermal fluid deposited 5 mg of Zn (green dot in the figure).
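The scaling behind these estimates is simple mass balance: the fluid mass needed equals the metal tonnage divided by the metal deposited per kilogram of fluid, and the duration equals that fluid volume divided by the discharge rate. The sketch below reproduces the orders of magnitude using a hypothetical deposit of 10⁷ metric tons of Zn, a fluid density of ∼1000 kg m⁻³, and Garven's discharge of 5 × 10⁶ m³ year⁻¹; the deposit size and density are my assumed round numbers, so the outputs match the text only to order of magnitude.

```python
# Mass-balance sketch: fluid volume and duration needed to deposit a given
# tonnage of Zn, as a function of mg of Zn precipitated per kg of fluid.
ZN_TONNES = 1e7              # assumed deposit size (metric tons of Zn)
ZN_MG = ZN_TONNES * 1e9      # 1 metric ton = 10^9 mg
DISCHARGE_M3_PER_YR = 5e6    # Garven's total hydrothermal discharge
FLUID_KG_PER_M3 = 1000.0     # assumed fluid density

def fluid_needed(zn_mg_per_kg):
    """Return (fluid volume in km^3, duration in years) to deposit ZN_MG of Zn."""
    fluid_kg = ZN_MG / zn_mg_per_kg
    volume_m3 = fluid_kg / FLUID_KG_PER_M3
    return volume_m3 / 1e9, volume_m3 / DISCHARGE_M3_PER_YR

for conc in (5, 1000):  # mg of Zn deposited per kg of fluid
    km3, years = fluid_needed(conc)
    print(f"{conc:5d} mg/kg -> {km3:,.0f} km^3 of fluid over {years:,.0f} years")
```

At 5 mg/kg this gives thousands of cubic kilometers of fluid over hundreds of thousands of years; at 10³ mg/kg, roughly 10 km³ over ∼10³ to 10⁴ years, consistent with the contrast drawn in the figure.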

    How to form an ore deposit.

    This modified “Roedder diagram” (15) shows the relationship between the amount of metal precipitated per unit mass of hydrothermal fluid (y axis) and the size of the ore deposit (x axis). The time (black diagonals) and volume of fluid (red dashed diagonals) required to form the deposit are contoured onto these coordinates. The width of the shaded box represents the range in ore tonnage for large to giant MVT Zn-Pb deposits (10). Wilkinson et al. report zinc concentrations of 5000 ppm and 3000 ppm (dashed lines near the top of the box). These concentrations are higher than previously reported and suggest that economic deposits can form faster than previously suggested (green and orange dots).

    The results presented by Wilkinson et al. further highlight the importance of depositional processes in the formation of economic occurrences of metals. Most ore geologists now agree that fluids with metal contents sufficient to produce economic mineralization are relatively common (14), and that it is the lack of a suitable depositional mechanism that often limits ore formation. Temperature decrease alone cannot be the dominant mechanism, because the solubility of most metals in most hydrothermal fluids decreases by only a small amount over the temperature range determined for most deposits. Thus, other processes—such as boiling or immiscibility, fluid mixing, or fluid-rock interactions—must operate to promote the precipitation of all (or most) of the dissolved metals transported by the hydrothermal fluids. The results presented by Wilkinson et al. provide important new insights into metal contents of ore-forming fluids and emphasize the need for continued research to constrain the amounts of hydrothermal fluids required to form world-class ore deposits and the duration of the ore-forming events.

    References and Notes


    Confined Polymers Crystallize

    1. Piet J. Lemstra*
    1. Eindhoven University of Technology, Den Dolech 2, 5600 MB Eindhoven, Netherlands. E-mail: p.j.lemstra{at}

    Plastics have been very successful in replacing glass, metals, and wood, in part because they are light and easy to process into complex shapes at high speed and at low cost. However, in applications such as packaging, molded plastics can be at a disadvantage compared with steel, aluminum, and glass because of their relatively high permeability to atmospheric gases such as O2 and CO2. This problem arises because the synthetic polymers that are the main component of plastics are rather randomly organized in the solid state, leaving enough space between the molecules for gases to diffuse through. Although the problem can be mitigated by adding less-permeable materials to plastics, ideally the long-chain polymer molecules would be arranged in an orderly way, namely, into crystallites in which the molecules are closely packed. On page 757 of this issue, Wang et al. (1) report that very thin layers of a commonly used polymer crystallize under special processing conditions into so-called polymer single crystals, which is surprising given the known difficulties in getting polymers to form crystals.

    The focus of most polymer research is on functional properties in emerging areas such as biomedical engineering (2), electronics (3), and energy [for example, plastic solar cells (4)], but the vast bulk of synthetic polymers are used in plastics. Packaging materials consume about 40% of the plastics produced. Plastics have surpassed steel in terms of production volume, with about 250 million tons of plastics produced annually worldwide.

    Despite this widespread use, there are packaging applications where the gas permeability of plastics, especially to oxygen, hampers their use, from beer bottles to coatings in advanced electronics. Plastics are quite permeable because their softening temperatures (the glass transition temperature, Tg) are low in comparison with those of silicate glass and much lower than the melt temperatures of metals. For commonly used plastics, Tg ranges from −100° to about 100°C, whereas for silicate glasses, Tg ranges from 500° to above 1000°C.

    Gases or liquids permeate a plastic or glass by first dissolving in it and then diffusing through it. In the simplest theoretical description, the permeability is the product of the gas's solubility and its diffusivity. At room temperature, the diffusivity—the movement of the penetrating molecules—in silicate glass is almost zero because the silicate chains are “frozen” at ambient temperatures far below their Tg. The packing of these chains is not orderly but is still very tight and provides very little empty space and mobility for the penetrating molecules to pass through.
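    The solution-diffusion picture described above can be written as P = S × D. A minimal numeric sketch (the numbers are placeholders chosen only to show that near-zero diffusivity forces near-zero permeability, regardless of solubility):

```python
# Simplest solution-diffusion model: permeability P = S * D.
# All numbers below are illustrative placeholders, not measured values.

def permeability(solubility, diffusivity):
    """P = S * D (arbitrary consistent units)."""
    return solubility * diffusivity

# A polymer above or near Tg: mobile chain segments, appreciable diffusivity.
p_polymer = permeability(solubility=1.0, diffusivity=1e-7)

# Silicate glass far below its Tg: chains frozen, diffusivity near zero.
p_glass = permeability(solubility=1.0, diffusivity=1e-15)

print(p_polymer / p_glass)   # the polymer is many orders of magnitude leakier
```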

    In contrast, the segments making up the polymer chains are quite mobile at room temperature. This movement opens up pathways for low-molar-mass molecules such as O2 and CO2 and, in the case of polar polymers such as nylon, for water. In general, the higher the Tg of a plastic, the better its barrier properties. For that reason, the polyester poly(ethylene terephthalate) (PET), with a Tg of ∼80°C, can be used for carbonated sodas, but it is too permeable to oxygen for storing beer.

    Various options exist to improve the barrier properties of plastic systems, such as applying a metal or ceramic coating or processing alternating layers of the plastic film with deposited ceramic film. More recently, nanoscale clay particles (made from exfoliated single layers of clay) have been dispersed in a plastic film (5). These thin clay layers are impermeable to gases and create a tortuous path for diffusing molecules; the path length for diffusion increases and slows gas exchange.
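    The tortuous-path effect of dispersed platelets can be illustrated with the classic Nielsen model for impermeable platelets aligned in a film, P/P0 = (1 − φ)/(1 + (α/2)φ), where φ is the platelet volume fraction and α the aspect ratio. The choice of model and the numbers below are mine, not from the clay-nanocomposite work cited in the text:

```python
# Nielsen tortuosity model for aligned impermeable platelets in a film:
#   P/P0 = (1 - phi) / (1 + (alpha/2) * phi)
# phi = platelet volume fraction, alpha = platelet aspect ratio.
# Illustrative sketch; parameter values are assumptions.

def nielsen_relative_permeability(phi, aspect_ratio):
    return (1.0 - phi) / (1.0 + 0.5 * aspect_ratio * phi)

# A few volume percent of high-aspect-ratio exfoliated clay already cuts
# the permeability severalfold by lengthening the diffusion path:
for phi in (0.01, 0.03, 0.05):
    print(phi, nielsen_relative_permeability(phi, aspect_ratio=200))
```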

    Another approach, but one that is much more challenging, is to enhance the barrier properties of polymers by getting them to crystallize because polymer crystals are impermeable to gases. However, the long chains of synthetic polymer molecules are entangled with each other, much like cooked spaghetti. How can these long-chain and highly entangled molecules form ordered crystallites?

    The first efforts at polymer crystallization avoided highly entangled melts and started with dilute solutions. In the 1950s, Keller (6), Fischer (7), and Till (8) found independently that linear polyethylene (PE) can form platelet single crystals upon cooling dilute PE solutions (see the figure, panel A). The long-chain molecules are folded in these crystals (9). The fold length, which corresponds to the crystal thickness, is very small, on the order of 10 to 20 nm (see the figure, panel B). The concept of folded-chain crystallization continues to raise enormous interest among polymer physicists, because all polymer-chain molecules with a regular chemical structure form these folded-chain lamellar crystals.

    Packing polymer chains.

    Polymer single crystals can be obtained from very dilute solutions, e.g., polyethylene single crystals as visualized by AFM (A), in which the chains are folded (B) (9). Wang et al. have now succeeded in creating crystallized polymer layers (C) from the melt by extruding alternating thin layers of two different polymers. (D) These single-crystal layers inhibit gas diffusion by creating barriers that make the path that molecules take more tortuous.


    However, well-defined folded-chain crystals, referred to as lamellae, can only be grown from dilute solutions, which is too slow and inefficient for manufacturing. In processed products such as films, containers, and fibers, the polymers that are cooled from the molten state are only partly crystalline. Although there is still a driving force for the long-chain molecules to form folded-chain crystals, this process is hampered because the chain molecules are highly entangled.

    Crystallization from the polymer melt starts from nuclei (catalyst residues or purposely added nucleation agents), and chain-folded crystallites emanate from such nuclei. However, the entangled polymer chains only partly incorporate into the crystallites, creating spherical crystal aggregates called spherulites. Thus, melt-crystallized polymers are semi-crystalline because those chain segments that are not folded into crystallites remain amorphous. Partial crystallization improves the barrier properties to some extent, because the crystallites are not permeable and create a tortuous path for small molecules, just as in the case of the nanoclay platelets.

    Wang et al. now show how poly(ethylene oxide) (PEO)—a crystallizable polymer—can be processed from the melt to form very thin layers consisting of polymer single crystals. The authors use a process in which alternating layers of PEO and poly(ethylene-co-acrylic acid) (EAA) are coextruded. The pressures applied in this process force the long-chain molecules of PEO into their most compact arrangement, that of large folded-chain lamellar crystals (see the figure, panels C and D), while the EAA layers remain amorphous. The permeability of PEO to oxygen decreased by about two orders of magnitude in this crystalline form.

    Showing that large lamellar crystals can be grown from the melt is not only of high academic interest. The multilayer extrusion technique is used to create higher-value products through processing [for example, flexible mirrors (10)]. Crystalline polymers created in this way may in the future serve as barrier layers for semicrystalline polymers used in packaging.


    References and Notes
