News this Week

Science  20 Apr 2007:
Vol. 316, Issue 5823, pp. 350


    Long-Sought Plant Flowering Signal Unmasked, Again

    1. Elizabeth Pennisi

    As elusive as the top quark, the signal that tells plants to flower has befuddled plant biologists for more than a century, spawning many false leads to its identity. Two years ago, researchers created quite a stir with data indicating that this signal was messenger RNA (mRNA) that traveled from the so-called flowering locus T (FT) gene in plant leaves to the growth tip where flowering takes place. But those authors are now retracting that finding (p. 367). Instead, two new reports, published online by Science this week, have fingered the FT protein itself.

    “This is something we have been waiting for a long time,” says J. A. D. Zeevaart, an emeritus plant physiologist at Michigan State University in East Lansing. “These two papers will be classics in the field for years to come,” adds Philip Wigge, a plant biologist at the John Innes Centre in Norwich, U.K. Others, however, think the evidence is not yet conclusive. “They haven't taken the story any further,” says William Lucas, a plant cell biologist at the University of California, Davis.

    This story has its roots in a 1930s study by Russian plant physiologist Mikhail Chailakhyan. Based on grafting experiments, Chailakhyan proposed that when leaves sense the appropriate day length, they send a mobile signal called florigen to the plant's growing tip to initiate flowering. But promising leads led to dead ends, and “florigen [became] the pariah of botany, [akin to] Big Foot or intelligent extraterrestrial life,” says Brian Ayre, a plant biologist at the University of North Texas in Denton.

    In the past decade, researchers armed with molecular tools for manipulating genes and visualizing proteins in live tissue have revived the quest. They pinned down the FT gene, the leaf protein that turns FT on, and a flowering gene that the FT protein controls. Then, in 2005, Tao Huang, a postdoc at the Swedish University of Agricultural Sciences in Umeå, and his colleagues proposed that mRNA was the mobile signal in Arabidopsis, as they saw mRNA from FT build up in both the leaf and the growing tip. They concluded that FT mRNA was produced in the leaf and traveled to the growing tip, where it was translated into the FT protein, which then kicked off flowering (Science, 9 September 2005, p. 1694). This report seemed “an enormously exciting breakthrough,” recalls Colin Turnbull, a plant biologist at Imperial College in Wye, U.K.

    Peripatetic protein.

    In Arabidopsis (top), a leaf protein moves from a flowering graft into a nonflowering mutant, causing a stem and blossoms to form. In rice (bottom), the equivalent protein (green) shows up in the shoot apical meristem.


    But it has not held up. In the 18 April 2006 Proceedings of the National Academy of Sciences, Eliezer Lifschitz of the Technion-Israel Institute of Technology in Haifa reported no sign of mRNA from the FT-equivalent gene in the flowering shoots of tomatoes. And in their retraction notice, Huang's collaborators report that their initial analysis excluded some data and gave extra weight to other data. When they redid the experiments, “we could not detect movement of the transgenic FT mRNA,” says Ove Nilsson, in whose lab Huang did this work. Huang, now at Xiamen University in China, has not agreed to the retraction.

    Turnbull and George Coupland of the Max Planck Institute for Plant Breeding Research in Cologne, Germany, working with Arabidopsis, and another team studying rice, have now proposed that the mobile signal is the FT protein itself rather than mRNA.

    In rice, the equivalent of the FT gene is called Hd3a. Ko Shimamoto of the Nara Institute of Science and Technology in Japan, his student Shojiro Tamaki, and their colleagues first measured Hd3a mRNA in various tissues. They found that in rice grown with short days (rice requires short days to develop flowers), the mRNA increased in leaves but was present only in very low amounts in the shoot apical meristem, the growing tip. Next, they made a transgenic rice strain by joining the gene for green fluorescent protein (GFP) with that for Hd3a, which made any Hd3a protein visible under a confocal laser scanning microscope. They saw the protein in the vascular tissue of the leaf and the upper stem as well as in the core of the growing tip.

    They then attached promoters to the combination GFP/Hd3a gene that caused the genes to turn on in the leaf but not in the growing tip. Flowering still occurred, they report. “The only way [FT] could get there was if it moved,” explains Zeevaart.

    Like Shimamoto, Coupland and Turnbull focused on the FT protein and used GFP to track its fate, this time in Arabidopsis. Laurent Corbesier, a postdoc in Coupland's lab, added the fused FT/GFP gene to a mutant Arabidopsis strain that lacked the FT gene. They observed the protein first in the vascular tissue of the stem, and 4 days later, at the base of the growing tip.

    In another experiment, the team grafted plants carrying the fused gene to mutant plants that could not make FT at all. The FT/GFP protein, but no mRNA, moved across the graft junction and through the mutant plant, they report.

    Finally, when they attached two GFP genes to the FT gene, the resulting protein was too big to travel beyond the leaf—and in those plants, no flowers formed. Thus, the researchers could rule out both the mRNA and a secondary signal activated by FT.

    “The evidence is convincing, especially the grafting experiments,” says Ayre. And, strengthening the case, several other researchers are preparing to publish similar results.

    But not everyone agrees. Lifschitz calls the evidence in both reports “circumstantial.” He, Nilsson, and Miguel Blázquez of the Polytechnic University of Valencia, Spain, point out that neither group tested whether GFP moves through the plant of its own accord. And Lucas doesn't think the authors adequately demonstrated that FT gets into the growing tip from the leaf. For example, in Arabidopsis, one leaf promoter used turns on genes elsewhere in the plant, so it could have turned on FT outside the leaf, Lucas points out. Even Ayre is still cautious. “Florigen has a long history of disappointing people,” he says. “We're getting there, but the race is intense, and we need to keep cool heads.”


    The Looming Oil Crisis Could Arrive Uncomfortably Soon

    1. Richard A. Kerr

    The world's production of oil will peak, everyone agrees. Sometime in the coming decades, the amazing machinery of oil production that doubled world oil output every decade for a century will sputter. Output will stop rising, even as demand continues to grow. The question is when.

    Forecasts of peak oil production have ranged from Thanksgiving weekend 2005 to somewhere beyond 2050. But at the annual meeting of the American Association of Petroleum Geologists (AAPG) in Long Beach, California, early this month, the latest answer emerged: World oil production could stop growing as early as 2020—too soon to avoid a crisis—or it could hold off until 2040. “The peak in world oil production is not imminent,” oil information analyst Richard Nehring of Nehring Associates in Colorado Springs, Colorado, said at the meeting, but it is “nevertheless foreseeable.”

    Predictions of the timing of peak oil have been all over the map (Science, 18 November 2005, p. 1106). So-called peakists favor gauging future production by judging how much oil Earth still holds and how much has already been produced. They come up with a peak in the next few years, certainly before 2020. At the other extreme, major oil companies draw on in-house expertise about how much oil remains and how fast it will be produced. They see no end to rising production as far out as they look, usually not beyond 2030.

    Nehring took a different tack, in two ways. First, he conducted an informal survey of experts by organizing a meeting, a prestigious Hedberg Conference, under the auspices of AAPG last November and inviting 75 experts from 19 countries to consider the world's oil resources. There he pressed them for their best estimates of everything from how much oil might be left to discover to how much might be wrung from existing oil fields and how much might come from unconventional sources such as Canadian tar sands.

    From the meeting's discussions, Nehring came up with low, medium, and high estimates of all the oil likely ever to be produced. But as he said at the meeting, the ultimate resource is not the only constraint. Politics and social unrest can limit how fast those resources can be exploited, as is happening today in Venezuela, Iraq, and Nigeria. And technological challenges, as in the still-icebound Arctic, can slow extraction as well.

    Sooner or later.

    The less oil left to be pumped from the ground, the earlier world production reaches a peak. In a new analysis, only the earliest, low-resource peak looks reliable.


    So for his second innovation, Nehring created three scenarios with successively higher peaks beginning in 2020, 2030, and 2040. He then compared the amount of effort it would take to achieve each scenario with the world oil industry's past performance. “The only scenario we're quite sure of is the low one” producing a 2020 peak, he says. Conference participants were confident that at least the low estimate of ultimate oil resource is actually out there, he says. And the world oil industry has managed to add the needed production capacity as fast as a 2020 peak would require.

    Holding the peak off until 2040, however, would require both a high—and much less certain—total oil resource and adding more production each year than ever before, despite having already produced the world's most easily extractable oil. “We can't behave now like we're going to have the high scenario,” Nehring concludes.

    Nehring is getting some attention but not many converts. “Richard did a good service in holding this Hedberg Conference,” says oil assessment specialist Donald Gautier of the U.S. Geological Survey in Menlo Park, California. But there's so much uncertainty, Gautier says, from when Arctic ice might melt out of the way to when needed new technology can be developed, that predicting the peak may not be worthwhile. A decade or so could tell.


    Astrocytes Secrete Substance That Kills Motor Neurons in ALS

    1. Constance Holden

    Astrocytes—among the glial, or “support,” cells in the central nervous system—may be the primary culprit in the death of motor neurons in at least some cases of amyotrophic lateral sclerosis (ALS), researchers at Harvard and Columbia universities report. The new findings underscore the power of research with embryonic stem (ES) cells to elucidate basic disease processes, the researchers say.

    Bad neighborhood.

    Astrocytes (red) threaten motor neurons (green) in culture with mutant SOD1 gene.


    ALS is an untreatable disease that progressively kills motor neurons. For years, scientists have debated whether the motor neurons themselves are defective, or whether some external factor kills them. Two papers, published online on 15 April in Nature Neuroscience, present new evidence that in some ALS sufferers—those with a mutant gene for superoxide dismutase (SOD1)—the astrocytes emit a toxin that selectively kills motor neurons. ALS researcher Jeffrey Rothstein of Johns Hopkins University in Baltimore, Maryland, says the papers “add weight to a growing body of data that has suggested that astrocytes contribute to ALS.” Roughly 10% of ALS cases are familial, with the rest being labeled “sporadic.” SOD1, the only gene that's so far been linked to the disease, is mutated in about 25% of familial cases.

    In the latest work, a Harvard group led by Kevin Eggan generated ES cells from the blastocysts of mice bred to express the normal human SOD1 gene and from mice with the mutant SOD1 gene. The scientists then coaxed the ES cells to become motor neurons and did some mixing and matching, cultivating the motor neurons and astrocytes with and without the mutation. They found that even normal neurons did badly in the presence of astrocytes with the SOD1 mutation, showing a 50% decrease over 14 days. Reversing the conditions, they found that when mutant motor neurons were surrounded by normal astrocytes, their losses were much less severe.

    Eggan says the work “really validates this song we've been singing: If you can make ES cell lines carrying the genes, you can study diseases in a much more sophisticated way.” He plans to do the work with human ES cell populations containing the DNA from ALS patients.

    The Columbia team, headed by biologist Serge Przedborski, took the findings a step further by demonstrating that astrocytes are solely responsible for this particular toxin, and that it targets only motor neurons. They found that growing motor neurons in a culture in which astrocytes had been grown produced the same deadly results as exposing them to astrocytes directly. Przedborski says that points to a toxic substance emitted by the glial cells. The team ruled out both SOD1 and glutamate—a major troublemaker in brain disease also implicated in ALS—as the toxic substance.

    Steven Goldman, a stem cell researcher at the University of Rochester Medical Center in New York, says the research provides a “very valuable model” for this particular type of ALS. “These are potentially quite exciting papers for introducing new avenues for research,” says Goldman. He says identifying the agents being released by astrocytes, and ultimately seeing “whether this type of mechanism is limited to the SOD1 model,” may help unlock the mysteries of sporadic types of ALS as well.

    The Columbia group plans to define the basic characteristics of the poison and then use that knowledge to screen for new drugs. “For a disease in need of drugs like few others, that would be great,” says Eggan.


    Colliding Clouds May Hone Physical Constants

    1. Mark Anderson*

    *Mark Anderson is a writer in Northampton, Massachusetts.

    A new spin on the atomic clock could yield some of the most precise measurements to date of fundamental physical constants—potentially providing crucial experimental tests of a number of “theories of everything.” Four physicists at Pennsylvania State University (Penn State) in State College have built the prototype of a “quantum scattering interferometer”—a device capable of registering differences in the mass and other properties of atoms with unheard-of sensitivity. The experiment, described this week in Nature, could lead to more-accurate atomic clocks and help physicists study exotic states of matter such as Bose-Einstein condensates and degenerate Fermi gases, says Randall Hulet of Rice University in Houston, Texas. “It's a new high-precision tool for measuring the effects of [atomic] interactions,” Hulet says.

    The researchers start by cooling two clouds of cesium atoms to fractions of a microdegree above absolute zero. One cloud is then put into its lowest-energy state (called the ground state) and sent through a chamber where microwaves excite the atoms into a bizarre state called quantum superposition, in which the outer electron in each atom has a property called spin pointing both upward and downward at the same time. One spin state harbors slightly more energy than the other does. (In atomic clocks, the frequency of light emitted when cesium atoms move between these two “hyperfine” states provides the standard for determining the length of a second.)

    In the Penn State experiment, the researchers bring the two clouds together. As the atoms collide, the two hyperfine states of each cesium atom recoil differently, causing them to interfere with each other in a way that creates a time lag or phase shift between the two states. The collided atoms are sent through a microwave chamber a second time and then are sorted using laser pulses that tally the phase shifts. The results provide a detailed portrait of the collision (see figure). “If you really want to understand atom-atom interactions very precisely, this is the method,” says Kurt Gibble of the Penn State team.

    Mind the gap.

    Time lag caused by interference of specially prepared cesium atoms could boost studies of atomic collisions and physical constants.


    In the future, the scientists hope to further fine-tune the experiment by using magnetic fields to coax the atoms into short-lived connections called Feshbach resonances, which resemble molecular bonds. Because the atoms interact over longer periods, researchers will be able to learn more about their properties. The resonances should enable physicists to apply atomic-clock accuracy both to studies of the collisions and to measurements of fundamental physical constants—such as the ratio of the mass of the electron to the mass of the proton. Some versions of string theory and other grand unification theories predict that such constants will vary slowly over time. If so, Gibble says, an ultrasensitive future version of the quantum scattering interferometer would be able to measure the variations over the span of years.


    Study Questions Antidepressant Risks

    1. Jennifer Couzin

    An analysis of 27 clinical trials of antidepressants in youngsters has found a negligible risk of suicidal thoughts and suicide attempts, with the treated groups showing 0.7% greater risk than participants given a placebo. The study comes more than 2 years after regulatory agencies worldwide warned doctors to take great care in prescribing the drugs to children and teenagers because they might increase “suicidality.” Since then, controversy has grown over whether the risks have been exaggerated.

    The authors of the new study, published this week in the Journal of the American Medical Association (JAMA), undertook their analysis after the U.S. Food and Drug Administration (FDA) slapped a “black box” warning on most antidepressant drugs in late 2004. But that warning didn't take account of the medications' benefits, says Jeffrey Bridge, an epidemiologist who studies teen suicide at Columbus Children's Research Institute in Ohio. Bridge, David Brent of Western Psychiatric Institute and Clinic in Pittsburgh, Pennsylvania, and their colleagues set out to mimic FDA's approach while incorporating some new data and the drugs' reported benefits.

    As FDA did, the JAMA authors examined trials of depression, obsessive-compulsive disorder, and other anxiety disorders, bringing together 5310 subjects. Disorder-specific analyses did not yield statistically significant results, but when all data were pooled, the authors found that those on medication had a small but statistically significant 0.7% increase in the risk of suicidal thinking and behavior—less than the 2% risk FDA cited. The authors also found that, in general, teenagers were helped by many of the drugs tested. Depressed children, as has been previously reported, were helped only by Prozac. “What this shows is that the overall bad effect is very small, and the overall good effect is much bigger,” says Charles Nemeroff, chair of the department of psychiatry and behavioral sciences at Emory University in Atlanta, Georgia.

    Like nearly all so-called meta-analyses, the JAMA study has its drawbacks, researchers agree. The trials it examined were not designed to assess suicidal behavior and excluded individuals who were suicidal.

    Although these drugs may pose a risk to some, says Kelly Posner, a child psychiatry researcher at Columbia University who assisted FDA in its safety analysis, we already know that “untreated depression is what kills people.” Posner and others are convinced that youth suicide data now emerging—a 14% increase in the United States in 2004 and a staggering 49% increase in the Netherlands—show what happens when antidepressant drug use declines.


    Iraq Mortality Study Authors Release Data, but Only to Some

    1. Jocelyn Kaiser*

    *With reporting by John Bohannon.

    A war's toll.

    The release of study data hasn't calmed a debate over the number of violence-related deaths in Iraq.


    The authors of a controversial study on conflict-related deaths in Iraq are seeking to defuse criticism by releasing their raw data. But the move has hardly settled the debate. Critics say the authors have withheld key details needed to check the study. And some are outraged by the conditions set for who can have the data, including the requestor's “objectivity.”

    The paper, published in The Lancet last October by a U.S. and Iraqi team, estimated that 655,000 more people have died than normally would have since the March 2003 U.S. invasion—more than 10 times any official estimate. The authors got this result by extrapolating from mortality data collected through door-to-door surveys. Other academics have questioned aspects of the study, from whether the interviews could have been done as quickly as claimed, to whether the results were inflated by surveying only households near main streets vulnerable to bombs and shootings (Science, 20 October 2006, p. 396).

    Lead author Gilbert Burnham's team at Johns Hopkins University in Baltimore, Maryland, had resisted calls to release the raw data, citing possible danger to the Iraqi interviewers and the survey participants. Earlier this month, however, Burnham and his team posted a note on their Web site saying they would release a data set stripped of information that might reveal identities—but only to qualified scientific groups. Such groups must have expertise in biostatistics and epidemiology, the note says, and must also be “without publicly stated views that would cause doubt about their objectivity in analyzing the data.” The Hopkins team says several groups have received the data.

    But at least one researcher has been turned down: Michael Spagat, an economist and expert on conflict studies at Royal Holloway, University of London, in Egham, who has been a proponent of the street-bias idea. Burnham, e-mailing from Jordan, declined to explain which criteria Spagat did not meet; co-author Les Roberts, now at Columbia University, says he wasn't involved in the decision but that Spagat “would not meet the criteria by multiple measures.”

    Spagat calls the policy “deeply flawed,” adding, “If we do something dumb or nonobjective with the data, qualified people should be able to expose our stupidity.” The decision also puzzles David Kane, a fellow at the Harvard Institute for Quantitative Social Science who has received the data set even though he says he posted comments on a Web log last fall that raised the possibility of fraud. Denying some critics access “is ridiculous,” Kane adds.

    One epidemiologist apart from the fray agrees that the conditions are unusual: “I am wary of trying to limit access based on the predilections of those requesting it,” says David Savitz of Mount Sinai School of Medicine in New York City. But Allen Wilcox, editor-in-chief of Epidemiology, defends the conditions set by the Hopkins group: “I can hardly blame [them] for being cautious in this case,” says Wilcox, because the topic is so politically charged.

    Others are concerned that the group's decision to withhold information such as main street names and the sampling protocol has made it impossible to detect street bias or other potential problems. More details on the interviews “are necessary if the authors are to lay to rest intimations of 'fabricated' data,” says Madelyn Hicks, a psychiatrist and public health researcher at King's College London. Burnham says his group “envision[s] no additional release of materials.”


    Boom and Bust

    1. Jennifer Couzin,
    2. Greg Miller

    Biomedical facilities are expanding after a growth spurt in the NIH budget. Yet individual scientists say that it's harder than before to get their work funded.

    Rising expectations.

    Like many, the University of Wisconsin, Madison, is investing in the biosciences.


    Kurt Svoboda knew from day one that science meant sacrifices: pulling long hours, sweating over the data, and starving his personal life for professional success. What the young assistant professor at Louisiana State University (LSU) in Baton Rouge didn't realize was that time spent on grant proposals, not groundbreaking experiments, would be the big stressor.

    Svoboda, who turned 40 in January, wants to study how nicotine affects brain development in zebrafish, as a model of fetal exposure in women who smoke during pregnancy. Last summer, his tenure review was little more than a year away, but he had yet to secure an unspoken prerequisite: a federal grant. It wasn't for lack of trying. Since landing his job at LSU in 2002, Svoboda has been on a continuous hunt for money, five times submitting or resubmitting major applications, along with two smaller ones. Last summer, he was down to his third and final chance for approval of a revised R01, the National Institutes of Health's (NIH's) bread-and-butter grant. Without one, Svoboda suspected he would lose his job.

    He and his two graduate students worked 12-hour days cranking out data they hoped would sway the reviewers. The final version of the application contained 27 data figures, Svoboda says. “I've never seen a person work so hard in my life,” says Svoboda's LSU colleague, neurobiologist John Caprio.

    From California to Louisiana to Massachusetts, thousands of biomedical researchers find themselves in a similar financial bind. This year will be the fourth in a row that the budget of NIH, the wealthiest research agency in the world, has not kept pace with biomedical inflation. The slowdown is one of the longest veteran scientists can recall.

    To make matters worse, there may not be much sympathy for these scientists on the outside. Biomedicine is considered rich and well-protected territory. Indeed, today's crunch comes after a heady 5 years of growth, as the NIH budget doubled between 1998 and 2003. But people on the inside know the money crunch is real. “You're almost in a different movie now,” says Anthony Fauci, who heads the National Institute of Allergy and Infectious Diseases (NIAID), one of the largest of NIH's 27 institutes and centers. The percentage of research proposals funded by NIH has dropped from 32% in 2001 to a projected 21% this year. Funded grants are routinely cut by 10% or more.


    Dozens of investigators interviewed by Science, along with six NIH institute directors and agency head Elias Zerhouni, describe a climate in which young scientists struggle to launch their careers and even the most senior are trimming their research projects. Harold Varmus, a Nobel Prize winner who led NIH from 1993 to 1999 and is now head of the Memorial Sloan-Kettering Cancer Center in New York City, has had his grant cut, although he won't say by how much.

    Still, with a budget totaling $29.2 billion in 2007, NIH is hardly a pauper. What brought biomedical research to this place of financial anxiety? The doubling flooded NIH with billions more dollars over a relatively brief time. Whereas a private corporation might conserve some of this windfall, by law NIH must spend nearly all the money it receives the year it receives it. That provoked a massive expansion in biomedical research, and expectations of federal support surged to a level that could not be sustained when the budget stopped growing. The crash is hitting labs, careers, and the psyches of scientists with a vengeance.

    The big bubble

    Nine years ago, Congress set out to double the NIH budget and within 5 years sent it soaring from $13.7 billion to $27.1 billion. But everyone knew the golden days would not last. In October 2000, eight senior scientists and policymakers began meeting informally to discuss how to maintain the momentum. In 2002, the group published a commentary in Science presenting different budget models and their impacts on research priorities (Science, 24 May 2002, p. 1401). Its most pessimistic prediction modeled annual increases of 4%. Says David Korn, a former Stanford University dean now at the Association of American Medical Colleges (AAMC) in Washington, D.C., who helped bring the group together: “We didn't model increases below 4% a year because the tradeoffs and the sacrifices that would have been caused … were too difficult for us to deal with in the model.”

    At NIH, senior officials found that “no matter what, there will be pain after the doubling,” says Zerhouni, who became NIH director in 2002. To soften the blow, in 2002 and 2003, NIH tried to accelerate the pace of one-time expenditures such as construction, to free up money for the following years. But even “in the worst scenarios, people really didn't think that the NIH budget would go below inflation,” says Zerhouni, an outcome he attributes to the 9/11 attacks, the wars in Afghanistan and Iraq, and Hurricane Katrina.

    Meanwhile, research institutions everywhere were breaking ground on new facilities and expanding their faculty. In a 2002 survey, AAMC found that new construction at medical schools had exploded: From 1990 to 1997, schools invested $2.2 billion in new construction, compared to $3.9 billion from 1998 to 2002. But that paled in comparison to what was to come: an expected $7.4 billion in new construction from 2002 to 2007. AAMC has not yet confirmed whether these plans were carried out.

    Schools hired new faculty members to fill the buildings, expecting to recoup their investments from the NIH grants investigators would haul in. “Universities and their leadership did what I would have done too,” says Zerhouni. “The government is indicating support for these activities,” and the expansion was “exactly what Congress intended.”

    This appears to have helped drive more applicants to NIH. In 1998, fewer than 20,000 scientists sought research grants from the agency; in 2006, that number was more than 33,000, and according to NIH forecasts, the number of applicants is expected to top 35,000 in 2007. The number of applications has grown at an even faster clip, as scientists, concerned about their chance of getting funded, are submitting proposals more frequently. Because growth at medical schools lagged somewhat behind the doubling, many institutions are still expanding. At Sloan-Kettering, for example, officials only recently began filling a new building with scientists. They expect to increase their faculty by almost 50%, says Varmus.


    But as requests for NIH money edged upward, NIH's resources began to drop. After a 16% increase in 2003, the final year of the doubling, NIH received a 3% boost in 2004, an abrupt reversal of fortune. Although the general rate of inflation in 2006 was 3.1%, according to the U.S. Department of Commerce, the cost of goods and services in biomedical research and development rose 4.5%. The number of competing grants NIH funded peaked in 2003 and has been dropping since. The declining value of NIH's dollars and rising demand were “a perfect double whammy,” says Zerhouni.

    Yet the numbers fail to convey the gnawing unease and foreboding expressed by scientists across disciplines and at every stage of their careers. “The ripple effect here is amazing and paralyzing,” says Steven Dowdy, a cancer biologist at the University of California, San Diego. At Brown University, molecular cell biologist Susan Gerbi, who helps oversee graduate training, canvassed 49 faculty members in eight departments recently, as she does every year, to see how many would take on a graduate student from next year's pack. “In the past, it was a majority,” around 90% of those who responded, she says. “This year, only about 25% of the trainers said they would be interested … because they did not have a guarantee of funding for next fall.”


    “What's chilling” about the drought is that “we're getting into years 3 and 4 with no end in sight,” says Edward Benz Jr., the president and CEO of Dana-Farber Cancer Institute in Boston. Many researchers noted that the pressures on the federal budget, including the war in Iraq, leave Congress little room to expand or even stabilize other programs.

    Crunched for cash

    Alan Schneyer, a 52-year-old reproductive endocrinologist who has spent his career at Massachusetts General Hospital in Boston, at first wasn't too concerned when his grant application failed to make the cut. Like Svoboda and all scientists applying for R01s, Schneyer is allowed to submit two revisions of an application in hopes of persuading reviewers to give a fundable score. Schneyer's work had recently taken an unexpected turn. After a decade of studying how two proteins affect reproduction, he had eliminated them in mice and hit upon a surprising result: The animals had superior glucose tolerance and an abundance of pancreatic cells that make insulin. “If you're trying to treat diabetes, this would be perfect,” says Schneyer.

    Schneyer says he went through a “several-years learning curve” transitioning into the diabetes field. In March, he learned that, on his third and last chance, he'd missed this year's funding cutoff by 3%.

    After 3 years of trying for NIH grants and failing, including two unsuccessful attempts to renew a second R01, Schneyer has watched his lab shrink from six people to one—himself. Although the National Institute of Diabetes and Digestive and Kidney Diseases may yet make an exception and fund him, he's not taking any chances. Next month, Schneyer will begin a job that doesn't rely as heavily on NIH money, working at Pioneer Valley Life Sciences Institute in Springfield, Massachusetts, and commuting from Boston 3 hours each day. “It's very real that you won't get a grant, even with an idea and 20 years of experience and a mouse that indicates treatment for diabetes,” he says.


    NIH officials say they're hearing from many scientists who, like Schneyer, can't keep their labs running. “I get phone calls from people saying, 'I'm letting my people go, what do I do now?'” says Michael Oberdorfer, a program director at the National Eye Institute. “I feel like I'm doing a lot of social work.”

    Compounding the problem is that most universities and medical institutions rely on NIH money for the bulk of scientists' salaries and overhead costs and are not set up to support faculty members long-term. Traditionally, “bridge funding” could tide researchers over for a few months. But now, more scientists than ever are having to resubmit grant applications, with gaps of 8 months or more between submissions. At NIAID, the percentage of proposals funded on the first try has gone from 27% in 2001 to 11% in 2006.

    On the hunt.

    Submitting grant applications takes more and more time, says neurobiologist Kurt Svoboda.


    Some schools are beefing up their bridge funding. Dana-Farber, for example, is setting aside $3 million to $4 million this year. Historically, the institute reserved $500,000 to $1 million “and almost never spent it,” says Benz, Dana-Farber's CEO.

    Even “the senior investigator is turning out to be a challenge for us to support,” Benz continues. Funded grants are subject to cuts—24% on average at the National Cancer Institute (NCI), for example, and 18% at the National Institute on Aging. In each of the last 2 years, says Benz, two or three Dana-Farber labs have found themselves hundreds of thousands of dollars short of what they say they need to keep running smoothly—money that Dana-Farber has kicked in. “The full impact hasn't been felt because institutions have provided a buffer, but the funds that provide for that buffer are disappearing,” he says.

    At Brown, Gerbi is running out of money and will submit her second shot at her R01 in July. In desperation, she's cast a wide net in her quest for funds, applying everywhere from disease foundations to the Department of Defense, and has about 10 grant applications pending inside and outside NIH. In the meantime, she says, the university offered her some money that will run out this summer—but only because she recently underwent treatment for breast cancer and an exception was made. If grant money doesn't come through within months, she says, she'll be closing the stock center she keeps of a rare fly species named Sciara and, in the worst case scenario, her lab as well.

    Many scientists complain that the tough funding climate is exacerbated by an excessive focus at NIH on costly “big science,” such as the Cancer Genome Atlas, which is using large-scale genetic sequencing to decipher the molecular basis of cancer and whose 3-year pilot phase is budgeted at $100 million. Projects like this one, many scientists say, are coming at the expense of grants that sustain individual labs and have been the source of much innovation over the years.

    Zerhouni denies that R01s are a lower priority than they used to be. “I hear that just the way you do, but the numbers don't bear that out,” he says. Still, centers have grown somewhat. In 1998, they made up 8% of the budget, compared with 10% in the 2008 budget request filed in February. The extramural grants budget devoted to R01s and other individual grants has dipped slightly, from 81% in 1998 to 78% in 2006. But it's not clear that this is compounding the troubles of individual researchers, says NCI Director John Niederhuber. Many big team initiatives at NCI, he says, are funding some of the same researchers who would be applying for R01s.


    “It would be ridiculous to think we should be doing things today in 2007 exactly the way we did in 1970,” says Niederhuber. “Science has changed tremendously.”

    A strain on the young

    Nowhere does the funding gap seem wider than when looked at through the lens of age. “It's just about inconceivable for a brand-new investigator to get an NIH grant funded on their first submission these days,” says David Sweatt, chair of the neurobiology department at the University of Alabama, Birmingham. Sweatt has hired three young scientists in the past year and worries about their future. “I see it as this dark shadow hanging over people who are just starting out their labs,” he says. “They're having to spend so much time being anxious over funding, to the detriment of having time to think creatively about their research.”

    At Vanderbilt University in Nashville, Tennessee, pediatric infectious disease specialist John Williams recently learned that his first R01 application failed to make the cut on its second try. Although NIH institutes generally give new investigators a bonus by increasing the pay line—from 12% to 14% at NIAID, for example, or from 12% to 18% at NCI—that only helps if the grant isn't among the 40% or so that are “triaged”—set aside and left unscored by evaluators. This happened to Williams, who left a better-paying job as an emergency-room doctor and now studies a respiratory virus that strikes children. “We love science, and we want to do research,” he says of physicians like himself, “but we have to eat.” Williams, who has four children of his own, appreciates that NIH is trying to help. But “the efforts being made don't seem to be very effective,” he says.

    As it happens, in 2006 first-time applicants actually had a better shot than established ones at scoring an R01 or equivalent grant on their first try: 9% compared to 7%, according to the NIH director's office. But NIH officials are increasingly worried about mid-career scientists, those seeking to renew their first or second grant. “That's probably where the pain is maximal,” says Zerhouni. Last month, NIH announced the Director's Bridge Awards, promising up to $500,000 to tide over selected researchers for a year who just miss the funding cutoff. But such assistance is not the same as landing a grant, which is “a big element in the tenure decision,” says Leonard Zwelling, vice president for research administration at M. D. Anderson Cancer Center in Houston, Texas.


    Biologist Susan Gerbi of Brown University worries about losing her stocks of Sciara, a rare fly.


    Researchers early in their careers are eyeing the situation and wondering whether they should even try to ride it out. “We're losing some very talented people who are deciding that there's not enough stability here,” says Thomas Insel, director of the National Institute of Mental Health.

    One is Jerome Rekart, who was a postdoctoral fellow at the Massachusetts Institute of Technology (MIT) in Cambridge until he left last July for a teaching job at Rivier College, a liberal arts school in Nashua, New Hampshire. Concerns about funding were a major factor in his decision, Rekart says. Throughout his graduate school career and the first year of his postdoc at MIT, he'd seen mentors and respected scientists constantly fretting about their funding. “That was scary to me,” he says.

    Rekart's adviser, MIT neuroscientist Martha Constantine-Paton, says he came to her lab with outstanding recommendations and a number of publications. “This is not the kind of person you'd want to lose,” she says.

    NIH apparently felt the same way, awarding Rekart a training grant last year. By then he had already decided to move on, and he has no regrets. “I can control how well I teach,” he says. “I don't have control over the NIH budget and how many pieces of the pie are available and whether or not I can get one.”

    While some senior scientists predict that a generation of younger ones will disappear, others reject such grim forecasts. “There is a sense of unease,” says Varmus, but “I don't think we're losing young people outright yet.”

    Looking ahead

    NIH officials are struggling, meanwhile, to keep the percentage of proposals funded from sinking even lower. But with 77% of the research budget tied up in ongoing projects, including grants that last several years, institute directors and Zerhouni say they have relatively little leverage. Many institutes have canceled or delayed programs, particularly costly clinical ones (Science, 2 March, p. 1202). At the National Heart, Lung, and Blood Institute, Director Elizabeth Nabel says she has put off funding a new hypertension trial, and at NIAID, Fauci says he freed up $15 million by delaying further development of an Ebola vaccine.



    At the same time, “we can't not have any new initiatives,” says Fauci. “Science moves in a way that you have to push it sometimes.”

    Today's anxiety appears to reflect what happens when funding patterns shift abruptly. “Once you've expanded the [research] capacity, what are you going to do?” asks Nabel. “You just can't turn off the spigot quickly and keep the engine running at full gear.” Still, she adds, decisions about how to prioritize biomedical research “are really made at a societal level.”

    In hopes of influencing those decisions, scientists, convinced that an increase in NIH's budget will mark an improvement in their fortunes, are lobbying Congress heavily. Some are also considering alternative funding sources. “The real question to me,” says Mary Hendrix, president and scientific director of Children's Memorial Research Center at Northwestern University in Chicago, Illinois, “is how to reconstitute the lost funding without relying on government support.” Hendrix, a member of the leadership group that met years ago to look beyond the NIH doubling, favors boosting public-private partnerships and working more closely with disease foundations. States, for example, are increasingly stepping in to finance human embryonic stem-cell research.

    But ultimately, Congress will set the pace, and there are glimmers that next year's budget may reverse the years of flat funding. Although President George W. Bush has recommended cutting the 2008 NIH budget by about $500 million, Congress has suggested it may push the other way.

    In February, Svoboda at LSU learned that on his third try, and to his great relief, his grant would win federal money. He scored in the 17th percentile and will be recommended for funding at the National Institute of Environmental Health Sciences.

    As a result, Svoboda is feeling a lot more optimistic about his tenure review this fall, although the experience left him disillusioned. “It's wiped out my personal life,” he says. “It's just been a brutal process.” Now he has a few years before he'll need to compete for funding again.


    Peer Review Under Stress

    1. Greg Miller,
    2. Jennifer Couzin

    With biomedical grant applications at an all-time high, competition is putting a strain on the system that picks the winners. After serving on recent study sections—as judging panels are known at the National Institutes of Health (NIH)—many researchers describe the stress and frustration of having to make arbitrary distinctions. “You're on the airplane flying home thinking 'We were really flipping coins on those last three grants,'” says Michael Mauk, a neuroscientist at the University of Texas, Austin, who chairs a study section on learning and memory. “The problem when pay lines get this low is that we're not cutting out the fat, we're cutting into the meat.”

    Although the system still works fairly well overall, says David Perkel, a neuroscientist at the University of Washington, Seattle, and a member of Mauk's study section, it doesn't have the resolution to precisely rank the top applications in a given pool. Yet as NIH pushes against a no-growth budget, small distinctions among the best proposals matter more and more. “It's possible to get to the top 20% and do it in a deliberative way,” Perkel says. “But what I think is really problematic is subdividing that top fifth.” For example, Perkel has seen what he calls “figure-skating effects.” As in the Olympic finals, “the first skater never gets a 10” because the judges want to leave room for still-more-impressive performances, Perkel says. Similarly, in his study section, the best scores almost always come later in the meeting, Perkel says: “Because there's this score creep during the day, it introduces some inequities.”

    There's another unspoken rule, researchers say: grant applications being submitted for the third and final time often receive preferential treatment. (NIH only allows three tries.) Some reviewers admit to feeling torn between a desire to toss a lifeline to a vulnerable colleague and the obligation to score grant proposals strictly according to merit. “I think reviewers are very aware that there's a lot at stake here,” says Michael Oberdorfer, a program director at the National Eye Institute. “You're talking about the survival of a lab.” The success rates for bread-and-butter grants known as R01s tend to increase with each submission. Across NIH, the success rate (the number of funded grants divided by applications) in 2006 jumped from 8% for first-time R01 applications to 28% for second submissions and to 47% for third attempts. In 1998, in contrast, the difference between first and third application success rates was smaller: 21% and 41%.

    Researchers want the best and brightest of their colleagues reviewing their grant proposals. But stocking study sections with well-qualified scientists has gotten harder recently. Grant applications to NIH are projected to increase 65% between 2002 and 2007, and in response to pleas from overloaded researchers, NIH has cut the number of proposals each reviewer is asked to evaluate. As a result, the pool of reviewers has nearly doubled in the past few years, and “the quality is not always there,” concedes Antonio Scarpa, director of NIH's Center for Scientific Review (CSR), which coordinates the grant review process. Moreover, many top scientists, under pressure to keep their own labs productive, decline to serve on study sections.

    The lack of experienced scientists on review panels is a long-standing issue that's been accentuated by the current funding squeeze, says Edward Kravitz, a neuroscientist at Harvard Medical School in Boston. He says he's the recent victim of a poorly informed review. Although his group has published four papers in high-profile journals since November on the genetic basis of aggressive behavior in fruit flies, this month a proposal to continue this work was “triaged”—rejected without being discussed by the full study section. (CSR has long told study sections that about 50% of grant applications should be triaged to save time for discussing the best applications, but the rule has been more strictly enforced in the last year or two, Scarpa says.) “There's something very wrong here,” says John Hildebrand, a neuroscientist at the University of Arizona in Tucson, who describes Kravitz's fly research as “utterly novel and creative.”


    Small distinctions loom large when grants are under review.


    Scarpa says CSR has several new initiatives to recruit qualified reviewers, including soliciting recommendations from scientific societies, tapping more researchers from outside the United States, and testing Web-based reviews that don't require researchers to travel to NIH headquarters in Bethesda, Maryland. Three senior scientists, members of the National Academy of Sciences, recently agreed to serve on such online study sections because they wouldn't have to travel from California, Scarpa says. He sees this as an encouraging sign. “The review is as good as the reviewers are,” Scarpa says. “So that is one of our first priorities.”


    Polio: No Cheap Way Out

    1. Leslie Roberts

    Some experts have proposed abandoning efforts to eradicate the virus in favor of controlling it, but a new analysis says that could be more costly in the long run

    Back on track?

    Still recovering from its 2003 boycott of the polio vaccine, the Nigerian state of Kano has resumed massive immunization campaigns.


    Last year, several public health experts broached a heretical idea: Maybe it is time to give up on eradicating polio. Instead of trying to wipe out the disease, perhaps it makes more sense to settle for controlling it and use the money that would be saved for more pressing health problems, such as malaria or HIV/AIDS. The idea quickly gained advocates, but details of what a control strategy might entail—and what it would cost—have been lacking.

    Now, researchers at Harvard School of Public Health in Boston have developed a mathematical model to explore the options. Specifically, they analyze almost a dozen control scenarios—from aggressive efforts to keep the number of polio cases very low to more minimal interventions that would accept a higher disease burden—and then examine the tradeoffs in terms of costs versus polio cases for each.

    Eradication, assuming it is feasible, might yet cost billions to achieve. But it would still be cheaper than any control option—even those that would let cases soar to 200,000 a year—the team reports. “It is not intuitive. Control does not appear to offer any financial savings,” says Kimberly Thompson of Harvard, who led the effort with postdoc Radboud J. Duintjer Tebbens. Their results were published online last week by The Lancet.

    “Eradication is the best buy. It makes not just economic but humanitarian sense,” says Bruce Aylward, who directs the global polio-eradication initiative from the World Health Organization (WHO) in Geneva, Switzerland, and is clearly delighted with the new analysis. Thompson's work is “a nail in the coffin for the idea that there is a cheap and painless way out.”

    Even critics of the campaign have been loath to throw in the towel. Polio is one of a very small number of diseases in the world that are even candidates for eradication, because it doesn't have an animal reservoir, and relatively cheap and effective vaccines are available. It's also hard to argue with the success of a program that has slashed cases by 99%, from about 350,000 in 1988 to about 2000 in 2006.

    But 20 years and $5 billion later, that last 1% remains elusive. After dropping to an all-time low of 483 in 2001, global cases have hovered around 1300 a year for the past 7 years, and polio retains a tenacious hold in four countries—India, Nigeria, Afghanistan, and Pakistan—from which it periodically erupts and reinfects polio-free countries. Between 2003 and 2006, the program spent $450 million battling outbreaks in 25 countries.

    Most experts agree that eradication is technically feasible, given unlimited resources. But the program has been sidelined by budget shortfalls, rumors about vaccine safety, religious and political opposition, conflict, and the technical challenge of trying to vaccinate every child on the planet multiple times. As deadlines have come and gone, fatigue and a perception of diminishing returns have grown, and it has become increasingly tough to raise money. Right now, says Aylward, the program is facing its most critical funding shortfall ever.

    Given these setbacks, it's not surprising that people are questioning whether eradication makes sense, concedes Stephen Cochi of the global immunization program at the U.S. Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia. One of the most prominent calls for a reassessment came in an article in Science last year by Isao Arita, a veteran of both polio and smallpox eradication campaigns, and colleagues. “We believe the time has come for the global strategy for polio to be shifted from 'eradication' to 'effective control,'” they wrote (Science, 12 May 2006, p. 852).

    Skeptical that a low-cost, low-case option existed, Thompson and Duintjer Tebbens set out to investigate, using funds from the Harvard Kids Risk Project; no support for this study came from WHO, CDC, or other partners in the eradication initiative, Thompson says.

    First, they looked at what would happen if endemic countries were to scale back on immunization: To what extent, and how quickly, would polio cases rebound? They used a model that factors in variables such as the basic reproductive number of the virus, the level of population immunity, and outbreak risks, plugging in real-life data from Uttar Pradesh and Bihar, two states in northern India where polio transmission remains intense despite extensive vaccination.

    The model confirmed what the experience of the past few years has shown, says Thompson: “If you back off, you will likely see huge outbreaks.” Just a 10% decline in immunization intensity resulted in more than 5000 additional expected cases a year in these two Indian states alone.
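    The rebound dynamics such models capture rest on the standard epidemic-threshold relation: the effective reproductive number is the basic reproductive number scaled by the susceptible fraction, and transmission takes off whenever it exceeds 1. A minimal sketch of that arithmetic (the R0 value here is illustrative, not a number from the Harvard study):

```python
def r_effective(r0, immune_fraction):
    """Effective reproductive number when a fraction of the population is immune."""
    return r0 * (1.0 - immune_fraction)

def critical_immunity(r0):
    """Immune fraction needed to hold R_eff below 1 (the herd-immunity threshold)."""
    return 1.0 - 1.0 / r0

# Illustrative R0 for poliovirus in a dense, poorly sanitized setting (assumption)
r0 = 6.0
print(critical_immunity(r0))          # immunity needed to block transmission
print(r_effective(r0, 0.90) < 1.0)    # at 90% immunity, no sustained spread
print(r_effective(r0, 0.80) < 1.0)    # at 80%, outbreaks become possible
```

With an R0 of 6, even a modest slip in immunization coverage—from 90% toward 80%—crosses the threshold, which is consistent with the model's finding that a 10% decline in intensity produces thousands of extra cases.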

    Then they explored the tradeoffs of various control and eradication scenarios for the world's low-income countries, where the biggest burden of cases would likely occur. (Polio is not an issue in wealthy countries where routine vaccination rates are high.)

    Control means, in essence, tolerating a certain number of polio cases a year, but there is no agreement on what an “acceptable” amount might be. Arita, for instance, had proposed keeping polio cases to fewer than 500 a year.

    Thompson and Duintjer Tebbens decided to start with a number that essentially reflects where we are today, about 1300 polio cases a year. We know what it takes to keep cases at that rate, says Thompson: about $680 million a year. About $400 million of that covers the costs of surveillance, routine polio immunization, and two rounds of supplemental immunization in all low-income countries. Another $280 million goes for four additional immunization rounds in endemic areas. Thompson and Duintjer Tebbens defined this option as “aggressive” or “very high” control. Playing it out over 20 years, as control efforts would continue indefinitely, led to a whopping cost of $10 billion.

    They then fiddled with the variables to come up with less-aggressive control scenarios. Under the bare-bones option—routine immunization with no supplemental rounds and no outbreak response—global cases quickly soared to 200,000 per year. Something in the middle—say, two supplemental rounds every 3 years and a moderately aggressive emergency response—might keep cases to 46,000 a year and cost almost $8 billion over 20 years.

    Control essentially means “low cost and high cases, or high cost and low cases, or something in between,” says Thompson. She cautions that the numbers are projected values and should not be taken as gospel. “It's a big model. There are a lot of assumptions,” she says. “All models can be criticized,” concedes John Sterman, a fellow modeler at the Massachusetts Institute of Technology's Sloan School of Management in Cambridge. But this is “the best analysis out there. I don't know of any other analysis as detailed and carefully grounded in the data.”

    Hands down, the winner in terms of cases and costs is eradication, says Thompson. The team modeled four widely discussed “posteradication” options: stopping routine immunization after transmission of the wild virus has been halted, continuing immunization with oral poliovirus vaccine (OPV), continuing with OPV and supplemental rounds, and switching to the more expensive inactivated poliovirus vaccine. These scenarios ranged from roughly $80 million to $5.5 billion over 20 years, not including the cost to achieve eradication in the first place, which Aylward estimates could be another $2 billion.


    The poliovirus continues to circulate in Uttar Pradesh, India, despite the highest vaccination rates in the world.


    The bottom line, says CDC's Cochi, is “there is not a viable control option. It's an illusion that you can take your foot off the gas pedal and ease up and it will be much less costly and there won't be a huge resurgence.” He adds that this analysis shows “we need to intensify efforts, not slack off. We need to get the money and commitment and finish.”

    The worst possible option, which looks something like what is happening today, is what Thompson calls “wavering commitment.” As she describes it, when cases drop to low levels as they did in 2001, people may feel they are spending too much on polio and turn their attention elsewhere. Cases resurge, and the world rushes back in. “It's expensive, and we have more kids paralyzed,” she says. “If we do it repeatedly, we pay more money than if we just finished it off.”


    Moderate Success for New Polio Vaccine

    1. Leslie Roberts

    The first data from the field are in on a new weapon in the campaign to eradicate polio: monovalent oral poliovirus vaccine (mOPV), introduced in India in 2005 in the hope of stopping the most intransigent chains of transmission. The good news is that the vaccine seems to be three times more effective than the trivalent version that has been the mainstay of the polio eradication initiative. But the bad news is that even the new one is still a pretty blunt instrument.

    Without question, Albert Sabin's OPV, a live vaccine made from three attenuated strains of poliovirus, has worked wonders, leading to the virtual disappearance of polio from all but a few corners of the world (see main text). The virus is hanging on in northern Nigeria, where an antivaccine boycott in 2003 led to a huge resurgence of cases. Along the rugged border of Pakistan and Afghanistan, insecurity and rumors are hampering vaccinators' access to children. In India, crowded, unhygienic conditions seem to be pushing the biological limits of the vaccine.

    It has long been known that OPV isn't as effective in poor tropical settings, where population density and inadequate sanitation combine to fuel viral transmission. But it wasn't clear exactly how bad things were until last year, when Nicholas Grassly of Imperial College London and colleagues reported a mere 11% efficacy per dose in the states of Uttar Pradesh and Bihar (Science, 17 November 2006, p. 1150). No wonder cases of polio are occurring there even in children who have received as many as 10 doses of OPV, Grassly says.

    A 2006 outbreak in Uttar Pradesh provided a chance to assess the efficacy of the new vaccine on the ground. Grassly and colleagues scoured data from India's National Polio Surveillance Project to identify children who had developed type 1 poliomyelitis, with onset of paralysis between 1997 and 2006. Using admittedly indirect measures, they then compared how many doses of vaccine the 2076 cases and age-matched controls had received. They relied on parent recall, validated with government data on how many immunization rounds had been conducted in each area, and which vaccine, monovalent or trivalent, had been used. “It's not perfect,” says Grassly of the methodology, “but it seems fairly reliable.”


    They found that the efficacy per dose of mOPV was 30%, three times that of the trivalent vaccine (tOPV) in the same setting. “It is lower than you would expect and lower than you would like, but it is moving in the right direction,” says Grassly, lead author on a study published online last week in The Lancet.

    Paul Fine of the London School of Hygiene and Tropical Medicine agrees that the findings show that “mOPV is a better tool for type 1 polio. We would expect that.” But is it good enough to stop transmission? “We don't know,” says Fine.


    European Skin Turned Pale Only Recently, Gene Suggests

    1. Ann Gibbons


    Researchers have disagreed for decades about an issue that is only skin-deep: How quickly did the first modern humans who swept into Europe acquire pale skin? Now a new report on the evolution of a gene for skin color suggests that Europeans lightened up quite recently, perhaps only 6000 to 12,000 years ago. This contradicts a long-standing hypothesis that modern humans in Europe grew paler about 40,000 years ago, as soon as they migrated into northern latitudes. Under darker skies, pale skin absorbs more sunlight than dark skin, allowing ultraviolet rays to produce more vitamin D for bone growth and calcium absorption. “The [evolution of] light skin occurred long after the arrival of modern humans in Europe,” molecular anthropologist Heather Norton of the University of Arizona, Tucson, said in her talk.

    Lighten up.

    A gene for pale skin swept through Europeans relatively recently.


    The genetic origin of the spectrum of human skin colors has been one of the big puzzles of biology. Researchers made a major breakthrough in 2005 by discovering a gene, SLC24A5, that apparently causes pale skin in many Europeans, but not in Asians. A team led by geneticist Keith Cheng of Pennsylvania State University (PSU) College of Medicine in Hershey found two variants of the gene that differed by just one amino acid. Nearly all Africans and East Asians had one allele, whereas 98% of the 120 Europeans they studied had the other (Science, 28 October 2005, p. 601).

    Norton, who worked on the Cheng study as a graduate student, decided to find out when that mutation swept through Europeans. Working as a postdoc with geneticist Michael Hammer at the University of Arizona, she sequenced 9300 base pairs of DNA in the SLC24A5 gene in 41 Europeans, Africans, Asians, and American Indians.

    Using variations in the gene that did not cause paling, she calculated the background mutation rate of SLC24A5 and thereby determined that 18,000 years had passed since the light-skin allele was fixed in Europeans. But the error margins were large, so she also analyzed variation in the DNA flanking the gene. She found that Europeans with the allele had a “striking lack of diversity” in this flanking DNA—a sign of very recent genetic change, because not enough time has passed for new mutations to arise. The data suggest that the selective sweep occurred 5300 to 6000 years ago, but given the imprecision of the method, the real date could be as far back as 12,000 years ago, Norton said. She added that other, unknown, genes probably also cause paling in Europeans.
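    The dating logic is molecular-clock arithmetic: the expected number of new mutations on a swept haplotype equals the per-site mutation rate times the sequence length times the generations elapsed, so observed diversity converts into time. A sketch with illustrative values—the mutation rate, mutation count, and generation time below are assumptions chosen for illustration; only the 9300-bp length comes from the text:

```python
def generations_elapsed(mutations_observed, mu_per_site, sites):
    """Estimate generations since a sweep: expected mutations = mu * sites * generations,
    solved for generations."""
    return mutations_observed / (mu_per_site * sites)

mu = 2.5e-8        # mutations per site per generation (illustrative)
sites = 9300       # base pairs of SLC24A5 sequenced
generation = 25    # years per human generation (illustrative)

gens = generations_elapsed(0.167, mu, sites)  # 0.167 = hypothetical mean mutation count
print(round(gens * generation))               # rough age of the sweep, in years
```

With these toy inputs the estimate falls near 18,000 years, on the order of Norton's point estimate; the wide error margins she reports reflect how sensitive such dates are to each of these parameters.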

    Either way, the implication is that our European ancestors were brown-skinned for tens of thousands of years—a suggestion made 30 years ago by Stanford University geneticist L. Luca Cavalli-Sforza. He argued that the early immigrants to Europe, who were hunter-gatherers, herders, and fishers, survived on ready-made sources of vitamin D in their diet. But when farming spread in the past 6000 years, he argued, Europeans had fewer sources of vitamin D in their food and needed to absorb more sunlight to produce the vitamin in their skin. Cultural factors such as heavier clothing might also have favored increased absorption of sunlight on the few exposed areas of skin, such as hands and faces, says paleoanthropologist Nina Jablonski of PSU in State College.

    Such recent changes in skin color show that humans are still evolving, says molecular anthropologist Henry Harpending of the University of Utah, Salt Lake City: “We have all tacitly assumed for years that modern humans showed up 45,000 years ago and have not changed much since, while this and other work shows that we continue to change, often at a very fast rate.”


    Gorillas' Hidden History Revealed

    1. Ann Gibbons


    Although gorillas are our closest living relatives other than chimpanzees, their evolution is something of a mystery. There are no fossils of gorillas and little DNA from wild ones. Now, a new study of nuclear DNA from the two species of wild gorillas offers a glimpse of their mysterious past and of how new species of primates arise.

    Unlike their cousins the chimps, these shy herbivores turn out to have diverged slowly into two species, apparently taking the better part of a million years, according to a talk at the meeting by molecular anthropologist Linda Vigilant of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. “It shows us how little we've known about speciation in gorillas,” says anthropological geneticist Anne Stone of Arizona State University in Tempe.

    Previous studies of the paternally inherited Y chromosome from gorillas suggested that the two species—eastern gorillas and western gorillas—interbred until recently. But maternally inherited mitochondrial DNA suggested that they separated more than 1 million years ago. Nuclear DNA studies sampled too few individuals to clear up the confusion, says evolutionary biologist Michael Jensen-Seaman of Duquesne University in Pittsburgh, Pennsylvania.

    Vigilant and her colleagues isolated DNA from the blood, liver, or feces of 18 of the 14,000 wild gorillas left on the planet, including three eastern gorillas from Uganda and the Congo; the western gorillas were chiefly from Cameroon. The team sequenced 14,000 base pairs of noncoding (and therefore presumably not under selection) nuclear DNA from each gorilla. Disparities turned up at 79 different sites, similar to the genetic diversity in chimpanzees, but twice as high as in humans, who are remarkable for their lack of variation. “This shows us again how odd humans are,” says Stone.

    Vigilant's team used the number of genetic differences to calculate a mutation rate, which allowed them to date the timing of the initial speciation to about 900,000 to 1 million years ago. That's just when the two species of chimpanzees went their separate evolutionary ways, suggesting that changes in climate broke up the dense forests that are home to both chimps and gorillas.

    Although chimpanzees sorted into two species rapidly, gorillas took much longer and continued to mate at low levels until 164,000 to 230,000 years ago, with males moving more than females, says Vigilant. That's surprising because today the two species live 1000 kilometers apart and most gorillas never venture far from home. “I find it amazing that gene flow persisted for hundreds of thousands of years,” she says.

    East met west.

    Eastern gorillas (right) bred with their western cousins until recently.


    Her study offers a rare window on speciation in apes, which serves as “a model for understanding human evolution,” says Jensen-Seaman. Until now, researchers have focused on the quick split between species of chimpanzees. In fact, says Jensen-Seaman, gorillas' “sloppy back-and-forth gene flow—a long, drawn-out process”—may be the norm.


    Adapting to Tibet's Thin Air

    1. Ann Gibbons


    Researchers seldom see Darwinian natural selection happening in living people. So physical anthropologist Cynthia Beall was delighted in 2004 when she discovered a trait that boosts the survival of some Tibetan children, apparently by raising the level of oxygen in their mothers' tissues—a crucial advantage during pregnancy 4 kilometers above sea level.

    On top of the world.

    Tibetans with oxygen-rich blood have more surviving children.


    Now Beall has updated her study by exploring a possible mechanism for the adaptation and by documenting that the adaptation represents some of the strongest natural selection yet measured in humans. Her team is showing how a genetic trait can dramatically improve survival in real time, in living mothers and babies. “I love this work,” says Mark Gladwin, chief of the vascular medicine branch of the National Heart, Lung, and Blood Institute in Bethesda, Maryland. “It shows you need a strong adaptive response to have children survive at high altitude.”

    Beall, of Case Western Reserve University in Cleveland, Ohio, and her colleagues have for a decade been gathering all sorts of family, biological, and fertility data from men, women, and children living in more than 900 households in 14 villages in Tibet. As part of the study, they also tracked the survival of children born to mothers who had high and low oxygen saturation in their blood. Beall reported at the meeting that women with high levels of oxygen in their blood had more than twice as many surviving babies as those with low oxygen levels—a ratio of 1:0.44. This is a startlingly strong selection pressure, she says—even stronger than that on the sickle cell gene, which protects against malaria and has a fitness ratio of 1:0.66.
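    The fitness ratios Beall cites compare mean numbers of surviving offspring, with the favored group normalized to 1. A minimal sketch, using hypothetical per-mother averages chosen only to reproduce the reported 1:0.44 ratio:

```python
def relative_fitness(favored_mean, other_mean):
    """Fitness of the disfavored group relative to the favored one
    (favored group normalized to 1)."""
    return other_mean / favored_mean

# Hypothetical surviving-offspring averages (not Beall's raw data).
w_low_oxygen = relative_fitness(favored_mean=4.5, other_mean=1.98)
s_low_oxygen = 1 - w_low_oxygen      # selection coefficient against low oxygen
s_sickle = 1 - 0.66                  # sickle cell ratio of 1:0.66, from the article
print(round(w_low_oxygen, 2), round(s_low_oxygen, 2), round(s_sickle, 2))
```

    A selection coefficient of roughly 0.56 against low-oxygen mothers, versus about 0.34 for sickle cell, is what makes this such startlingly strong selection.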

    But exactly how do these women manage to carry extra oxygen in their blood? They do not produce more hemoglobin the way Andeans living at high altitude do. One possibility is that the women with high oxygen have an adaptation that Beall is exploring independently in these same Tibetan villagers. She found that some villagers exhale extra nitric oxide in their breath, a sign of additional amounts of the gas in their blood. In those Tibetans, nitric oxide dilates the blood vessels so they can carry more blood and oxygen to organs and tissues, as measured by images of heart and lung blood vessels. The Tibetans can boost their blood volume—and so pump more oxygen to their tissues—without producing more hemoglobin or raising the blood pressure in their lungs. That's the reverse of what happens when mountaineers suffer from oxygen deficiency: The blood pressure in their lungs rises, the blood vessels constrict, and fluid builds up, suffocating the lungs.

    The next step, says Beall, is to try to see whether these two lines of research meet. She wants to find the underlying gene behind the women's high-oxygen blood—and see whether it is related to genes that regulate levels of nitric oxide in the blood. She notes, however, that it's quite possible that the Tibetans have evolved more than one way to boost blood oxygen, and that these are independent adaptations. Gladwin suggests that Beall's team also measure nitric oxide and blood pressure in the lungs in pregnant women, who are under the most physiological stress at altitude and presumably would benefit most from this adaptation. “Study the pregnant women,” he says, “because that's where you'll see evolution in action.”

    Melting Opposition to Frozen Eggs

    1. Mitch Leslie

    Oocyte cryopreservation is starting to mature now that researchers have developed a faster, better way of freezing the vulnerable cells

    Better cold?

    Freezing a human egg without damaging or killing it is tricky.


    Under the hot desert sun in Phoenix, Arizona, something chilling is going on. Human eggs are slowly being iced down at what has been called the world's first bank offering frozen, donated eggs. For about $2500, Cryo Eggs International, the company running the bank, will sell an infertile woman an egg from one of its carefully screened donors. One egg won't be enough—women typically order six to eight to boost the odds of pregnancy—and the price doesn't include shipping in a liquid nitrogen-cooled container, which can run $200 for a U.S. address and around $1000 for delivery to Europe.

    Some ethicists and physicians decry the selling of donated eggs because obtaining them from a woman involves hormone injections and a surgical procedure that can cause side effects. But Cryo Eggs's business plan is significant not because it raises new ethical issues—after all, women have been donating eggs for decades and even been getting paid—but because it signals that the practice of egg freezing has reached a new stage. And some say it's about time, given how unfair fertility is. A typical man has almost a lifetime to become a father, but a woman's reproductive prime lasts only a decade or so—and coincides with the critical time for getting an education and establishing a career.

    Men facing cancer treatment that might render them sterile also have an advantage over women. They can freeze sperm for later use with in vitro fertilization (IVF)—some physicians even plan to store sperm stem cells from prepubescent boys scheduled for similar cancer therapy (See Brinster Review, p. 404).

    But eggs have been difficult to freeze. As the largest cell in the human body, an egg brims with water that can form damaging ice crystals. “The problem with eggs is that they are sensitive to low temperature,” says Juergen Liebermann, scientific director of the Fertility Centers of Illinois in Chicago.

    However, recent technical refinements to egg freezing, also known as oocyte cryopreservation, may soon even out some of the reproductive inequalities between men and women. Although some scientists remain cautious about its effectiveness and safety, others argue that cryopreservation is ready for widespread use—not just to help cancer patients secure their fertility but also to give women more control over when they start families. The technology has already jumped beyond the realm of research. For the last 3 years, Extend Fertility, based in Woburn, Massachusetts, has marketed its services as an option for career-minded women who want to put aside oocytes to be thawed years or even decades later, when their jobs are secure or they've found the right partner. Cryo Eggs's donor bank opened the same year. Fertility clinics now offer elective egg storage.

    Early frost

    Oocyte cryopreservation sidesteps some of the practical and ethical pitfalls of IVF, says Jeffrey Boldt, an embryologist at Community Health Networks in Indianapolis, Indiana, who is also a partner in and scientific director of Cryo Eggs. For example, what to do with “spare” embryos that aren't implanted poses a dilemma, he says. Many couples are loath to discard them, and it's illegal to use them for federally funded research. Moreover, divorcing couples have fought for custody of frozen embryos in court, just as they would over children, notes clinical embryologist Michael Tucker of Georgia Reproductive Specialists in Atlanta. Fertility specialists in countries such as Italy and Germany face a different legal difficulty: Embryo freezing is forbidden.

    Frozen oocytes obviate some of these concerns. For example, people may be less squeamish about tossing out a gamete than an embryo, Boldt says. And if spare oocytes are on ice, he adds, doctors shouldn't need to create as many embryos. Custody battles also shouldn't erupt, says Tucker: A woman's eggs are her own.

    As welcome as egg-freezing is, the trick has been getting it to work. The first reports of pregnancies from thawed oocytes date back to 1986, just 3 years after the first birth from a frozen embryo. Although the use of frozen embryos created by IVF boomed, technical hurdles have largely excluded cryopreserved eggs from the clinic. Depending on who's counting, frozen oocytes account for a total of 300 to 600 babies worldwide, compared with the roughly 300,000 births from frozen embryos.

    One reason for the slow acceptance of frozen oocytes in the 1980s and 1990s is that the IVF techniques then in use were much less successful at fertilizing thawed eggs than fresh ones. Besides risking ice-crystal damage, freezing toughens the zona pellucida, the membrane around the egg that sperm must burrow through.

    Another factor that gave people pause was concern about possible birth defects. Eggs are also susceptible to cold because a mature oocyte is stalled in the middle of meiosis, the double division that parcels out the chromosomes to yield gametes. The chromosomes are hitched to the spindle, a network of microtubules that will help separate them if fertilization occurs. Even a slight drop below body temperature triggers a “chilling injury” to an egg, notes Stanley Leibo, a cryobiologist at the University of New Orleans in Louisiana. The spindle falls apart as the microtubules depolymerize. Although the structure reforms when temperatures rise, researchers haven't ruled out lasting damage from the disruption, such as abnormal numbers of chromosomes in the egg and rare birth defects.

    Freezing saps sperm, too, killing about 50% of them, says reproductive endocrinologist Kutluk Oktay of Weill Medical College of Cornell University in New York City. But the prodigious numbers in a semen sample usually ensure that enough healthy ones survive thawing to permit fertilization, he says.

    A shot in the egg for cryopreservation

    The advance that revived egg freezing, says reproductive endocrinologist Richard Paulson of the University of Southern California (USC) in Los Angeles, was intracytoplasmic sperm injection (ICSI). The technique, which researchers first tested on frozen human oocytes in 1995, shoots the sperm through the cold-hardened zona pellucida directly into the egg. “I'm surprised they got any fertilizations at all without it,” says Paulson. In 1997, Eleonora Porcu of the University of Bologna in Italy and colleagues reported the first birth from ICSI on a thawed oocyte.

    A new technique for freezing eggs might produce just as large an impact as ICSI. Bathing eggs in cryoprotectant molecules such as propanediol, ethylene glycol, and sucrose can shelter the cells from ice damage but doesn't eliminate it. The solution, many scientists now say, is a method for superfast chilling known as vitrification.

    The standard slow-freezing method—the technique Cryo Eggs uses with some modifications—entails gradually cooling the egg over about 90 minutes in a computer-controlled freezer. By contrast, fast-freezing involves dunking the egg into liquid nitrogen; it vitrifies, or transforms into a glassy material. The idea behind vitrification, says Leibo, is to “outrun chilling injury,” freezing the egg before ice can crystallize.

    Although only about 100 babies have been born from vitrified eggs, some researchers are already calling the method the future of oocyte cryopreservation. As Liebermann notes, recent statistics suggest that it boosts egg survival and fertilization over slow freezing, reducing the average number of oocytes required to produce a live birth from more than 50 to between 27 and 30.

    Ready for prime time?

    Hundreds of seemingly healthy babies around the world prove that oocyte cryopreservation works. But is it safe and efficient enough to become a standard part of assisted reproduction? In an analysis published last year in Fertility and Sterility, Oktay and colleagues pooled results of 26 studies on oocyte cryopreservation dating from 1997 to 2005, covering 97 children. Overall, slow-frozen eggs fell short of fresh ones on every measure, including fertilization rate and implantation rate. (There weren't enough data on vitrification to compare it to slow freezing, the researchers determined.) Porcu says that her experience is similar. About 3% to 4% of frozen eggs will yield a baby, versus 6% to 8% of fresh eggs, she says.
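    Those per-egg success rates also explain why patients bank several eggs at once. Assuming each egg succeeds independently (a simplification), the number needed for a given chance of at least one live birth follows from the geometric distribution; the rates below are the article's round figures:

```python
import math

def eggs_needed(per_egg_rate, target_prob):
    """Eggs required so the probability of at least one live birth
    reaches target_prob, assuming independent per-egg success."""
    return math.ceil(math.log(1 - target_prob) / math.log(1 - per_egg_rate))

print(eggs_needed(0.04, 0.5))  # frozen eggs at ~4% per egg
print(eggs_needed(0.07, 0.5))  # fresh eggs at ~7% per egg
```

    At a 4% per-egg rate, even a coin-flip chance of a birth takes 17 frozen eggs versus 10 fresh, roughly the gap Porcu describes.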

    A clear difference.

    A human oocyte (above) is full of water that forms ice crystals when frozen slowly. Faster freezing can minimize ice damage; a saline solution slowly frozen has a lot of ice (inset, left), whereas the vitrified solution is clear (inset, right).


    For many physicians and ethicists, the statistics back limited use of cryopreservation. Guidelines released by the American Society for Reproductive Medicine (ASRM) last fall regard the procedure as experimental and endorse it only for preserving fertility in women with cancer—as long as there's institutional review board supervision. An alternative that might enable women to retain their egg-producing capacity—freezing ovarian tissue and reimplanting it after treatment—is still for research only, the guidelines conclude. Moreover, ASRM advises against making elective egg freezing commercially available, citing the lack of long-term studies on babies' health. Oktay, who headed the ASRM committee, notes that freezing an egg effectively ages it 8 or 9 years. So unless a woman plans to wait longer than that to try to get pregnant, she is better off using her own fresh eggs.

    Other fertility experts contend that the time is ripe for egg freezing, given the lack of alternatives. Although ethicists railed when Extend Fertility began advertising elective egg freezing to young women, the practice has spread, especially in the United States, where fertility clinics are unregulated. How many facilities offer it now isn't certain, says Oktay. His clinic doesn't. But Paulson's USC clinic does. “I don't feel I can deny access to this technology” to well-informed patients, he says. The procedure “levels the field for men and women” reproductively.

    Boldt also argues that the evidence backs the use of frozen eggs in fertility treatments for more than just cancer patients. “The technology is there to look at [egg banking] as a realistic option,” he says. Boldt adds that frozen oocytes even offer advantages over fresh ones; the IVF procedure is simpler because the egg donor and recipient don't need to be reproductively synchronized.

    Researchers agree on what's missing from our understanding of egg freezing: a clear picture of the health of the resulting children. Although some scientists have offered preliminary results for small numbers of children, no one has conducted a long-term, comprehensive study of babies born from defrosted eggs. That could soon change. Porcu and colleagues are working to create an Internet database in which clinicians can record information. The timing is apt. The oldest offspring are entering their reproductive years, so it will be possible to determine whether their fertility is normal. That research might tell us more about whether egg freezing deserves a warm reception.

    A Close Look at Urbisexuality

    1. John Travis

    A developmental biologist takes aim at understanding the evolutionary origins of eggs and sperm in our 600-million-year-old ancestor

    For a creature no one has ever seen and never will see alive, Urbilateria generates a lot of passion. Scientists have vigorously debated whether it sported legs or antennae, whether it had a true heart, and whether its body was segmented. Such arguments may never be settled, however. Urbilateria is a hypothetical organism that lived 550 million to 800 million years ago and was the last common ancestor of a menagerie that includes mollusks, worms, flies, mice, and people.

    Discussions of this mythical creature are more than just pub-fueled speculations at the end of a long day in the laboratory, say practitioners of evo-devo, a field that uses developmental biology to study evolution and vice versa. Although some animals have no anatomical symmetry (sponges) or display radial symmetry (corals and jellyfish), most animal phyla have bilateral symmetry, and Urbilateria would have been their forerunner. As such, Urbilateria offers a framework for thinking about how the current diversity of bilaterians emerged from the twists and turns of evolution.

    In 2002, Cassandra Extavour, a postdoc in Michael Akam's laboratory at the University of Cambridge in the U.K., realized something was missing from all the musings on Urbilateria. “So much of evo-devo over the last 20 years centered on what Urbilateria looked like,” says Extavour. “But no one [was] talking about how this animal reproduced.”

    So, in talks at the Society for Integrative and Comparative Biology annual meeting in January, and in book chapters in press, this developmental biologist has begun to discuss what she cleverly calls “urbisexuality.” Her approach makes sense, says Adam Wilkins, editor of Bioessays and a geneticist who studies sex determination. “We'd like to know what the earliest bilateral animals looked like and acted like, and that includes their reproductive history.”

    The sex lives of these ancient creatures have eluded Extavour, but she has made progress understanding one of the more fundamental aspects of Urbilaterian reproduction: the origins of their sperm and eggs. Her work “is original,” says paleobiologist Douglas Erwin of the National Museum of Natural History in Washington, D.C., who has written considerably on the nature of Urbilateria. “It never dawned on me to consider these sorts of issues.” Wilkins agrees: “As far as I know, this is the first serious attempt to look at their reproduction.” Moreover, Extavour's contemplation of urbisexuality has led her to rethink the evolutionary connections between primordial germ cells, which ultimately make sperm and eggs, and the stem cells that give rise to other tissues.

    Future sperm or eggs.

    Antibodies to a protein called Vasa (pink) reveal the relatively few cells in this crustacean embryo destined to become germ cells.


    Humble beginnings

    The sexiest questions about urbisexuality obviously concern Urbilateria's sex life. Was the ancient animal hermaphroditic? Or were the sexes of Urbilateria separate, and if so, did reproduction rely on external or internal fertilization? If the latter were the case, then Urbilateria's gonads would have been quite sophisticated.

    Extavour considers the answers to such questions largely unknowable. Typically, evolutionary biologists piece together the lifestyle of a long-gone organism by scanning the modern tips of evolutionary trees for physical traits or developmental genetic networks shared by most of its descendants. Traits common to a broad spectrum are likely ancestral. But so many reproductive strategies are strewn across every bilaterian phylum that the identification of an ancestral state is probably impossible. Both egg laying and vivipary, in which an embryo develops inside a parent, are seen across all clades, for example, so it's hard for researchers to tell which strategy evolved first.

    Picking a more tractable question about urbisexuality, Extavour has concentrated on what got her thinking about Urbilateria in the first place: primordial germ cells. She notes that these germ-cell precursors compete with one another during development within arthropods. The primordial germ cells often arise in one portion of the embryo and migrate relatively long distances to what becomes a gonad; there they develop into sperm or eggs. But studies had shown that only some of the primordial germ cells win this race to the gonad; the latecomers vanish from the embryo's so-called germ line, its eggs or sperm. This example of survival of the fittest—or fastest—led Extavour to take a broader Darwinian look at primordial germ cells. “I started to think about the role of germ cells in evolution and began wondering how different animals make germ lines,” she says.

    Much has been written about when and why animals first evolved a germ line. Some researchers have even argued that the emergence of specialized reproductive cells distinct from other tissue—the soma—was a prerequisite for multicellular organisms. Extavour agrees with that thinking. So, assuming that a germ line predated Urbilateria, she asked how certain cells in an Urbilateria embryo ended up as the animal's germ cells, not a heart or a nerve cell.

    Modern organisms typically use one of two methods to give germ cells their unique identity. By one route, preformation, part of an unfertilized egg's cytoplasm contains a distinctive mix of proteins, chemicals, and genetic instructions. This material is the so-called germplasm. As the egg divides and redivides, bits of germplasm wind up in some cells but not others. Inside those cells, it sets off a genetic cascade. Genes such as vasa and nanos become active, for example, and help prepare those cells for a future as eggs or sperm. In animals relying on preformation, that germplasm is retained or restocked within new oocytes; thus, it is inherited from generation to generation.

    Choosing germ cells.

    Of the two methods animals use to specify their germ cells, epigenesis is more widespread and likely to be the mode used by Urbilateria (artist's rendition), the earliest bilateral animal.


    Where there is no germplasm, epigenesis comes into play. All the cells in the embryo start out apparently equal. But as the embryo grows, certain cells release chemicals that force their neighbors to become the germ line. The question for Urbilateria: Do the cells that decide to become its primordial germ cells “inherit something or get an instruction?” asks Extavour.

    The mouse relies on epigenesis: Certain embryonic cells release bone-morphogenic proteins that tell other cells to become germ cells. But the more primitive animals commonly studied in the lab—zebrafish, fruit flies, nematodes, frogs—all depend on germplasm to specify their germ cells, so biologists had assumed that preformation was the ancestral state. In a 2003 review of existing data on 28 animal phyla, however, Extavour and Akam found little cytological, cell lineage, or experimental data to indicate preformation in most species. They concluded that epigenesis was very widespread and that it probably arose first.

    From stem cells to germ cells?

    Extavour has continued to amass data supporting that surprising conclusion and has also developed a model of how preformation could have arisen in some of Urbilateria's descendants. In epigenesis, many of the molecules inside primordial germ cells that drive the cells' differentiation into sperm or eggs vanish once that job is done. Extavour thinks that at some point post-Urbilateria, a mutation occurred that allowed those factors to persist in an oocyte—voilà, preformation. It could have been a simple matter of keeping genes such as vasa and nanos turned on in oocytes instead of turning them off—so simple, in fact, that Extavour suspects this happened multiple times on different evolutionary branches. If that's true, she speculates, there are modern animals able to employ both modes of germ cell specification. Even so, their embryos likely rely on only one.

    “Of all the evo-devo stories, Extavour's work is among the most interesting,” says Helen White-Cooper of the University of Oxford, U.K., who studies germ-cell development in fruit flies. To strengthen her case, Extavour should try to confirm whether an animal known to rely upon preformation “might be able to use another mechanism if forced,” White-Cooper suggests. Indeed, Extavour and collaborators are now doing just that.

    Next, Extavour wants to understand evolutionary links between primordial germ cells and the somatic stem cells that give rise to all the nongermline tissues in an organism. The two cell types share many morphological features, and certain genes that guide somatic stem cell growth appear related to genes active in primordial germ cells. Also, at the most basic level, both classes of cells are close to immortal: They survive and divide for a long time without specializing.

    Moreover, researchers have shown that the primordial germ cells of some species, including humans, can act as somatic stem cells, giving rise to more than just reproductive cells. And others are coaxing nongermline stem cells from early embryos to develop into sperm and eggs (Science, 23 September 2005, p. 1982).

    Cell biologists have considered somatic stem cells to be developmental spinoffs of germ cells because of their similarities in gene expression and differentiation. Twisting that idea a bit, Extavour hypothesizes that germ cells arose as a subset of stem cells. She notes that primitive nonbilateral animals such as sponges and jellyfish maintain stem cells that can give rise to either new somatic cells or gametes. Because the stem cells of sponges and jellyfish are dispersed throughout their bodies, she suggests that Urbilateria's germ cells were similarly scattered. As scientists identify more genes that are active in both primordial germ cells and somatic stem cells and trace their history, Extavour hopes to confirm this evolutionary scenario.

    Wilkins, who plans to collaborate with Extavour, calls her unconventional revision of the history of germ cells and stem cells a “big idea.” And he thinks Extavour's investigations of urbisexuality will draw others to probe the evolution of germ cells and reproductive systems. The topic, Wilkins laughs, “will become, I can't resist saying, sexier to study.”