News this Week

Science  29 Jun 2007:
Vol. 316, Issue 5833, pp. 1824


    Along With Hope, North Korean Opening Brings Hard Choices

    Richard Stone

    A moment of truth is at hand in the long slog to denuclearize the Korean peninsula. As Science went to press, a team from the International Atomic Energy Agency (IAEA) had arrived in Pyongyang to discuss the shutdown of North Korea's plutonium-generating reactor at Yongbyon. If a game plan is agreed to, the next round of six-party talks, expected to convene in Beijing next month, will tackle thornier issues: a North Korean declaration of its nuclear facilities and materials, and the step-by-step dismantlement of its weapons program.

    No one anticipates smooth sailing in the upcoming talks among the two Koreas, China, Japan, Russia, and the United States. For starters, analysts doubt whether North Korea will come clean about all its nuclear activities. And the Bush Administration is resisting a key North Korean demand: the provision of light-water nuclear reactors (LWRs) for electricity generation. U.S. officials are debating alternatives as part of a compensation package for dismantlement. "This could be a make-or-break issue," says former U.S. State Department official Joel Wit, a visiting fellow at Johns Hopkins University's School of Advanced International Studies in Washington, D.C.

    Hopes are buoyed, however, by the surprisingly good outlook for the possible normalization of U.S.-North Korea ties, U.S. and South Korean officials told Science. Liaison offices could open in Pyongyang and Washington, D.C., within months after dismantlement begins, they say—although publicly, U.S. officials have insisted that denuclearization must be completed before normalization. And the Bush Administration has assented to North Korea's retaining some nuclear capacity, such as the ability to produce medical radioisotopes.

    At six-party talks last February, North Korea agreed to shut and seal the Yongbyon complex within 60 days—including the closure of a reprocessing facility in which plutonium presumably was extracted from spent fuel rods. But North Korea refused to proceed until $25 million frozen in a Macau account was released; a North Korean official confirmed on 25 June that the cash had arrived and would be used for "humanitarian purposes." With that glitch overcome, Assistant Secretary of State Christopher Hill, lead U.S. envoy to the talks, flew secretly to Pyongyang late last week—the highest level U.S. visit since U.S. officials in 2002 accused North Korea of pursuing a clandestine program to enrich uranium for bombs. North Korea subsequently expelled IAEA inspectors and pulled out of the Nuclear Non-Proliferation Treaty; it later tested a nuclear device (Science, 13 October 2006, p. 233).

    One giant leap?

    Returning from Pyongyang, lead U.S. nuclear negotiator Christopher Hill said North Korea reaffirmed its commitment to denuclearization.


    In Pyongyang this week, the four-person IAEA delegation led by Olli Heinonen, deputy director-general for safeguards, was slated to discuss verification procedures for Yongbyon's shutdown, a process that could stretch into August. After completion, North Korea will receive 50,000 tons of heavy fuel oil.

    Future milestones may prove more elusive. Next up: North Korea must issue a declaration of nuclear assets. U.S. officials insist on a full accounting. "When the members of the six-party talks say [their entire] nuclear program, we mean all, all aspects of it," State Department spokesperson Sean McCormack told reporters last week. That would include a disclosure of equipment and facilities intended for uranium enrichment—a program whose existence North Korea has denied. The declaration will top the agenda of six-party talks next month, says a senior State Department official. "I hope we'll see a complete declaration by the end of this year," he says.

    Dismantlement would follow, but the parties have yet to agree on precisely what that entails—a "complete and irreversible" process, as the U.S. sees it, or one that could be undone if talks collapse. North Korea would receive another 950,000 tons of heavy fuel oil for dismantlement. But North Korean diplomats have consistently stated that they will settle for nothing less than LWRs as a long-term energy solution. "Obtaining at least one LWR is critical to [leader] Kim Jong Il in terms of domestic legitimacy," Peter Hayes and David Von Hippel of the Nautilus Institute in San Francisco, California, argue in a new analysis.* Under the now-scuttled 1994 Agreed Framework, North Korea was to receive a pair of LWRs for Yongbyon's dismantlement. Reactor construction was frozen in 2003, and it's unclear whether the Bush Administration will countenance an LWR revival at the six-party talks. "No breakthrough on that yet," says the State Department official.

    But other elements of a civilian nuclear program are on the table. The State Department official says the United States has "no problem" with North Korea's maintaining a capacity to produce medical radioisotopes. A cyclotron at the Institute of Atomic Energy in Pyongyang produces primarily gallium-66 for treating breast and liver cancers, and a research reactor at Yongbyon has generated iodine radioisotopes for diagnosis and treatment of thyroid cancer. The Soviet-made reactor would have to be converted from running on highly enriched uranium—the stuff of bombs—to low-enriched uranium. Such a conversion was carried out recently on a Libyan research reactor at a modest cost of less than $10 million, David Albright, president of the Institute for Science and International Security in Washington, D.C., noted in a report last March. He and Wit discussed options with officials in Pyongyang earlier this year, when they were apparently the first Americans allowed to visit North Korea's Institute of Atomic Energy.

    Sustaining peaceful nuclear activity would help ensure that a fraction of North Korea's estimated 2000 nuclear weapons researchers could put their skills to use after dismantlement, Wit says. A major initiative to engage Korean weaponeers, perhaps modeled after one launched in Russia after the Soviet breakup, "is entirely feasible," he says.

    However, the denuclearization pledge, which Hill says North Korean officials reaffirmed last week in Pyongyang, may yet prove illusory. "The political-symbolic value of nuclear weapons to Kim Jong Il may now surpass any affordable price," Hayes and Von Hippel assert. Building trust will be essential to convincing Kim that he can live without the bomb.


    Stem Cell Science Advances as Politics Stall

    Constance Holden

    Even as President George W. Bush last week again barred the door against changes in his stem cell policy, members of Congress vowed to continue to try to loosen restrictions on research, while a stream of striking new developments promised to alter the research landscape.

    The latest news comes in two reports in this week's issue of Nature on the cultivation of a new type of embryonic stem (ES) cell. Called EpiSCs, the cells are isolated from post-implantation mouse and rat embryos. This makes them more like human ES cells than are existing mouse ES cells, and they may offer a better tool for understanding how human cells grow and differentiate, the researchers say.

    The papers come on the heels of several announcements last week at the annual meeting of the International Society for Stem Cell Research in Cairns, Australia. Researchers at Oregon Health and Science University in Beaverton said they have achieved the long-sought goal of generating ES cells from cloned monkey embryos—a “remarkable breakthrough,” according to cloning researcher Jose Cibelli of Michigan State University in East Lansing. Oregon embryologist Shoukhrat Mitalipov attributes his group's success to a gentler technique, using polarized light and direct injection, for inserting the nucleus of a body cell into an enucleated egg.

    Also in Cairns, Paul de Sousa of Edinburgh University's Roslin Institute announced that his group had generated a human ES cell line parthenogenetically—using an unfertilized egg that otherwise would have been discarded at a fertility clinic. And Robert Lanza of Advanced Cell Technology in Worcester, Massachusetts, announced that he has developed a human ES cell line from an eight-cell embryo without destroying the embryo.

    The only one of these developments published so far is the mouse and rat ES cell work, done by two teams: one led by Ronald McKay of the U.S. National Institute of Neurological Disorders and Stroke with colleagues at the University of Oxford, U.K., and the other headed by Roger Pedersen and Ludovic Vallier at the University of Cambridge, U.K.

    As McKay explains it, traditional mouse ES cells cannot reveal a great deal about human ones because they are from a “more primitive” stage. For example, mouse cells, unlike other stem cell types, need the growth factor LIF (leukemia inhibitory factor). But “now we've found a mouse stem cell which follows the rule for the human cell,” McKay says. It comes from the epiblast of a mouse embryo 5.5 days after implantation in the uterus. These so-called EpiSCs are pluripotent and share other characteristics of human ES cells, says McKay, who thinks they represent a “missing link” between mouse ES cells and cells that are beginning to differentiate. The rat EpiSCs have similar properties to the mouse cells, says Pedersen, who predicts that “similar experimental conditions could be used to generate epiblast stem cells from most or all mammals.”

    Until now, says McKay, “most people thought you couldn't make cell lines after implantation.” In addition to helping elucidate human ES cells, says Renee Reijo Pera of Stanford University in Palo Alto, California, the new work suggests that scientists may be able to derive new types of ES cell lines—including from humans—that “may ultimately be more suitable for specialized purposes.”

    No go.

    Bush defends this year's stem cell veto.


    New developments have been seized upon by both sides of the debate, as the clamor to relax restrictions on human ES cell research continues. Advocates were outraged by Bush's second veto and were not mollified by an accompanying Executive Order encouraging the National Institutes of Health to continue to hunt for pluripotent cells that do not entail the destruction of embryos. Lawmakers promised to confront the president again. On 21 June, the day after the veto, the Senate Appropriations Committee amended a health budget bill to allow for federal funding of research using human ES cell lines derived before 15 June 2007—thus pushing Bush's deadline back by almost 6 years. House members aim to add the provision to as-yet-unspecified “must-pass” legislation.


    Seeking Clarity in Hormones' Effects on the Heart

    Jennifer Couzin

    Women hitting menopause these days can be forgiven for feeling baffled about the risks of hormone replacement therapy (HRT). Several years ago, researchers announced that the Women's Health Initiative (WHI), a pair of massive trials involving more than 27,000 women, had shown HRT to be surprisingly unhelpful, even unsafe—in particular, a combination of estrogen and progestin appeared to cause heart attacks rather than prevent them, as expected. Hormone use plummeted.

    But now new studies that break down WHI participants along age lines are suggesting that women in their 50s, those most likely to suffer menopause symptoms that can be helped by hormones, may not experience cardiac risks from the drugs after all—and might even benefit, depending on whether they received the combination or estrogen alone. Even among researchers who collaborate in the field, the findings remain both nuanced and contentious, with some disagreeing over how to interpret the data they collect. Researchers and the reporters who cover their work are struggling, too, in assessing the overall risk-benefit balance of HRT amid a stream of papers that examine individual risk factors in isolation.

    The latest salvo came last week in the New England Journal of Medicine. There, WHI researchers described computed tomography scans of the heart performed in a subset of WHI participants: more than 1000 women age 50 to 59 who had had a hysterectomy and, for an average of about 7 years, received either a placebo or estrogen alone. (Others in WHI received estrogen and progestin, to protect against uterine cancer.) Led by JoAnn Manson of Harvard University, a principal WHI investigator, the group found that those in the estrogen-only group had about 50% less coronary artery calcification. Higher levels of calcification are thought to increase risk of heart disease, although it is not certain that lower levels equate to lower risk.

    The study came after another in April in the Journal of the American Medical Association, which found fewer heart attacks in WHI participants on estrogen in their 50s compared with those on placebo. Although the difference was not statistically significant, it still seemed pronounced: 21 cases out of 1637 estrogen takers versus 34 out of 1673 in the placebo group. Heart attacks hit about equally among those in their 50s taking estrogen and progestin versus placebo, but again, the numbers were too small to definitively measure risk. Health hazards rose with age in both hormone cohorts.
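    The claim above, that the gap was pronounced yet not statistically significant, can be checked against the reported counts with a standard two-proportion z-test. This is a back-of-the-envelope sketch, not the WHI investigators' analysis; the normal approximation and the helper function are assumptions of the sketch:

    ```python
    import math

    def two_proportion_z(events_a, n_a, events_b, n_b):
        """Two-sided two-proportion z-test via the normal approximation."""
        p_a, p_b = events_a / n_a, events_b / n_b
        pooled = (events_a + events_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # two-sided p-value from the standard normal tail
        p_value = math.erfc(abs(z) / math.sqrt(2))
        return p_a, p_b, z, p_value

    # Counts reported for WHI participants in their 50s (estrogen vs. placebo)
    rate_e, rate_p, z, p = two_proportion_z(21, 1637, 34, 1673)
    print(f"estrogen arm: {rate_e:.2%}, placebo arm: {rate_p:.2%}")
    print(f"z = {z:.2f}, two-sided p = {p:.2f}")
    ```

    The placebo group's rate (about 2.0%) is indeed well above the estrogen group's (about 1.3%), but the test statistic falls short of the conventional 1.96 threshold, consistent with the article's "not statistically significant."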

    “Increasingly, the view is that the effects of estrogen on heart disease are different in younger, recently menopausal women than older women,” says Manson.

    One theory is that, in WHI, many volunteers were in their 60s and 70s and began receiving hormones when they were well into menopause and had adjusted to life with less estrogen. “The artery has developed for 20 years longer in the absence of any hormone and is now seeing it for the first time,” says Michael Mendelsohn, director of the Molecular Cardiology Research Institute at Tufts-New England Medical Center in Boston, Massachusetts. Such an abrupt change could cause unanticipated effects, especially in the presence of atherosclerosis.

    Heart hazard?

    In WHI's roughly 7-year trial of estrogen alone (top), heart attack risks seemed somewhat different than in its estrogen and progestin trial that ran about 5.6 years.


    It's possible that in younger, comparatively healthier hearts, estrogen may have the good effects seen in animal studies, such as making arteries more pliable and preventing white cells from sticking to them. But in older arteries, estrogen might “destabilize existing plaque,” speculates Jacques Rossouw, chief of the Women's Health Initiative Branch at the National Heart, Lung, and Blood Institute in Bethesda, Maryland.

    For women weighing HRT, interpreting studies like this one may be complicated by conflicting messages from the investigators. Manson, for example, says the new data on calcification “support the theory that estrogen may slow plaque buildup.” She now believes that “heart risk does not appear to enter into the equation for younger women seeking relief of menopausal symptoms.”

    But WHI investigator Marcia Stefanick of Stanford University in Palo Alto, California, a co-author of the new study, thinks differently. “To extrapolate this subsample of women to all women who are 50 to 59 is a huge mistake,” she says, noting in particular that a very high number were obese, and it's not clear how the data apply to thinner women. Stefanick and Manson urge drawing a bright line between estrogen taken alone and the combination of estrogen and progestin. Manson, however, is more convinced than Stefanick that the former regimen appears a bit safer than the latter, except that both increase stroke risk equally. But because estrogen alone can raise the risk of uterine cancer, it is usually taken only by women who have had a hysterectomy. “We definitely have disagreements” about interpreting the cardiac data, but “we are working together” to disseminate it, Stefanick says.

    Meanwhile, the media tend to cover one study and one disease at a time, leaving the big picture elusive or seemingly inconsistent. WHI, Stefanick explains, has so much data on so many dimensions of health and hormones—breast cancer, bone density, memory, heart health, and more—that it's publishing separate studies on each of these parameters. “You have a new paper, and everyone says you've reversed” your position on the safety of hormones, she says, “and you have to say, 'No, [before] I was talking about something else.'”


    Replacement Genome Gives Microbe New Identity

    Elizabeth Pennisi

    For decades, molecular biologists have genetically modified microbes and other kinds of cells by adding short DNA sequences, whole genes, and even large pieces of chromosomes. Now, in a feat reported in a paper published online by Science this week, one group has induced a bacterium to take up an entire 1.08-million-base genome in one gulp. In doing so, microbiologist John Glass and his colleagues at the J. Craig Venter Institute in Rockville, Maryland, have transformed one bacterial species into another.

    “This is a significant and unexpected advance,” says molecular biologist Robert Holt of the Michael Smith Genome Sciences Centre in Vancouver, Canada. But the advance remains somewhat mysterious. Glass says he doesn't fully understand why the genome transplant succeeded, and it's not clear how applicable their technique will be to other microbes. Nonetheless, “it's a necessary step toward creating artificial life,” says microbiologist Frederick Blattner of the University of Wisconsin, Madison.

    Glass and his colleagues are among several groups trying to build a microbe with the minimal gene set needed for life, with the goal of then adding other useful genes, such as ones for making biofuels. In anticipation, Glass and colleagues wanted to develop a way to move a complete genome into a living cell.

    As a proof of principle, they tried transplanting the single, circular chromosome of Mycoplasma mycoides large colony (LC) into a close relative, M. capricolum. Both of these innocuous goat pathogens lack the cell walls typical of many other bacteria, eliminating a possible impediment to genome transfer.

    At the Venter Institute, Carole Lartigue and her colleagues first added two genes to M. mycoides LC that would provide proof if the transfer of its genome worked. One gene conferred antibiotic resistance, and the other caused bacteria expressing it to turn blue. Lartigue removed the modified chromosome from M. mycoides LC, checked to make sure she had stripped off all proteins from the DNA, and then added the naked genome to a tube of M. capricolum. Within 4 days, blue colonies appeared, indicating that M. capricolum had taken up the foreign DNA. When they analyzed these blue bacteria for sequences specific to either mycoplasma, the researchers found no evidence of the host bacterium's DNA.

    Microbial geneticist Antoine Danchin of the Pasteur Institute in Paris calls the experiment “an exceptional technical feat.” Yet, he laments, “many controls are missing.” And that has prevented Glass's team, as well as independent scientists, from truly understanding how the introduced DNA takes over the host cell.

    Glass suspects that at first, both genomes are present in M. capricolum. But when one of those double-genomed microbes divides, one genome somehow goes to one daughter cell and the other to the second. By exposing the growing colony to an antibiotic, the researchers selected for cells that contain only the M. mycoides LC genome.

    Species makeover.

    Blue signals successful genome transfer in these bacterial colonies.


    Other researchers are not sure the strategy will work on bacteria with cell walls. And Danchin expects it will be difficult to swap genomes among bacteria that aren't as closely related. Regardless, George Church of Harvard University questions the need for genome transplantation; instead of starting with a minimal genome, he's making useful chemicals by simply adding customized genes to existing species' genomes.

    Nonetheless, Markus Schmidt of the Organisation for International Dialogue and Conflict Management in Vienna, Austria, predicts that the mycoplasma genome swap will force more discussions about the societal and security issues related to synthetic biology. “We are one step closer to synthetic organisms,” he says.

  U.S. BUDGET

    Democratic Congress Begins to Put Its Stamp on Science

    Eli Kintisch*
    *With reporting by Jocelyn Kaiser, Andrew Lawler, Jeffrey Mervis, and Erik Stokstad.

    Six months into their rule on Capitol Hill, the Democrats have begun to make their mark on science policy. Many of their moves have underscored differences with the White House, including efforts to overturn the ban on federal funding for work on new embryonic stem cell lines, prominent accusations that the Bush Administration has politicized science advice, and proposals to increase and reshape funding for climate change research (see sidebar). But as far as the Administration's most prominent science initiative is concerned, the new Congress has so far been more than supportive, at least in loosening the purse strings: It is poised to top the president's generous requests for the multiagency American Competitiveness Initiative (ACI), which is aimed at sharply increasing funds for the physical sciences.

    It's unclear how the hyperpartisan atmosphere might affect Democratic budget aims, but the ambitious spending plans are helping balloon domestic spending bills. That's attracted White House threats of the veto pen. And looming over the whole process are yet-to-be-written defense bills, which could be the big spoiler if war-related funding requires some across-the-board cuts later in the year.

    In the past few weeks, House committees have approved most of the appropriations bills that contain funds for science, and a picture has started to emerge of how science policy is shaping up in the new Congress. Some highlights, agency by agency, of the action thus far:

    National Institutes of Health (NIH): There's not much relief in sight for NIH. An appropriations bill passed by a House panel and a companion measure approved by the Senate spending panel would both give NIH a small raise, reversing the president's proposed $279 million cut. The Senate boost of $1 billion, for example, would provide a 3.5% increase—only half the amount biomedical research advocates are hoping for. That would bring NIH's total budget to $29.9 billion, $250 million more than the House has approved.

    Even the Senate total is less than meets the eye, however. Both the House and Senate measures would add $200 million to the $100 million that NIH now transfers to the Global AIDS Fund, effectively cutting the Senate raise to only 2.8%. Still, even that meager increase would push the bill's total above the limit the White House has indicated would be acceptable. A provision that would permit federal funding for recently developed stem cell lines (see p. 1825) would further encourage a Bush veto. Congressional action “is only half the battle,” says Jon Retzlaff of the Federation of American Societies for Experimental Biology in Bethesda, Maryland.
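    The percentages in the two paragraphs above can be reconciled with simple arithmetic. Note that the FY2007 base of roughly $28.9 billion is inferred here from the reported figures (the $29.9 billion total minus the $1 billion Senate raise); it is not stated in the article:

    ```python
    # Reconciling the NIH budget figures reported above. The FY2007 base is
    # inferred from the article's numbers, not stated directly.
    base = 29.9e9 - 1.0e9          # implied FY2007 NIH budget (~$28.9B)
    senate_raise = 1.0e9           # Senate boost
    aids_fund_shift = 0.2e9        # extra $200M routed to the Global AIDS Fund

    nominal_pct = senate_raise / base * 100
    effective_pct = (senate_raise - aids_fund_shift) / base * 100

    print(f"nominal Senate increase:   {nominal_pct:.1f}%")   # ~3.5%
    print(f"effective Senate increase: {effective_pct:.1f}%") # ~2.8%
    ```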

    NASA: The House appropriations committee has given a thumbs-up to the president's $3.9 billion exploration effort, to be run by NASA, but the committee also made clear that the agency's stressed science programs must thrive as well. Lawmakers added $60 million for data, research, and analysis in 2008, a slap at the agency's attempts to hold down such spending in order to pay for science project overruns and a new launcher. The House bill also directs NASA to ask the National Research Council to conduct a study of life and microgravity sciences, two areas the agency has virtually abandoned in recent years. The boosts in science, however, would come largely by deducting funds from NASA's tracking and data-relay satellite system, used to communicate with both military and civilian satellites—a cut certain to be opposed by the Administration. Senate appropriators have yet to act.

    National Science Foundation (NSF): House appropriators have added $80 million to the president's request for NSF, for a total budget increase of 10%, to $6.51 billion. Nearly all the money the House added would supplement NSF's $750 million education directorate. Legislators were especially kind to the agency's fledgling effort to help undergraduates who want to become math and science teachers, adding $36 million to the $10 million Robert Noyce Scholarship program. The most controversial element of the House approach is a $10 million program to support so-called transformative research. The chair of NSF's oversight board, Steven Beering, says such a program “would be wonderful.” But foundation officials oppose a new program to do what they say NSF is already doing—funding the most innovative research—citing as proof the large number of NSF-supported U.S. Nobel laureates.

    Department of Energy (DOE): Science lobbyists are ecstatic over bipartisan generosity toward the physical sciences, ACI's focus. The House has basically matched the Administration's requested $4.4 billion for DOE's Office of Science, the government's biggest patron of the physical sciences, with some extra funds for earmarks and climate studies. That would amount to a 16% boost. American Physical Society lobbyist Michael Lubell says he “thought we had a big problem last fall” after the Democratic triumph because of what he calls “Democratic tendencies” to support industrial, near-term research. But he calls the Democrats' performance thus far “very pleasing.”

    Environmental Protection Agency (EPA) and National Oceanic and Atmospheric Administration (NOAA): The House and Senate spending committee bills are $300 million apart in their plans for EPA, although the gap is narrower in the research account. The House would appropriate $8.1 billion for EPA, a 4.7% increase over last year, and boost the agency's spending on science and technology by $55 million to $788 million. The majority of the increase for science would go to a new climate change commission (see below). In addition, clean-air research would rise by an unprecedented 21%, to $114 million. Details on the Senate plan weren't available by press time, but the total for science and technology would rise to $773 million.

    The House, which normally cuts the president's funding request for NOAA, would instead increase it by $190 million to just above $4 billion. The Office of Oceanic and Atmospheric Research is slated for $415 million, an increase of $52 million over last year. Of that amount, $20 million would go to competitive grants in climate research. “I haven't seen anything that big recently,” says Peter Hill of the Consortium for Oceanographic Research & Education in Washington, D.C. Hill expects the Senate will drop in some earmarks, perhaps bumping up the agency to $4.3 billion.

  New Priorities for Climate Change Research

    Eli Kintisch

    When Democrats gained control of the U.S. Congress, they made climate change one of their top priorities. But they quickly realized that putting into law caps on greenhouse gas emissions could take years of political wrangling—and possibly a new president. So while proposals for emissions controls have captured headlines (Science, 11 May, p. 813), key legislators have quietly focused on a more immediate goal: reordering priorities in climate change research to reflect the most pressing questions.

    Budget bills now working their way through Congress (see accompanying story) include more than half a billion dollars for new applied energy research, a novel $50 million climate research commission that would address regional impacts, and some $17 million to spread the message on climate change through education and public outreach. Climate change research has sufficiently quantified anthropogenic warming, say Democratic aides. These new initiatives focus on “the causes, the impacts, and solutions,” as a spokesperson for House Majority Leader Steny Hoyer (D-MD) describes them.

    Some Democratic proposals have followed explicit calls—even requests for hardware—from the science community. Earth science researchers were dismayed when a Pentagon review stripped climate sensors from an $11.5 billion weather satellite system last year (Science, 16 June 2006, p. 1580), but Congress did little more than investigate. This year, a draft spending bill would set aside $24.9 million for NASA and the National Oceanic and Atmospheric Administration to begin to develop two of the canceled sensors—both crucial for measuring Earth's heat balance—to bolt onto the spacecraft later if possible. The same bill calls for $60 million to start developing a series of earth science missions at NASA in the precise order recommended last year by a National Academies panel that looked at needs and priorities for Earth observation over the next decade. The proposed educational funds also loosely follow that panel's recommendation to “improve scientific literacy” about Earth's climate.

    Elsewhere, Democrats have set out on their own. Representative Norman Dicks (D-WA), chair of the Interior appropriations subcommittee, held a hearing in April on potential climate change impacts on everything from drought in the Great Basin in the western United States to insect populations that could ravage American forests. His subcommittee subsequently approved $94 million for new climate research at environmental agencies and endorsed Dicks's proposal for a climate commission that one aide describes as “out of the box.” Chaired by the president of the National Academy of Sciences, it would disburse $50 million over 2 years through the Environmental Protection Agency for underfunded research areas with an emphasis on regional impacts and adaptation ($5 million would go to administration). Similarly, last week the House passed $20 million in new funding for improved computer models.

    Greening of Congress.

    House Majority Leader Steny Hoyer touts Democrats' policies.


    Some of these efforts are likely to run into opposition on the floor of the House and in the Senate. The senior Republican on the House Appropriations Committee, Jerry Lewis (R-CA), for example, has opposed Dicks's commission, calling instead for “an in-depth review of the basic science” of climate change. Also displeased with the moves is presidential science adviser John Marburger, who says the government is already addressing the key questions and its “strong prioritization process” is fine as is.


    Seeking Agriculture's Ancient Roots

    Michael Balter

    As they pinpoint when and where many crops were first domesticated, researchers are painting a new picture of how—and perhaps why—humans began to change their relationship to plants

    Wheat's eye view.

    Crop plants adapted slowly to human cultivation, evolving on a timescale of millennia rather than centuries.


    JALÈS, FRANCE—In his lab in a 12th-century fortress that now houses the Archéorient research center here, archaeobotanist George Willcox pops the top off a plastic capsule filled with tiny black particles, spills them out into a petri dish, and puts the dish under a binocular microscope. Magnified 50 times, the particles leap into focus. They are charred fragments of wheat spikelets from a 10,500-year-old archaeological site in Turkey called Nevali Çori. Wheat spikelets are attached to the central stalk of the wheat ear and carry the seeds, or grain, that humans grind into flour. “Look at the scar at the lower end of the spikelet, where it has broken off,” Willcox says. The scar is jagged—a hallmark of domesticated wheat. It's a sign that the spikelet did not come off easily but detached only when harvested, so the plant probably needed human help to disperse its seeds. “This is the earliest evidence for domesticated wheat in the world.”

    Research field.

    George Willcox grows cereals for science at Jalès.


    Willcox spills the contents of a second capsule into another dish. The scars are round and smooth, showing that these spikelets easily detached and dispersed their stores of grain. “This is wild wheat, also from Nevali Çori,” he says. So in the earliest cultivated fields, wild and domesticated wheat grew in close proximity.

    The scarred spikelets under Willcox's microscope represent one simple, physical sign of a very complicated process: the rise of agriculture. Farming was revolutionary in its implications for humanity, providing the food surpluses that later fueled full-blown civilization, with all of its blessings and curses. Domestication—defined as the physical changes plants undergo as they adapt to human cultivation—was key to this transformation. It allowed former foragers to increasingly control when, where, and in what quantities food plants were grown rather than simply depending upon the vagaries of nature. And unlike other aspects of early agriculture, such as whether a seed was planted or simply gathered by human hands, “domestication is visible” in the archaeological record, says archaeologist Timothy Denham of Monash University in Clayton, Australia.

    Over the past decade, a string of high-profile papers has pinpointed the time and place of the first domestication of crops, ranging from wheat and maize to figs and chili peppers. Now researchers are beginning to fit all of these into a larger story of worldwide plant domestication.

    At Nevali Çori, where wild and domesticated plants grew in the same fields and perhaps even exchanged genes, Willcox and colleagues conclude that full domestication might have taken thousands of years rather than the 200 years or fewer that some archaeobotanists had predicted. “They could not have gone from one kind of economy to another in just a few generations,” Willcox says of the early cultivators. “These things happened gradually.”

    A decade or so ago, most archaeologists saw the advent of agriculture as an abrupt break with the hunting-and-gathering lifestyle on which hominids had relied for millions of years. Researchers thought that domesticated crops appeared very soon after people began to cultivate fields, first in the Near East as early as 13,000 years ago, then somewhat later in a handful of other regions.

    But the new data suggest that the road from gathering wild plants to cultivating them and finally domesticating them was long and winding (see chart), unfolding over many millennia. “If the agricultural revolution is supposed to be evidence for a punctuated change in human cultural evolution, it seems to have taken quite a long time to get to the punctuation point,” says archaeobiologist Melinda Zeder of the Smithsonian Institution in Washington, D.C. Douglas Kennett of the University of Oregon, Eugene, agrees. “Agriculture was not a revolution,” he says. “People were messing about with plants for a very long time.”

    Clues to how this slow transition took place are accumulating rapidly. An alliance of archaeologists and geneticists armed with new techniques for probing plant genomes and analyzing microscopic plant remains (see sidebar on p. 1834) has been tracing the route to farming in much closer detail. In the Near East, for example, researchers are finding that domestication itself happened a bit later than had been thought, although humans apparently cultivated wild cereals for thousands of years before plants showed physical changes. Meanwhile, new research in the Americas has pushed the dates for the first domestication of squash and other crops back to about 10,000 years ago, making the roots of farming in the New World almost as deep as those in the Old World.

    Moreover, new archaeological work shows that plants were domesticated independently in many parts of the globe. There is now convincing evidence for at least 10 such “centers of origin,” including Africa, southern India, and even New Guinea (see map). “All around the world, people took this very new step and started cultivating plants,” which led to their domestication, says Smithsonian archaeobotanist Dolores Piperno. The rush of new data could help eventually solve the puzzle of why agriculture arose in the first place—a riddle archaeologists have been trying to solve for nearly a century.

    Wild plants: The long goodbye

    In his writings about evolution, Charles Darwin argued that domestication was a clear example of selection in action. By cultivating plants—growing them deliberately—humans intentionally or unintentionally select certain traits. Today, researchers define domestication as the genetically determined physical and physiological changes a plant has undergone in response to human behavior. “Domestication is the result of genetic changes that have evolved because of cultivation,” explains archaeologist Dorian Fuller of the Institute of Archaeology at University College London (UCL).

    These alterations make up what botanists call the “domestication syndrome”: signs that plants have adapted to humans and that researchers eagerly seek at archaeological sites. In cereals such as wheat and barley, the syndrome includes the tendency for spikelets to stay on the stalk until they are harvested, as seen in the jaggedly scarred specimens found at Nevali Çori, plus larger seeds and a thinner seed coat that allows easier germination. (It also includes less visible traits, such as simultaneous flowering times.)

    Once humans began to cultivate plants, how long did domestication take? In 1990, the pendulum swung toward a rapid scenario after archaeobotanist Gordon Hillman of UCL and plant biologist Stuart Davies of Cardiff University in Wales plugged data from cultivation experiments into a computer model. They concluded that domestication might have occurred within 200 years and perhaps in as few as 20 to 30 years, assuming, as many archaeologists have, that early farmers used sickles to harvest their crops. Sickles presumably would have strongly selected for spikelets that stayed on the stalk until harvest, because those that dropped earlier would be lost and not replanted. “It was possible to put together a nice story, that agriculture appeared fairly abruptly,” says botanist Mark Nesbitt of the Royal Botanic Gardens, Kew, in Richmond, U.K.
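The harvest-selection logic behind such models can be sketched in a few lines. The following is a toy simulation, not Hillman and Davies's actual model: it tracks the fraction of non-shattering (domesticated-type) spikelets in the seed stock, assuming each harvest recovers every non-shattering spikelet but only some fraction of the shattered, wild-type ones, with that recovery fraction standing in for harvesting method.

```python
def generations_to_fixation(p0, recovered_wild_fraction, threshold=0.99):
    """Toy model: each harvest keeps every non-shattering spikelet but only
    a fraction of the shattering (wild) ones, and the harvest becomes the
    next season's seed stock. Returns generations until the non-shattering
    form dominates."""
    p, generations = p0, 0
    while p < threshold:
        # Wild spikelets are under-represented in the harvest by the
        # recovery fraction, so the domesticated share rises each cycle.
        p = p / (p + recovered_wild_fraction * (1 - p))
        generations += 1
    return generations

# Strong selection (sickle harvest loses most shattered spikelets): fast.
print(generations_to_fixation(0.01, 0.7))   # a few dozen generations
# Weak selection (nearly all shattered spikelets re-enter the seed stock): slow.
print(generations_to_fixation(0.01, 0.99))  # many hundreds of generations
```

Under strong selection the trait fixes within a few dozen generations, in the spirit of the 20-to-200-year estimates; under weak selection it takes many centuries, anticipating the slower scenarios discussed below.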

    Before long, however, new data began to raise doubts about this story. For example, at Jalès, Willcox and colleagues conducted experiments in a nearby field, cultivating wild varieties of wheat, barley, and rye to deduce how quickly domesticated forms might evolve. The answer: not very fast. No matter how researchers harvested the grains, a good portion of the easy-to-detach wild spikelets fell to the ground and germinated to sprout a new generation of wild wheat.

    Meanwhile, a remarkable discovery in Israel also suggested a long run-up to domestication. In 1989, a team led by Dani Nadel of the University of Haifa in Israel began excavating a site called Ohalo II on the southwest shore of the Sea of Galilee. The site was radiocarbon-dated to 23,000 years ago, when the last Ice Age was still in full frost and at least 10,000 years before the earliest domesticated plants. Excavators found the remains of huts, plus a burial and several hearths. More than 90,000 individual plant remains were recovered, including acorns, pistachios, wild olives, and lots of wild wheat and barley. But “there is not a single domesticated species at this site,” says team member Ehud Weiss of Bar-Ilan University in Ramat Gan, Israel, nor any evidence that the people of Ohalo II were cultivating the cereals rather than just gathering them.

    To their surprise, however, the researchers, in collaboration with Piperno, found microscopic remains of barley and possibly wheat on a large stone implement. They concluded that the inhabitants of Ohalo II had ground the grains to make flour and possibly also baked dough in one of the ovenlike hearths.

    “Ohalo II is an important warning to archaeologists,” Fuller says. “We need to abandon some of our long-held assumptions that as soon as people began to use cereals, they would begin to [cultivate and] domesticate them.”

    More recently, some researchers have begun taking a second look at just when domesticated plants first showed up in the Near East. For decades, excavators had pegged this transformation to an archaeological period that began about 11,800 years ago and is marked by the first permanently settled villages. There were a few claims for even earlier dates, such as some relatively large rye seeds at Abu Hureyra in Syria, dated to about 13,000 years ago, which Hillman argued were domesticated. But in a 2002 survey, Nesbitt found that the earliest Near Eastern villages lacked definitive evidence of domesticated cereals, although wild plants were plentiful. Unambiguous signs of domestication didn't turn up until about 10,500 years ago, in larger settlements with different architecture and a much more complex social organization, he concluded.

    All in the family.

    Maize and its wild ancestor teosinte (left) are closely related despite their differences.


    “There is no current evidence for domesticated plants in the [first settled villages],” Weiss agrees. “But it was probably a very energetic period, when people all across the region were playing with cultivation of wild plants.” And once plants were domesticated, making farming more efficient and intensive, this way of life apparently exploded across the Near East, as large farming villages sprang up like mushrooms and people quickly formed trade and communication networks over the entire region.

    The notion of a long run-up to domestication also gets support from new findings by Willcox and archaeobotanist Ken-ichi Tanno of the Research Institute for Humanity and Nature in Kyoto, Japan. They examined charred wheat spikelets from four sites of different ages in Syria and Turkey. There was a clear trend over nearly 3000 years: Earlier sites had fewer domesticated spikelets and later sites had more. At 10,500-year-old Nevali Çori, only about 10% of the spikelets were clearly domesticated, whereas 36% were domesticated at 8500-year-old el-Kerkh in Syria and 64% at 7500-year-old Kosak Shamali, also in Syria, Willcox and Tanno reported last year in Science (31 March 2006, p. 1886). These results suggest that wild varieties were only gradually replaced by domesticated ones, they say.
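As a back-of-envelope check on that trend, the three reported percentages imply a replacement rate of tens of percentage points per millennium (assuming, purely for illustration, a linear trend the authors do not claim):

```python
# (site, approximate age in years before present, % clearly domesticated spikelets)
sites = [("Nevali Cori",  10500, 10),
         ("el-Kerkh",      8500, 36),
         ("Kosak Shamali", 7500, 64)]

span_millennia = (sites[0][1] - sites[-1][1]) / 1000  # 3 millennia
rise_points = sites[-1][2] - sites[0][2]              # 54 percentage points
rate = rise_points / span_millennia
print(rate)  # 18.0 percentage points per millennium
```

A rate that slow is consistent with wild varieties persisting in cultivated fields for dozens of human generations.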

    “Domestication was the culmination of a lengthy process in which plants were cultivated but retained their wild phenotypes,” says geneticist Terry Brown of the University of Manchester in the U.K. “Early farmers were receiving the benefits of agriculture long before domestication evolved.” Even Hillman says that he is “very impressed” with the analysis, although it contradicts his previous work: “[Domestication] probably did take this long.”

    But why? Fuller, in an article earlier this year in the Annals of Botany, suggests that humans may have exerted weak rather than strong selection pressure on their crops. “Weaker selection means domestication would take longer, while stronger selection means it would happen more quickly,” he explains.

    And there are many ways that early farmers' behavior might have weakened selection. For example, Fuller questioned whether sickles were actually used in early harvesting. Other methods, such as picking already-fallen spikelets from the ground, would not have selected for spikelets that stay on the stalk. Although sickles date as far back as 15,000 years ago, no domesticated plants show up before 10,500 years ago. So the first sickles may have been used for other tasks, such as cutting reeds for floor matting, rather than harvesting grains, Fuller argued.

    Willcox favors an alternative explanation: During hard years, early farmers replenished their seed stocks with wild varieties, thus slowing domestication. Only when farmers began planting domesticated plants farther from the wild stands—physically and genetically isolating them from their wild ancestors—did the process speed up, he says. Reproductive isolation of domesticated and wild plants could have acted as a “trigger,” agrees Manchester's Brown, spurring increasing proportions of domesticates as farming spread across the Near East. Eventually, says Weiss, sowing, tilling, and harvesting “create[d] these artificial environments that lead to domestication. … It meant totally new ideas and a totally new way of life.”

    Multiple birth.

    People in many different parts of the world independently began to cultivate and eventually domesticate plants.


    New World, new paradigm

    At the same time that archaeologists are concluding that Old World crops were fully domesticated a little later than once thought, recent discoveries are pushing domestication in the New World back, way back. Not so long ago, researchers saw little evidence for farming of crops such as squash, maize, and manioc before about 5000 years ago. “Some archaeologists thought little of importance had taken place in these tropical forests,” Piperno says. “We didn't have the data.” Researchers now have new methods to identify microscopic bits of poorly preserved tropical plants, and genetic studies can date when domesticated lineages split from wild ancestors.


    A 23,000-year-old wheat fragment from Ohalo II.


    “We were misled by what was not preserved and what we could not see,” says anthropologist Tom Dillehay of Vanderbilt University in Nashville, Tennessee. “These people had a very sophisticated knowledge of the plants that were out there.”

    Archaeologists began to see more clearly back in 1997, when the Smithsonian's Bruce Smith radiocarbon-dated domesticated seeds and other pepo squash fragments from a cave near Oaxaca, Mexico, to nearly 10,000 years ago (Science, 9 May 1997, pp. 894 and 932). The signs of domestication were clear: The seeds were larger and the stems and rinds thicker than those of closely related wild squash that still grows in the region; indeed, the fragments found were identical to today's domesticated pepo squash. Since then, earlier dates have steadily accumulated for the domestication of nearly every New World crop. Piperno's team has dated starch grains from domesticated manioc, arrowroot, and maize on milling stones in Panama to up to 7800 years old, and other Panamanian sites have yielded dates for these crops that are nearly as early.

    This week, on page 1890 of this issue of Science, a team led by Dillehay reports 10,000-year-old squash and 8500-year-old peanuts on the floors and hearths of houses made of stone and reeds in the Andes Mountains of Peru. Genetic studies and the distribution of possible wild ancestors suggest that these crops were probably domesticated elsewhere, in South America's lowland tropical forests. So these very ancient dates show how quickly domesticated crops spread from their original centers of origin, the team concludes. But identifying domestication is not always easy: Smith questions whether Dillehay's evidence proves that squash, peanuts, and other plants had actually undergone “any of the genetic or morphological markers of domestication.”

    All the same, the flurry of early dates in the New World is “remarkable,” says ethnobotanist Eve Emshwiller of the University of Wisconsin, Madison, because the first domesticates appear not too long after humans colonized the Americas, at least 13,000 years ago. That's a contrast to the Old World, where people lived for tens of thousands of years before domesticating plants. Dillehay agrees: “People between 13,000 and 10,000 years ago were adapting to [changing climatic conditions] more favorably than we had thought before.”

    Genetic data support the early dates, too. For example, John Doebley of the University of Wisconsin, Madison, genotyped numerous specimens of that New World staple, maize, and its wild ancestor, teosinte. From the number of genetic changes between teosinte and maize, and the likely speed of the “molecular clock,” Doebley's team concluded in a paper published in the Proceedings of the National Academy of Sciences (PNAS) in 2002 that maize was domesticated about 9000 years ago. And they found that maize was probably domesticated only once, in the Balsas River Valley of southern Mexico.
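The molecular-clock arithmetic behind such estimates is simple in outline: divergence time is the per-site genetic difference divided by twice the substitution rate, since differences accumulate along both diverging lineages. The numbers below are illustrative placeholders chosen to land near 9000 years, not Doebley's actual data or calibration.

```python
def divergence_years(per_site_divergence, subs_per_site_per_year):
    # Differences accumulate on both branches since the split,
    # hence the factor of two in the denominator.
    return per_site_divergence / (2 * subs_per_site_per_year)

# Illustrative inputs only (hypothetical divergence and rate):
print(round(divergence_years(1.3e-4, 7e-9)))  # ~9300 years
```

In practice the clock must be calibrated against known divergences, and rate uncertainty dominates the error bars.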

    In an astonishing stream of studies, Doebley and other researchers have also taken a detailed look at the genetic changes underpinning maize domestication. The transformation of teosinte to maize was dramatic; the two plants look so different that researchers once doubted their relationship. Teosinte is multistalked, and its ears carry only five to 12 kernels, whereas the ears of single-stalked maize carry 500 or more. A tough casing also protects teosinte kernels, whereas maize kernels are “naked” and accessible to humans. Indeed, some archaeologists have suggested that the unappetizing teosinte was first domesticated to make alcoholic drinks from its sugary stalks rather than for the dinner table.

    Maize domestication genes include tb1, which controls the number of stalks, pbf, which controls protein storage in the kernel, and su1, which affects starch storage. Recently, Doebley teamed up with ancient DNA specialists to track changes in these genes in ancient maize, using 11 maize cobs from Mexico and New Mexico dated from 5000 to about 600 years ago. The domesticated variants of tb1 and pbf were present in all the ancient DNA samples, and all the Mexican cobs had the domesticated variant of the su1 gene. But 1900-year-old cobs from New Mexico showed a mix of wild and domesticated variants, the team reported in Science (14 November 2003, p. 1158).

    If the domesticated variant of su1—which may give corn the properties necessary for making good tortillas—was not widespread in maize populations until much later, then domestication might have taken place over an extended period, the team concluded. “There must be several stages to genetic domestication of plants,” says Manchester's Brown.

    Doebley's work has spurred archaeologists to try to keep up. His finding that maize was domesticated 9000 years ago in Mexico's Balsas River region inspired Piperno's international team to comb the valleys in search of confirmation, for example. In the 30 May online edition of PNAS, they reported preliminary evidence that domesticated squash and maize were grown on ancient lakesides probably by 8500 years ago, although the dates are not yet confirmed. “We think that before long we will be able to push the archaeological dates back to match the genetic data,” says Piperno.

    Yet even if people in the New World were domesticating plants early, they did not necessarily become full-fledged farmers right away, some archaeologists argue. “The first plant domestication was 10,000 years ago, but the development of village-based agricultural economies did not happen until more than 5000 years later,” says Smith. In a 2001 paper in the Journal of Archaeological Research, Smith argued that in many parts of the world initial plant domestication was followed by a long period of “low-level food production,” during which prehistoric peoples continued to hunt and gather while slowly adding already domesticated crops to their diet.

    “Domestication of a plant is one thing, and fully adopting it is another,” agrees Dillehay. But he argues that his new evidence from the Peruvian Andes, which includes houses, may indicate that both settled village life and farming economies arose earlier than researchers thought, at least in some parts of the Americas. Piperno agrees that the work of Dillehay and others may now be providing the “missing evidence” to fill at least some of that 5000-year gap.

    Tell me why

    Back in the 1950s, many archaeologists thought agriculture was born in only two places: the Near East and the Americas. From these two fountainheads of farming, the story went, agriculture spread throughout the world. Yet archaeologists now recognize at least 10 independent centers, and even regions once thought to be agricultural backwaters have taken on a new importance. In 2003, a team led by Monash's Denham clinched the case that bananas, taro, and yams were independently domesticated in New Guinea nearly 7000 years ago (Science, 11 July 2003, p. 180).

    So if domestication happened repeatedly, what sparked this new relationship between people and plants? Researchers have pondered the question since the 1920s, when Australian prehistorian V. Gordon Childe pegged the rise of farming to dramatic climatic changes now known to have taken place around 11,500 years ago. That's when the last Ice Age ended and the Pleistocene period gave way to the much milder Holocene—the geological epoch in which we live today, with a warmer, wetter, and more stable climate.

    Childe's hypothesis sparked a lot of research. But since his day researchers have swung back and forth between environmental explanations and those that focus more on social changes within increasingly sedentary communities of hunters and gatherers. All the same, most archaeologists agree that the origins of agriculture have something to do with the broader transition from the Pleistocene to the Holocene. “I am comfortable seeing this climate change as a precondition for agriculture,” says the Smithsonian's Smith. But he points out that it can't be the sole explanation for the rise of farming in regions such as eastern North America, where squash and several other crops were domesticated only about 5000 years ago.

    Some researchers correlate the origins of farming not with the early Holocene but with a late Pleistocene global cold snap called the Younger Dryas, which hit about 13,000 years ago and sharply reversed warming trends for more than a millennium. This hypothesis was prompted by excavations at Abu Hureyra in Syria's Euphrates Valley, led by British archaeologist Andrew Moore, now at the Rochester Institute of Technology in New York. Abu Hureyra was first occupied by hunter-gatherers about 13,500 years ago and later by early farmers, providing a rare window on the transition to agriculture. UCL's Hillman, who analyzed the plant remains, suggested that the Younger Dryas had a devastating effect on the availability of the wild cereals and other plants at the site. Hunter-gatherers eventually disappeared, and a short time later possible first evidence of farming—larger grains of rye—shows up. Hillman and Moore proposed that the region's hunter-gatherers invented agriculture to solve food shortages brought on by the cold climate.

    Plant domestication: increasing dependence on cultivars for food, decreasing dependence on wild plants.


    “Hillman's evidence is convincing,” at least for the Near East, says Piperno. “The Younger Dryas may have been some kind of trigger.” The worldwide invention of agriculture, Piperno adds, suggests “that there must have been a common set of underlying factors.”

    But not everyone is persuaded by Hillman's case for rye domestication. And after its possible appearance at Abu Hureyra, domesticated rye doesn't show up for thousands of years anywhere in the Near East. Even if the Younger Dryas can explain the sequence of events at Abu Hureyra, it hasn't been shown to spur farming in other regions, says David Harris of the Institute of Archaeology in London. Willcox, in a 2005 review of Near East farming in the journal Vegetation History and Archaeobotany, argued that agriculture did not really catch on until after the Younger Dryas was over and the Holocene, with its more stable climatic conditions, had begun.

    Indeed, the agricultural lifestyle might have been “impossible” during the glacial conditions of the Pleistocene but “mandatory” during the Holocene, argued ecologist Peter Richerson of the University of California, Davis, and his colleagues in a 2001 paper in American Antiquity. One explanation: Dramatically lower carbon dioxide levels during the Pleistocene might have made farming untenable, a hypothesis first proposed back in 1995 by botanist Rowan Sage of the University of Toronto. Crops grow better at higher ambient CO2 levels. As the Holocene began, CO2 levels rose by roughly 50%, from 180 parts per million to 280 ppm in just a few thousand years, according to polar ice-core records. “This would have had a big effect on photosynthesis and plant productivity,” Richerson says.

    The Pleistocene-Holocene transition might also have affected decisions about what to eat. Recently, Piperno, Denham, Kennett, and others have been studying the choices humans make, borrowing methods from optimal foraging theory, a Darwinian approach that assumes humans and other animals pursue the most advantageous strategy for getting food. In a recent study, Piperno looked at the lowland tropics of the New World, as forests expanded into once-open areas. Based on the changing availability of both plants and animals, she calculated that farming would have been more advantageous than foraging right around the time that the first domesticated crops appear, about 10,000 years ago.

    But some archaeologists think that too much emphasis on environmental explanations gives short shrift to the less easily testable social and symbolic aspects of human behavior. “We have tended to leave these aspects out and focused on an economic paradigm,” says archaeologist Joy McCorriston of Ohio State University in Columbus.

    In the 1980s, for example, the late French prehistorian Jacques Cauvin, who founded the Jalès center, proposed that in the Near East a rise of religious symbolism changed the relationship between people and nature and made farming possible. More recently, archaeologist Brian Hayden of Simon Fraser University in Burnaby, Canada, argued that farming had been invented by ambitious hunter-gatherers seeking greater prestige and wealth within their communities.

    As ideas are batted back and forth, some doubt that a global explanation for agriculture will be found. “We are all thrashing around, trying to find an explanation for something that is worldwide,” says archaeologist Graeme Barker of the University of Cambridge in the U.K. “It is far too simplistic.” But that won't stop researchers from trying. Says Kennett: “The transition to agriculture is one of the central questions in archaeology. We need to understand it.”


    Starch Reveals Crop Identities

    1. Michael Balter

    Starch grains identify manioc (top) and maize (bottom).


    Until very recently, archaeologists searching for the first domesticated forms of tropical plants such as yams, manioc, and bananas just kept on looking. The humid tropical environments in which these plants grow destroyed evidence of their existence, leaving archaeologists with “patchy and speculative” accounts of their domestication, says archaeobotanist Andrew Fairbairn of the University of Queensland in Brisbane, Australia.

    Then in the mid-1990s, archaeologists realized the potential of starch grain analysis, a technique used for more than a century by botanists to identify modern plants. Plants manufacture and store starches in microscopic organelles called amyloplasts. Both the size of the amyloplasts and the pattern of starch deposition vary from plant to plant, often making it possible to distinguish species. “This methodology makes things visible that were previously invisible,” says archaeobotanist Linda Perry of the Smithsonian Institution in Washington, D.C. That new visibility has pushed back the dates of domestication for a number of tropical crops, including squash, manioc, and chili peppers (see main text). When Perry and her colleagues went looking for chili pepper starch grains in Central and South America, for example, they found them seemingly everywhere: in sediments, on milling stones and stone tools, and on pottery shards. The oldest date back to 6100 years ago.

    What's more, in some plants—although not all—starch grains of wild and domesticated strains are distinct. For example, starch grains of wild chili peppers are 5 to 6 micrometers long, whereas the domesticated versions are a whopping 20 micrometers. The method is now used to identify everything from bananas to maize to wild barley and has “breathed new life into the investigation of early agriculture,” says Timothy Denham of Monash University in Clayton, Australia.
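With a size gap that wide, discrimination is almost trivial in principle. A toy classifier, using a hypothetical midpoint cutoff rather than any published criterion, might look like this:

```python
def classify_chili_starch(grain_length_um, cutoff_um=12.0):
    """Toy discriminator: wild chili starch grains run 5 to 6 micrometers,
    domesticated ones about 20. The 12-micrometer cutoff is an illustrative
    midpoint, not a published criterion."""
    return "domesticated" if grain_length_um >= cutoff_um else "wild"

print(classify_chili_starch(5.5))   # wild
print(classify_chili_starch(20.0))  # domesticated
```

Real identifications also weigh grain shape and deposition pattern, since size ranges overlap in many taxa.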


    Relative Differences: The Myth of 1%

    1. Jon Cohen

    Genomewise, humans and chimpanzees are quite similar, but studies are showing that they are not as similar as many tend to believe

    In a groundbreaking 1975 paper published in Science, evolutionary biologist Allan Wilson of the University of California (UC), Berkeley, and his erstwhile graduate student Mary-Claire King made a convincing argument for a 1% genetic difference between humans and chimpanzees. “At the time, that was heretical,” says King, now a medical geneticist at the University of Washington, Seattle. Subsequent studies bore their conclusion out, and today we take as a given that the two species are genetically 99% the same.

    But truth be told, Wilson and King also noted that the 1% difference wasn't the whole story. They predicted that there must be profound differences outside genes—they focused on gene regulation—to account for the anatomical and behavioral disparities between our knuckle-dragging cousins and us. Several recent studies have proven them perspicacious again, raising the question of whether the 1% truism should be retired.

    “For many, many years, the 1% difference served us well because it was underappreciated how similar we were,” says Pascal Gagneux, a zoologist at UC San Diego. “Now it's totally clear that it's more a hindrance for understanding than a help.”

    Using novel yardsticks and the flood of sequence data now available for several species, researchers have uncovered a wide range of genomic features that may help explain why we walk upright and have bigger brains—and why chimps remain resistant to AIDS and rarely miscarry. Researchers are finding that on top of the 1% distinction, chunks of missing DNA, extra genes, altered connections in gene networks, and the very structure of chromosomes confound any quantification of “humanness” versus “chimpness.” “There isn't one single way to express the genetic distance between two complicated living organisms,” Gagneux adds.

    When King and the rest of the researchers in the Chimpanzee Sequencing and Analysis Consortium first detailed the genome of our closest relative in 2005, they simultaneously provided the best validation yet of the 1% figure and the most dramatic evidence of its limitations. The consortium researchers aligned 2.4 billion bases from each species and came up with a 1.23% difference. However, as the chimpanzee consortium noted, the figure reflects only base substitutions, not the many stretches of DNA that have been inserted or deleted in the genomes. The chimp consortium calculated that these “indels,” which can disrupt genes and cause serious diseases such as cystic fibrosis, alone accounted for about a 3% additional difference (Science, 2 September 2005, p. 1468).
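The two kinds of difference, base substitutions versus indels, can be counted separately from a pairwise alignment. A minimal sketch, using toy sequences with `-` marking gaps:

```python
def alignment_differences(seq_a, seq_b):
    """Count substitutions and indel positions in a pairwise alignment
    ('-' marks a gap). These are the two tallies behind the 1.23% and
    ~3% figures, here applied to toy data."""
    subs = sum(1 for a, b in zip(seq_a, seq_b)
               if a != b and a != '-' and b != '-')
    indels = sum(1 for a, b in zip(seq_a, seq_b) if a == '-' or b == '-')
    return subs, indels

# Toy aligned fragment: one substitution, two gap positions.
print(alignment_differences("ACGT-ACGTA", "ACCTTACG-A"))  # (1, 2)
```

Note that a single multibase insertion counts as one mutational event but many differing positions, one reason substitution and indel percentages cannot simply be summed into one "true" figure.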

    The 6.4% difference.

    Throughout evolution, the gain (+) in the number of copies of some genes and the loss (-) of others have contributed to human-chimp differences.


    Entire genes are also routinely and randomly duplicated or lost, further distinguishing humans from chimps. A team led by Matthew Hahn, who does computational genomics at Indiana University, Bloomington, has assessed gene gain and loss in the mouse, rat, dog, chimpanzee, and human genomes. In the December 2006 issue of PLoS ONE, Hahn and co-workers reported that human and chimpanzee gene copy numbers differ by a whopping 6.4%, concluding that “gene duplication and loss may have played a greater role than nucleotide substitution in the evolution of uniquely human phenotypes and certainly a greater role than has been widely appreciated.”

    Yet it remains a daunting task to link genotype to phenotype. Many, if not most, of the 35 million base-pair changes, 5 million indels in each species, and 689 extra genes in humans may have no functional meaning. “To sort out the differences that matter from the ones that don't is really difficult,” says David Haussler, a biomolecular engineer at UC Santa Cruz, who has identified novel elements in the human genome that appear to regulate genes (Science, 29 September 2006, p. 1908).
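
    The divergence measures quoted above can be laid side by side in a quick sketch. The constants are the article's rounded figures, and the point, as the researchers stress, is that the measures count different things and resist being collapsed into one number:

```python
# Side-by-side view of the human-chimp divergence measures quoted above.
# All inputs are the article's rounded figures, not exact counts.

ALIGNED_BASES = 2.4e9     # bases aligned between the two genomes
SUBSTITUTIONS = 35e6      # single-base differences
INDEL_FRACTION = 0.03     # additional sequence affected by insertions/deletions
COPY_NUMBER_DIFF = 0.064  # genes differing in copy number (Hahn et al.)

substitution_rate = SUBSTITUTIONS / ALIGNED_BASES
print(f"substitutions: {substitution_rate:.2%} of aligned bases")
print(f"indels:        ~{INDEL_FRACTION:.0%} additional sequence")
print(f"gene copies:   {COPY_NUMBER_DIFF:.1%} of genes differ in number")

# The three measures count different things (bases vs. genes), so they
# cannot simply be added into a single "percent different" figure.
```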

    Daniel Geschwind, a neuroscientist at UC Los Angeles (UCLA), has taken a stab at figuring out what matters by applying systems biology to quantifying and analyzing genetic differences between human and chimpanzee brains. Working with his graduate student Michael Oldham and UCLA biostatistician Steve Horvath, Geschwind compared which of 4000 genes were turned on at the same time, or “coexpressed,” in specific regions of the dissected brains.

    With these data, they built gene networks for each species. “A gene's position in a network has huge implications,” Geschwind says. Genes that are coexpressed most frequently with other genes have the most functional relevance, he argues.

    Geschwind and his colleagues clustered the networks into seven modules that correspond to various brain regions, such as the cortex. Comparisons of the map of each cluster's network in each species plainly showed that certain connections exist in humans but not chimps. In the cortex, for example, 17.4% of the connections were specific to humans, Geschwind and co-workers reported in the 21 November 2006 Proceedings of the National Academy of Sciences. Although the differences don't immediately reveal why, say, humans get Alzheimer's and chimps don't, the maps clearly organize and prioritize differences. “It really brings the critical hypotheses into strong relief,” says Geschwind.

    Could researchers combine all of what's known and come up with a precise percentage difference between humans and chimpanzees? “I don't think there's any way to calculate a number,” says geneticist Svante Pääbo, a chimp consortium member based at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. “In the end, it's a political and social and cultural thing about how we see our differences.”


    Turning Ocean Water Into Rain

    1. Yudhijit Bhattacharjee

    A novel technology may end the curse of bad drinking water on remote Indian islands—and offer an alternative method of desalination for mainland communities, too

    Limitless resource.

    The Kavaratti plant uses the ocean's temperature gradient to produce fresh water.


    KAVARATTI, INDIA—With its coconut palms and white-sand beaches, this coral island in the Arabian Sea seems like a tropical paradise—until you taste the water. For decades, the 11,000 people of Kavaratti have had to drink the brackish water from their wells, supplemented by a modest supply of monsoon rainwater. Now, however, the islanders are quenching their thirst with fresh water distilled from the turquoise expanse that surrounds them—thanks to a novel desalination method that's being held up as a model solution for water shortages along India's teeming mainland coast.

    Most desalination plants either boil seawater and then condense the vapors (thermal distillation) or pump seawater at high pressure across a salt-retaining membrane (reverse osmosis). Both methods are energy-intensive and expensive to maintain. But the plant at Kavaratti, part of the Lakshadweep archipelago, is exploiting a third strategy that has been known for half a century but rarely implemented: using the ocean's own thermal energy to desalinate water.

    The concept is simple. Water at the ocean's surface is warm, with a temperature that's typically between 26° and 30°C in the tropics. At a depth of 350 meters, it drops to a chilly 13°C or so. At the plant, surface water is pumped into an onshore vacuum chamber where the low pressure causes some of the water to vaporize. In another chamber, cold water drawn from the depths condenses the vapor into fresh water. “We are simply mimicking how nature makes rain,” says S. Kathiroli, director of the National Institute of Ocean Technology (NIOT) in Chennai, which built the plant.

    Known as low-temperature thermal desalination (LTTD), the technology is an offshoot of a more ambitious idea: to convert the ocean's thermal energy into electricity, first proposed by French physicist Jacques d'Arsonval in 1881. Competition from cheaper energy sources has prevented ocean thermal energy conversion from taking off, although experimental plants in Hawaii and Japan have shown that the concept works. LTTD has fared better—a plant in Italy operated commercially during the 1990s—but the technology has largely remained on the margins.

    Thirsty archipelago.

    Indian officials plan to build desalination plants on each of Lakshadweep's islands.

    The Indian venture is a bold attempt to bring thermal-driven desalination into the mainstream by massively multiplying production. NIOT admits that the year-old Kavaratti plant, which produces 100,000 liters of fresh water a day, is not as energy-efficient as rival technologies: It consumes 30% more energy per unit water than a reverse-osmosis plant, for instance. But scaling up the technology 100-fold, officials believe, will unlock its potential.

    To test that idea, NIOT has built a plant with a capacity of 1 million liters per day on a floating barge 40 kilometers off the coast of Chennai, on the opposite coast of India. Last month, NIOT engineers completed a 60-day trial of the plant, giving away drums of fresh water to passing ships. The institute is now inviting investors to help ratchet up the operation to 10 million liters a day by installing more condensers and evaporation chambers, which officials say would halve the cost to less than $1 per 1000 liters. That would be 25% cheaper than seawater desalination using reverse osmosis, says Kathiroli. There's a lower environmental cost too, he points out: Concentrated brine left over from reverse osmosis is often flushed back into the ocean to the detriment of local marine organisms.

    Experts in India and abroad are watching the project closely. “It's a strategy worth pursuing,” says Luis Vega, who designed an ocean thermal energy plant that produced electricity and desalinated water for the Natural Energy Laboratory of Hawaii Authority in the 1990s. But Vega doubts that scaling up will reduce costs much. Jayanta Bandyopadhyay, a water-policy expert at the Indian Institute of Management in Kolkata, says the government is right to experiment with desalination but must also invest more in low-tech solutions such as rainwater harvesting.

    When NIOT researchers began working on ocean thermal energy a decade ago, electricity, not drinking water, was the prize they were after. But after multiple failed attempts to install a deepwater pipe to draw cold water from a few hundred meters below the surface, the government pulled the plug in 2003. Kathiroli, who took over as NIOT director the following year, revived the project with the simpler target of desalination. This requires a smaller temperature differential than the 20°C needed to make electricity, so water can be drawn from a shallower, more manageable depth. “We were driven by our ego,” says Kathiroli. “We wanted to show that we could do it.”

    The government approved the proposal, and after completing a pilot project, NIOT engineers in 2005 began building the Kavaratti plant. The island's steep bathymetry, with the seabed plunging several hundred meters a short distance from shore, made it possible to access deep water without venturing far from land.

    Since coming online in late 2005, the plant has pumped fresh water to a network of public taps for 2 hours every morning and evening. Islanders say they now use groundwater—which many have been drinking all their lives—only for washing and cleaning. “This water tastes better, and food cooked in it tastes better too,” says M. Qasim, a schoolteacher.

    Another benefit has been the prevention of waterborne diseases, once rampant on Kavaratti because of the many septic tanks near the shallow water table. P. S. Ashraf, superintendent of the island's only hospital, says he and his colleagues have witnessed around 50% fewer diarrhea and dysentery cases since the plant was commissioned.

    Buoyed by the success, officials plan to build similar plants on Lakshadweep's 10 other islands. They expect that the Kavaratti experience will help make the new plants more cost-effective. “We are confident of streamlining the process considerably,” says NIOT engineer Purnima Jalihal.

    Although thermally driven desalination may be a good option for islands, it must pass a bigger economic test on the mainland, where the coast's gradual slope requires going several kilometers offshore to access deep water. NIOT's barge plant near Chennai will have to compete with a reverse-osmosis plant with a 100-million-liter capacity being built nearby onshore by a Spanish waterworks company, Befesa. The Chennai plant will have the added expense of transporting fresh water from the barge to the mainland, says Ravi Bondada, a business manager for Befesa in Chennai: “They are making a good attempt, but the economics will have to be proved.”

    Kathiroli agrees that the government should continue to pursue conservation strategies such as better river management and improved rainwater harvesting. Nevertheless, he emphasizes that the need for fresh water is enormous; the shortfall for Chennai alone is 300 million liters a day. Hopes for ocean thermal technology are running high because it is young: “Reverse osmosis has been fine-tuned for 40 years or more,” Kathiroli says. “We are just starting out.”


    A Spare Magnet, a Borrowed Laser, and One Quick Shot at Glory

    1. Adrian Cho

    Using equipment they have on hand, a small band of physicists hopes to confirm the existence of a new particle—by shining a laser through a wall


    Wester (left) and Chou say their experiment offers high potential payoff at low cost.


    BATAVIA, ILLINOIS—In particle physics, the quintessential big science, $30,000 usually doesn't buy you much. For example, the mammoth Large Hadron Collider (LHC) under construction at the European particle physics laboratory, CERN, near Geneva, Switzerland, costs a staggering $3.8 billion, and each of the atom smasher's 1232 main steering magnets costs 1 million Swiss francs ($800,000). But for less than 0.001% of the cost of the LHC, a tiny team here at Fermi National Accelerator Laboratory (Fermilab) hopes to pull off an experiment that could clinch a discovery as revolutionary as any the LHC might make.

    It's a long shot, to be sure. If it works, the odd little experiment would prove the existence of an unexpected new particle first hinted at by an experiment known as PVLAS at Legnaro National Laboratory of Italy's National Institute for Nuclear Physics (Science, 17 March 2006, p. 1535). But even the members of the Fermilab team themselves suspect that, instead of evidence of new physics, the PVLAS signal is some sort of experimental artifact, a mysterious hiccup in the machinery masquerading as a particle.

    Still, testing the dubious result is so easy—and, if it's real, so potentially revolutionary—that experimenters around the world are straining to do it first. To confirm the PVLAS result, all they have to do is shine a laser through a solid wall—in the Fermilab experiment, a high-tech mirror. That's something you can attempt with spare parts and a little help from your friends. Half a dozen groups are racing to perform the experiment right now.

    “It's an easy test, and it's not often that people in high-energy physics have a chance to work on such a small experiment,” says Aaron Chou, a postdoc at Fermilab and co-leader of the 11-member team. “For me, it's great fun.” David Christian, head of Fermilab's experimental physics projects department, says the project was an easy sell to lab officials. “When the potential payoff is big and the time and effort and cost are all small, it's easy to say 'Go ahead.'” Still, Christian adds, “Almost for sure, [the PVLAS] result is wrong.”

    Physicists won't be certain until they train lasers on walls, however. Any light leaking through would confirm a particle transformation suggested by the strange PVLAS results. To probe the electromagnetic properties of empty space, PVLAS researchers shined a polarized laser beam down a long vacuum pipe. Perpendicular to the pipe, they applied a strong magnetic field. To their surprise, they found that the polarization rotated as the light passed through the magnetic field.

    That twisting could be explained if photons were turning into some new type of uncharged particle. Suppose the laser light entered the magnet with its polarization canted just slightly from the direction of the field. In that case, the laser beam can be thought of as many photons polarized parallel to the field and a few polarized perpendicular to it. If some of those polarized parallel to the field interact with it and turn into particles, then the ratio of perpendicular to parallel photons would increase and the polarization of the light would rotate slightly away from the field.

    The PVLAS data suggest that a photon turns into a particle only 1/500,000,000 as massive as the electron. The observation does not prove that the particle exists, emphasizes Giovanni Cantatore, a physicist at the University of Trieste in Italy and spokesperson for PVLAS. “We took special care not only to not say that [it does] but also to not give the impression that we were saying that,” he says. Nevertheless, the result has piqued physicists' interest, in part because the putative particle resembles the long-sought axion. Invented to smooth over conceptual problems in the theory of the strong nuclear force and a candidate for the mysterious “dark matter” that makes up 85% of the matter in the universe, the axion should emerge from photons in the same way (Science, 11 April 1997, p. 200).

    The particle cannot be the axion, however, because photons appear to turn into it far too readily. In fact, according to the PVLAS results, photons change so rapidly that particles should gush from stars and drain them of their energy, says Pasquale Serpico, a theorist at Fermilab. “The sun would burn out in 1000 years, and we have historical evidence that that's not the case,” he says. It's possible that photons change to particles more slowly in the innards of the sun, Serpico says. But that's a conceptual Band-Aid some might find off-putting.

    To prove the particles exist, experimenters must run the process backward and convert the particles back into light. And that's exactly what Fermilab's Chou, William Wester, and nine of their colleagues hope to do in an experiment they've dubbed GammeV. Like viewers of a cooking show following along at home, the researchers are clearly making do with what they have handy. An extra superconducting magnet from Fermilab's Tevatron collider supplies the field. A laser borrowed from elsewhere in the lab will crank out the photons. The whole experiment will be controlled by a circuit board that Fermilab designed and distributes to high-school teachers to run small cosmic-ray experiments and other demonstrations.

    The experimenters will shine the laser down a vacuum pipe running through the magnet. If some of the photons change into particles, they'll pass through the wall—a mirror that sits within the magnet—and emerge into a magnetic field on the other side. The magnetic field should not only change photons into particles but also eventually change particles back into photons. So a few of the particles that sail through the wall would turn back into photons on the other side, and the team hopes to detect them, one at a time, with a very sensitive phototube—the only piece of expensive new equipment the team has purchased. The laser will blast 100 million billion photons into the mirror 20 times a second in each 10-hour run, Wester says. “If the configuration of the apparatus is ideal, then at the end of that 10 hours we'd have 50 regenerated photons,” Wester says. And that would be plenty to clinch the case for the particle.
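
    The photon bookkeeping behind that estimate is easy to reproduce. This is an order-of-magnitude sketch using the article's round numbers, not the collaboration's own calculation:

```python
# Rough photon bookkeeping for a GammeV-style 10-hour run,
# using the round numbers quoted in the text.

PHOTONS_PER_PULSE = 1e17   # "100 million billion" photons per pulse
PULSES_PER_SECOND = 20
RUN_SECONDS = 10 * 3600    # one 10-hour run

total_photons = PHOTONS_PER_PULSE * PULSES_PER_SECOND * RUN_SECONDS
print(f"photons on the mirror: {total_photons:.1e}")   # ~7.2e22

# If ~50 photons are regenerated behind the wall, the probability of the
# full photon -> particle -> photon round trip is about:
p_round_trip = 50 / total_photons
print(f"round-trip probability: {p_round_trip:.1e}")

# Each leg (conversion, then reconversion) has roughly the square root:
p_single = p_round_trip ** 0.5
print(f"single-leg probability: {p_single:.1e}")
```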

    Light sifter.

    Fermilab researcher Raymond Tomlin examines a mirrored plunger that will reflect photons but allow any exotic particles to continue unimpeded down an evacuated tube.


    Used to working in groups of hundreds, the GammeV team members relish the opportunity to try something smaller and more liberating. “It's like something out of the good old days when an experiment that had seven people on it was a big experiment,” says Fermilab's Peter Mazur, a magnet expert and one of the team's senior members. “One almost feels guilty about taking time away from one's day job to do it.”

    Of course, anyone with an appropriate magnet, a laser, and a mirror can do the experiment. In addition to the Fermilab and PVLAS groups, teams are working on versions of the test at CERN; the Thomas Jefferson National Accelerator Facility in Newport News, Virginia; the German Electron Synchrotron laboratory in Hamburg; and the Laboratory for the Use of Intense Lasers in Palaiseau, France. Fermilab researchers hope to have their data collected and analyzed before the end of July, but even that may not be soon enough to beat the competition to the punch.

    Moreover, all contenders may be racing toward a finish line that is about to vanish. In the past year, the PVLAS team has rebuilt its experiment, and preliminary data from the new rig show no sign of the signal, says the University of Trieste's Cantatore. “The new results we have are not confirming the old ones,” he says. “That's the bottom line.” Still, Cantatore says, the researchers cannot say what produced the previous signal, so some glimmer of mystery remains.

    The chances that the Fermilab team will clinch a major discovery are small and shrinking. But the cheap little experiment is still worth doing, Wester says. “We score very high on the potential per cost scale,” he says. “The chance to see something really earth-shattering is very exciting.” And, in particle physics today, the chance to do a sweet little experiment with a handful of your colleagues for almost no money is just too good to pass up.