News this Week

Science  29 Sep 2006:
Vol. 313, Issue 5795, pp. 1864

    Pollen Contamination May Explain Controversial Inheritance

    Elizabeth Pennisi

    Eighteen months ago, plant researchers shook the foundations of genetics with a gene that seemed to defy the basic rules of inheritance. Arabidopsis plants carrying only mutant versions of the gene, called HOTHEAD, somehow gave rise to progeny carrying a wild-type allele supposedly missing from both parents. These startling results led Susan Lolle and Robert Pruitt, plant geneticists at Purdue University in West Lafayette, Indiana, and their colleagues to propose that the plant's cells carried a hidden stash of genomic information, with the memory of the wild-type gene sequence encoded in RNA. Every once in a while, that RNA would correct the mutated HOTHEAD, they suggested.

    The work, reported in the 25 March 2005 issue of Nature, stunned plant biologists, who have since offered several explanations for how the mutant gene could be corrected; one graduate student's proposal on his blog even became part of a Plant Cell paper.

    But has the whole episode been much ado about nothing? Yesterday, in an online letter published by Nature, Steve Jacobsen, a Howard Hughes Medical Institute investigator and plant geneticist at the University of California, Los Angeles (UCLA), and several colleagues argued that undetected contamination—wild-type pollen that accidentally fertilized plants considered to be self-fertilizing—may have “reintroduced” old versions of the HOTHEAD gene. “Contamination has always been the simplest explanation,” says Luca Comai, a plant geneticist at the University of California, Davis. “I would bet 100 against 1 that it's all explained by pollen contamination.”

    Lolle and Pruitt are willing to play those odds, noting that they conducted tests to rule out such contamination. “We've spent a lot of time trying to put the nail in the coffin of the outcrossing problem,” says Lolle.

    When Lolle, now at the University of Waterloo in Ontario, Canada, Pruitt, and their colleagues originally bred homozygous mutants of HOTHEAD, they found that 10% of the progeny wound up with one corrected HOTHEAD sequence, and in two cases, both mutant alleles were fixed. Moreover, when the researchers fertilized wild-type plants with pollen from a homozygous HOTHEAD mutant, 8 of 164 resulting seedlings had two wild-type alleles; all should have been heterozygous.

    Arabidopsis typically reproduces through self-fertilization, but the researchers checked for contamination from other plants. Unpublished DNA fingerprint experiments, they say, pinned down which plant's pollen sired progeny with the restored wild-type allele. “Based on those fingerprints, stray pollen doesn't account for the changes we've seen,” Lolle says.

    Originally thrilled by the mutant HOTHEAD results, Jacobsen and postdoctoral fellow Peng Peng set up an experiment to monitor restoration of HOTHEAD and of a second mutated gene. They found that if one of the genes was corrected, so was the other, suggesting an outside source of the wild-type alleles. The researchers then grew some HOTHEAD mutants in a room with a mix of other strains with known genotypes, while at the same time growing other HOTHEAD mutants in isolation.

    For the latter plants, they took extra precautions. To avoid chance contamination from pollen in the air, on an insect, or on the person tending the plants, Jacobsen took some plants home and asked his wife to water them. Peng even sent seeds home to his parents in China. And a UCLA mathematician and a computer scientist were recruited to set up their own nurseries. “We wanted to make sure these were people who had no contact with Arabidopsis,” says Jacobsen.

    In mixed company, the wild-type HOTHEAD regularly reappeared, replicating Lolle and Pruitt's results. For one particular mutant, the wild-type HOTHEAD made a comeback in 156 out of its 994 progeny. But some of those 156 contained genes, such as one for a green-fluorescing protein, that could only have come from one of the plants nearby. And in 2735 cases when a plant with mutant HOTHEAD was grown in isolation, no wild-type version of the gene showed up. “When Jacobsen took great pains to isolate the plants, he couldn't reproduce the [reversion] phenomenon,” notes Steven Henikoff, a geneticist at the Fred Hutchinson Cancer Research Center in Seattle, Washington.

    The HOTHEAD mutation itself may be to blame for all the confusion. In the mutant plants, the sepals don't open, keeping the stamens—and their pollen—penned in, while the stigma protrudes, ready to receive any stray pollen. “We believe that would decrease the efficiency of self-pollination” and promote cross-fertilization, says Jacobsen.


    Instead of opening (left) to allow pollen-laden stamen access to the female stigma, fused mutant flowers (right) trap pollen, hindering self-fertilization.


    Pruitt and Lolle aren't sure what to make of these new data. “I can't explain his results just as he can't explain mine,” says Pruitt. But he does concede that although their plants grew in plastic sleeves, another plant's pollen could have gotten in from outside. “Ours were not in as complete isolation as Steve's,” says Pruitt. “I want to repeat Steve's experiments and see what we can see.” Until that happens, the jury is still out about whether HOTHEAD is a true genetic outlaw.


    Mud Eruption Threatens Villagers in Java

    Dennis Normile

    As a torrent of hot mud swamped rice paddies and inundated villages on the Indonesian island of Java, emergency crews last week started drilling a well to plug the eruption. A gas exploration project appears to have punctured a 2700-meter-deep geologic formation, releasing an unprecedented volume of pressurized steam and water that is carrying a river of mud to the surface. The geyser, which began in May, has made more than 10,000 people homeless and put many out of work as well. Experts say it may take until late November to shut down the leak.

    Liquid landscape.

    Steam and water from a deep carbonate formation are spreading mud over villages near Java's coast.


    The accident occurred near the coastal city of Sidoarjo, about 700 kilometers east of Jakarta, reportedly while the firm Lapindo Brantas Inc. was drilling an exploratory gas well. According to Rudi Rubiandini, a petroleum engineer at Institut Teknologi Bandung and adviser to Indonesia's Ministry of Environment, drilling had reached a depth of about 2800 meters when the accident occurred on 29 May. The drill string had become stuck in the well, he says, and while the crew was trying to free it, a geyser of mud and water erupted from the ground about 150 meters away.

    Rubiandini says the well goes through a thick clay seam from 500 to 1300 meters, then sands, shales, volcanic debris, and into permeable carbonate rock. Highly pressurized hot water and steam from the carbonate formation, he says, appear to have broken out below the point where the equipment was stuck in the well and either eroded a channel to the surface or followed natural fractures. Along the way, the river of hot, brackish water is eroding the clay layer to brew the hot mud that eventually rises to the surface. Some have speculated that this is a naturally occurring mud volcano, but L. William Abel, an American drilling expert advising Lapindo Brantas, says he believes the mud flow results from a drilling breach of a deep, pressurized reservoir.

    Abel, whose ABEL Engineering/Well Control Co., based in Houston, Texas, has been involved in containing well accidents worldwide, says the volume of mud flowing out of the ground—about 50,000 cubic meters per day—is unprecedented. The mud has spread over 240 hectares, swamping whole villages, factories, shrimp farms, and rice paddies. The first attempt to block the flow was stymied in early August when the site was threatened by the rising tide of mud.

    Efforts to block the flow had to wait while workers erected a higher retaining wall around a work site about 500 meters away from the original well. On 18 September, a drilling crew started on the first of two relief wells that they hope will intercept the original well within the shale formation 1500 to 1800 meters down. They will then pump in high-density drilling mud to hydrostatically plug the leak.

    Abel says this is a standard drilling industry technique; he is confident it will work. “There has never been one blowout in the history of drilling that was too tough to fix,” he says.

    The land around the geyser, meanwhile, is subsiding as the underlying clay erodes. Some 1400 military personnel are building containment ponds to hold the mud and water that continues to flow to the surface. Local and national officials are considering diverting the gunk into a local river and the sea. But that would be “a big disaster for fisheries and tourist areas,” says Eko Teguh Paripurno, a geologist who heads a disaster research center at the University of National Development in Yogyakarta. Until the geyser is capped, Indonesian officials face a difficult choice between fouling additional land and fouling the sea.


    Parasitic Weed Uses Chemical Cues to Find Host Plant

    Elizabeth Pennisi

    Dodder may be the bloodhound of the plant world. A plant that parasitizes other plants, it sniffs out its victim, Justin Runyon and his colleagues at Pennsylvania State University in State College report on page 1964. “This is a pretty cool example of plants behaving in a way most people think only animals behave,” says Richard Karban, a community ecologist at the University of California, Davis. It's also an effective strategy: Dodder ranks among the U.S. Department of Agriculture's top 10 noxious weeds.


    A plant that preys on other plants, dodder winds its way up its host.


    The work bolsters the notion that plants have a chemical language, an idea that's been hotly debated for the past 2 decades. “The results go a long way toward convincing people that plant-plant interaction via volatiles is a real phenomenon,” says Eran Pichersky, an evolutionary biologist at the University of Michigan, Ann Arbor.

    Barely able to carry out photosynthesis, dodder survives by attaching to the stems and leaves of other plants and robbing them of nutrients. A relative of morning glories, it has many names—goldthread, strangle-weed, witches' shoelaces—that aptly describe the dense, yellowish mats that blanket its hosts, reducing agricultural productivity by as much as 90%.

    Runyon, a graduate student working with Pennsylvania State chemical ecologists Consuelo De Moraes and Mark Mescher, wanted to know how dodder found its mark. He placed seedlings of the dodder species Cuscuta pentagona in small vials fitted with a collar of filter paper on which he traced their growth and found that 80% of them headed toward a tomato plant placed nearby. He then put seedlings in an open-air chamber with two 90-degree side tunnels, one of which led to a chamber containing four tomato plants and the other to a chamber with four artificial plants; the seedlings could catch a whiff of the plants through the tunnel. About 77% of the seedlings grew toward the actual plants, the group reports. Runyon then replaced the plants with a vial of plant extract and the fakes with a vial containing only solvent. Again, the dodder seedlings made the right choice.

    The seedlings also grew toward touch-me-not plants (Impatiens) and, to a lesser extent, wheat. But when given the choice, the dodder avoided wheat, a poor host, in favor of tomato. Runyon discovered that wheat emits a chemical that somewhat repels the dodder—a finding with possible practical implications, given that dodder is so hard to control. This result, says De Moraes, “suggests the possibility of using volatiles to enhance plant defenses, either by applying repellent compounds or perhaps by engineering plants to produce them.”

    Past studies have indicated that plants under attack from herbivores emit signals telling nearby plants to boost their chemical defenses. But some researchers have been dubious about this evidence of plant-plant chatter, arguing that the experiments took place in closed chambers where artificially high concentrations of odors built up. Runyon's experiments were “rigorously conducted in an ‘open’ experimental design, so it's hard to argue that the responses they observe in the greenhouse are not occurring in the real world,” says Ian Baldwin of the Max Planck Institute for Chemical Ecology in Jena, Germany.

    Many questions remain about how plants perceive the still-unidentified volatile signals. But there will be rapid progress, predicts Andre Kessler, a chemical ecologist at Cornell University. Runyon and his colleagues, he says, have “opened up a new door that can bring us closer to the understanding of airborne plant-plant interactions.”


    Study Says HIV Blood Levels Don't Predict Immune Decline

    Jon Cohen

    HIV levels in the blood are poor predictors of the rate at which the virus is destroying an infected individual's immune system, says a new study of nearly 3000 people. This challenges influential reports from a decade ago that link so-called plasma viral load levels to the onset of AIDS and death. The new work, published in the 27 September Journal of the American Medical Association (JAMA), also reinvigorates long-standing, contentious debates about how best to use viral loads in clinical care and about the mechanisms driving HIV's destruction of the immune system. “It's important because it means that things are not as obvious as we previously thought,” says Daniel Douek, an immunologist at the U.S. National Institute of Allergy and Infectious Diseases (NIAID) in Bethesda, Maryland, who studies HIV pathogenesis. “There's more going on than just viral load causing problems.”

    Benigno Rodríguez and Michael Lederman of Case Western Reserve University in Cleveland, Ohio, headed the study, which analyzed two separate cohorts of HIV-infected people to see how well an untreated person's first recorded levels of HIV RNA predicted immune destruction, as gauged by the annual loss of critical CD4 white blood cells. Their data simultaneously confirmed and challenged what has become dogma.

    Confirming a landmark 1996 study published in Science, the researchers report that groups of people with higher viral loads lost more CD4 cells each year. But on an individual basis, viral load accurately predicted a person's CD4 decline just 4% to 6% of the time. “It really nicely illustrates that when you look at cohorts and find a general phenomenon—yeah, virus is high and CD4 is low—it can be very, very poorly accountable when you look at individuals,” says immunologist Anthony Fauci, who heads NIAID. Lederman says this disconnect reflects the fact that if people are grouped by different viral loads, there is a “huge overlap” in CD4 loss between the groups.

    Out of range.

    The rate of CD4 decline varies widely within a group of untreated HIV-infected people who have similar viral loads at first visit.


    Virologist John Mellors of the University of Pittsburgh in Pennsylvania, who led the team that published the 1996 Science paper, doesn't buy the conclusion. “We don't agree with the paper at all,” says Mellors. “HIV RNA is the most powerful predictor of time to AIDS and death.” He suggests that the JAMA paper's results may reflect that CD4 measurements vary a great deal in different labs. He also contends that viral load levels should continue to play an essential role in determining when to start people on treatment.

    In another controversial twist, the JAMA paper and an accompanying editorial each note that the disconnect between individual viral loads and immune destruction supports the idea that only a small proportion of CD4 loss is due to direct killing by HIV. Rather, the authors contend that HIV causes immune mayhem mainly by over-“activating” the entire system. Once activated, cells self-destruct, leading many HIV-uninfected CD4s to an early grave. Both Fauci and Douek agree with this assessment. “We must not forget that the virus is at the heart of this,” says Douek. “But immune activation is probably more important than viral load in terms of rates of progression.”

    Virologist David Ho, head of the Aaron Diamond AIDS Research Center in New York City, questions that conclusion because he says the study leaves out a critical parameter: that people replenish CD4s at different rates. “Think kinetically,” says Ho, noting that a person's bank balance is determined both by income and expenses.

    The most “exciting implication” of the findings, says the editorial, may be an increased effort to test treatments that try to tamp down activation, augmenting the current strategy that just targets HIV.


    T Cells a Boon for Colon Cancer Prognosis

    Jennifer Couzin

    Patients with early-stage colorectal cancer have an upbeat prognosis, and many don't even receive chemotherapy. But 30% will relapse, challenging the tenet that grape-sized tumors that haven't spread are no cause for undue concern. Reaching back to a theory proposed in the late 19th century, a team of French scientists now says that in that 30%, recurrence may have little to do with the tumor itself. Rather, poor prognosis could stem from a lackluster immune reaction to cancer.

    In 1889, a British surgeon proposed that a cancer's path is determined not just by qualities intrinsic to tumor cells—the “seed”—but also by the “soil,” the environment in which a cancer grows. Attention has focused on the first, however, and treatment for virtually all solid cancers, like those in the breast, lung, and colon, is based on the tumor's size and whether it has spread. With the advent of genetics, dozens of papers have reported differences in gene expression in tumor cells and sought to tie prognosis to these patterns, with mixed success.

    Immunologists Jérôme Galon of the Institute for Health and Medical Research in Paris, Franck Pagès of the Georges Pompidou European Hospital, and their colleagues took a different tack. Guided by their earlier work suggesting a link between metastasis and a weak immune response, the group hunted for T cells in the vicinity of colorectal tumors. They relied on banked samples from 415 patients collected over the past 16 years, along with information about how those patients had fared. They focused their search on the T cells that attack bacteria, viruses, and other pathogens and those that remember enemies they've encountered before.

    Tumor in check?

    A patient with T cells (brown) swarming near a colorectal tumor (blue) has a good shot at survival.


    Their findings, reported on page 1960, surprised even them: The density of T cells near tumor cells was, in these patients, a better predictor of survival than traditional staging based on a cancer's size and spread. “It's an incredibly great step forward,” says Robert Schreiber, an immunologist at Washington University School of Medicine in St. Louis, Missouri. “One of the remaining arguments has been, ‘Does this really occur in humans?’ … The answer here is, ‘Yes, it does.’”

    Galon, Pagès, and the others divided their samples into two groups, depending on whether immune cell concentrations were high or low. Patients whose tumors brimmed with CD3-positive T cells, for example, had a 5-year survival rate of 73%, compared with 30% for patients with low densities of CD3 cells around the tumor. In those with earlier-stage tumors, the spread was still clear: High densities of CD3 T cells correlated with a 79% chance of survival after 5 years. With low densities, the proportion was 33%. It's not known yet why some patients have more robust immune responses and whether tumor types help drive the difference.

    “The more T cells are infiltrating these lesions, the better survival is,” says Karin de Visser, a tumor immunologist at the Netherlands Cancer Institute in Amsterdam. It's often believed that a heightened immune response drives cancer rather than squelching it. For instance, conditions that induce chronic inflammation, such as colitis, seem to confer a greater cancer risk. But this work suggests that once a cancer forms, the immune system can also help beat it back.

    De Visser and others agree that the research needs to be repeated in larger groups of colorectal cancer patients and in those with other cancers. There have been hints in melanoma that T cells can hinder that cancer's progression; but in breast cancer, says de Visser, a handful of cases suggest that the opposite may be true.

    More immediately, Galon hopes that if the results hold, T cell densities could help identify apparently low-risk individuals who would benefit from aggressive treatment. “Today there is no way to predict those patients who will have a bad outcome” in this cohort, he says. In theory, the finding could also lead to new treatments that stimulate these T cells to proliferate and contain a tumor.


    Scientists Create Human Stem Cell Line From 'Dead' Embryos

    Constance Holden

    Scientists are working feverishly on methods to create lines of human embryonic stem (ES) cells that do not involve the destruction of human embryos. Last week, a European team reported that it had done just that, cultivating a line of stem cells from an “arrested” or nonviable embryo. They and others say that the technique could satisfy people who object to stem cell research on the ground that it harms potential human life—although others are skeptical that there can be a universally accepted definition of “arrested.”

    Under arrest.

    Normal embryo (left) has more and more-uniform cells than do those that have stopped growing.


    Donald Landry and Howard Zucker of Columbia University proposed several years ago that embryos that had stopped dividing might still yield individual cells that could be induced to grow separately. Now a team headed by biologist Miodrag Stojkovic, who has labs in Valencia, Spain, and at Sintocell in Leskovac, Serbia, reports that it generated a pluripotent ES cell line—that is, cells that can develop into all types of bodily tissues—from one of 13 embryos that had stopped developing 6 to 7 days after fertilization, at the blastocyst stage. The researchers waited from 24 to 48 hours after the last cell division to determine that the embryos, donated for research, were no longer viable, they reported online in Stem Cells on 21 September.

    They then removed the zona pellucida, or outer covering, of embryos and plated them on a growth medium. In five of the 13 cultures, some cells proliferated. In one case, the scientists were able to achieve a “fully characterized” human ES cell line that could differentiate into all three germ layers both in the dish and in live mice. Attempts to cultivate ES cell lines from another 119 embryos that were arrested at an earlier stage—as 4- to 10-cell morulas—were not successful.

    Stojkovic's team claims that as many as two-thirds of in vitro fertilization embryos fail to reach the stage at which they can be implanted. Now, they say, scientists will be able to use this material that would otherwise be discarded. The authors assert that their approach is more efficient than one reported last month by scientists at Advanced Cell Technology (ACT) in California: deriving an ES cell line from a single cell taken from a morula (Science, 25 August, p. 1031). The ACT authors were criticized for claiming their procedure could be done without harming embryos when in fact they destroyed those used in their experiments.

    The new study “offers hopeful possibilities” for getting the National Institutes of Health, which does not permit harm to live embryos, to fund research on newly derived ES cell lines, says Stanford University bioethicist William Hurlbut. “But scientists have yet to arrive at reasonable criteria for ‘embryo death’; 24 hours without cell division may not be enough.” Landry agrees that “explicit definitions for irreversible arrest” of embryo growth are still lacking and is working to come up with watertight criteria.

    With the recent papers from ACT and Stojkovic's group, scientists are bringing closer to reality various alternatives to embryo destruction that a few years ago were only ideas. Last year, a group at the Massachusetts Institute of Technology showed that a technique called altered nuclear transfer, involving the creation of an embryo incapable of developing, is feasible in mice. And a group at the University of Milan has claimed in as-yet-unpublished work that it has generated human ES cell lines by parthenogenesis, using only unfertilized eggs.


    Royal Society Takes a Shot at ExxonMobil

    Eliot Marshall

    The world's oldest scientific society has challenged the world's richest corporation over what it sees as an attempt to confuse people about global warming. In a sharply worded letter made public last week, the 346-year-old Royal Society criticizes the oil giant ExxonMobil for giving money to “organizations that have been misinforming the public about the science of climate change” and for promoting an “inaccurate and misleading” view, to wit: that scientists do not agree about the influence of human activity on rising temperatures. ExxonMobil issued a rebuttal, and some climate-change skeptics attacked the Royal Society for trying to stifle debate.

    The letter, drafted by Royal Society Senior Manager for Policy Communication Bob Ward and approved by the society's leadership, was sent to ExxonMobil's U.K. director of corporate affairs. It charges that the company's Corporate Citizenship Report and a February 2006 strategy booklet called Tomorrow's Energy are “very misleading.” In one passage, the company dismisses the work of the Intergovernmental Panel on Climate Change (IPCC) as based on “expert judgment rather than objective, reproducible statistical methods.” The ExxonMobil documents claim it is “very difficult to determine objectively the extent to which recent climate changes might be the result of human actions.” These statements, the Royal Society says, are “not consistent with the scientific literature.”

    Ward also questions ExxonMobil's funding of groups that deny the connection between carbon dioxide and climate change. For example, he told Science that the corporation last year gave $25,000 to the Center for the Study of Carbon Dioxide and Global Change in Tempe, Arizona. On its Web site, the center claims “there is no compelling reason to believe that the rise in temperature [since the industrial revolution] was caused by the rise in CO2.” According to the Royal Society, “some 39 organizations” listed as beneficiaries on ExxonMobil's own Web site give out misleading information. Ward says that when he met with company officials in July, they said they planned to discontinue support for such groups. This month's letter, he says, was an effort to “follow up.”

    ExxonMobil spokesperson David Eglinton responded that the Royal Society described the company “inaccurately and unfairly.” A company statement says, “We know that carbon emissions are one of the factors that contribute to climate change,” and “ExxonMobil is taking steps to reduce and minimize … greenhouse gas emissions.” It notes that the company has sponsored a major climate research project at Stanford University. The funding of other policy groups is in review, ExxonMobil says.

    Several groups and individuals who contest the IPCC view of climate turned their fire on the Royal Society. The George C. Marshall Institute in Washington, D.C., distributed critical remarks by several experts, including atmospheric scientist William Gray, a professor emeritus at Colorado State University in Fort Collins. “I am appalled … that the Royal Society would try to muzzle” skeptics, Gray says. Economist Ruth Lea, director of the conservative Centre for Policy Studies in London, says that the Royal Society was “ill advised” to “wade into the murky world of politics and popular opinion.”

    Ward replies that his letter is merely an attempt to ensure a high-quality debate. He adds that it springs from the Royal Society's motto—nullius in verba—which is taken to mean that facts, not assertions, are what matter.


    Search for Giant Scope Site Narrows to Two

    Robert Koenig

    The Square Kilometer Array (SKA)—planned to be the world's most powerful radio telescope—will be built either in Australia or in southern Africa. The two other site contenders, Argentina and China, were eliminated on technical grounds, according to an announcement made this week by the SKA steering committee. The final site decision is not expected for several years, with construction starting 2 years after the final choice if international funders agree to back the estimated $1 billion project (Science, 18 August, p. 910).

    SKA's international director, astronomer Richard Schilizzi, said both short-listed sites “can meet the full range of requirements” for the instrument, which will link thousands of antennae spread over 3000 kilometers. Perhaps the most important requirement is low radio interference. U.K. astronomer Phil Diamond, a former steering committee chair, said both potential sites are making progress toward “protecting these unique environments with radio-quiet zones” through laws or regulations.

    The Argentina site was rejected mainly because of ionospheric instability; China's proposed site amid karst hills did not offer the right geography to position SKA's central elements. Bo Peng of China's National Astronomical Observatories in Beijing described the decision as being “fair from the scientific point of view.”

    South Africa's SKA chief Bernie Fanaroff says his group would like the final site selection to be made as soon as feasible, but his Australian counterpart Brian Boyle says that SKA should avoid setting “an unrealistic time scale” that doesn't allow time to finalize the design, build infrastructure, and line up international funding. Schilizzi says he expects the final site decision “towards the end of the decade,” allowing time to raise the money and prepare the site for construction. Some doubts remain about funding, with the U.S. National Science Foundation so far avoiding a firm commitment. But Boyle says he is confident that SKA will eventually be built: “It is too good of an astronomy opportunity to pass up.”


    Embracing Small Science in a Big Way

    Adrian Cho*
    *With reporting by Robert F. Service.

    The U.S. Department of Energy redirects its big machines toward small-scale research, as materials science overtakes particle physics


    The world's first x-ray laser, shown in this artist's depiction, will open new avenues of research in materials science—and close a chapter in the history of high-energy physics at SLAC.


    The tiniest building blocks of nature were discovered at the Stanford Linear Accelerator Center (SLAC). In 1967, researchers at the Menlo Park, California, laboratory shot electrons from a 3-kilometer-long “linac” into protons and detected the first hints of smaller particles within. Those infinitesimal bits of matter, known as quarks, are now a cornerstone of the so-called Standard Model, which describes how one type of quark decays into another. Nearly 40 years later, high-energy physicists still use the linac to feed particles into a collider, called PEP-II, with which they compare quarks and antiquarks.

    But in 2008, PEP-II will shut down, and a year later, researchers at SLAC will power up the world's first x-ray laser, the Linac Coherent Light Source (LCLS). It won't produce exotic subatomic particles that might allow researchers to peer further into the fundamental structure of matter. Instead, by shining a billion times brighter than other x-ray sources, LCLS will probe the properties of familiar materials in unprecedented ways, determining, for example, the structure of a protein from a sample of just one molecule. “The LCLS will be revolutionary,” says Persis Drell, SLAC's deputy director for particle physics.

    To nonphysicists, the lab's shift from elementary particles to materials may not seem like a big deal. But it's a prime example of the change sweeping over the Department of Energy's (DOE's) $4 billion Office of Science, which runs 10 national labs, funds thousands of academic researchers, and provides 42% of the U.S. government's overall funding for the physical sciences.

    For half a century after World War II, nuclear and high-energy particle physics ruled the roost. DOE-funded scientists have captured 23 Nobel prizes for exploring the composition of the universe and other grand academic puzzles. In the past 2 decades, however, the Office of Science has directed ever more resources to the down-to-earth fields of materials sciences, condensed-matter physics, chemistry, and nanotechnology, all supported by its Basic Energy Sciences (BES) program. Leveraging its expertise in accelerator physics, DOE has built a suite of large “user facilities” that allow researchers to study a vast array of materials, catalog thousands of protein structures, and even analyze the respiration of insects. Its $1.4 billion budget makes BES by far the office's largest program.

    The shift in priorities, which reflects the emerging opportunities in the physical sciences themselves, is changing the role and the culture of the DOE science labs. Once the cloistered preserves of a small coterie of scientists pursuing the most esoteric questions, the DOE labs now serve as gathering places where thousands of researchers from many fields perform myriad studies. “You have experiments coming and going every week and every few hours,” says Lonny Berman, an x-ray physicist at the National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory in Upton, New York. “It's like Grand Central Station.”

    Small science, big machines

    The impact of the shift in DOE science can be seen clearly at Oak Ridge National Laboratory in the fir-covered foothills of the Smoky Mountains in eastern Tennessee. Once considered a backwater, the lab is becoming a mecca for materials scientists thanks to two new user facilities. In April, the lab fired up its $1.4 billion Spallation Neutron Source (SNS), a sprawling accelerator complex that produces intense pulses of high-energy neutrons for probing materials. Nearby stands the newly opened Center for Nanophase Materials Sciences, one of five nanoscience centers DOE is building around the country. “It's a tremendously enabling facility,” says Philip Rack, a materials scientist at the University of Tennessee, Knoxville, who collaborates with Oak Ridge researchers.

    Research forecast.

    User facilities supported by DOE's Basic Energy Sciences program (red) have proliferated, reinvigorating several national laboratories. Labs that specialize in high-energy or nuclear physics (blue), however, face less sunny futures.


    SNS is the latest in a string of accelerator-based facilities aimed at small science. In the 1980s, DOE began building synchrotrons to generate x-rays—which the circulating particles radiate in copious amounts—to probe the structures of materials. The first was at Brookhaven. It was followed by one at Lawrence Berkeley National Laboratory (LBNL) in California and another at Argonne National Laboratory in Illinois. Labs that had once focused on particle physics had new missions. The various BES user facilities are “the envy of the world,” says Raymond Orbach, head of the Office of Science and the agency's first undersecretary for science (see sidebar on p. 1874).

    The BES program has grown even as DOE's Office of Science has trimmed or held steady its programs in high-energy physics, nuclear physics, fusion-energy science, and biological and environmental research. In addition to the compelling scientific opportunities, observers credit the leadership of Patricia Dehmer, who has headed BES since 1995 (see sidebar). “Within the Office of Science, the competition is between the associate directors,” says a congressional staffer who follows DOE science. “And Pat Dehmer is just a smarter player.”

    The new machines have also changed the culture of science at the host labs. At particle physics labs such as SLAC or Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, hundreds of physicists work for years on a single experiment. And experimenters and accelerator physicists typically work hand in hand to improve the performance of the machine they share.

    At an x-ray source, most users come for just a few days at a time and have no interaction with the accelerator physicists. That's especially true of biologists, who use the x-rays to decipher the structures of proteins. “We're the ultimate users,” says Steven Almo, a protein crystallographer at the Albert Einstein College of Medicine in New York City who uses Brookhaven's NSLS. “We just want to go out there, get the data, come home, and solve the structure.” That attitude irks some lab scientists, who say that it's much harder than in the old days to find users who are also interested in improving the machine.

    The user facilities have forged new ties between the labs and industry. At NSLS, ExxonMobil and IBM own and have outfitted seven of the synchrotron's 65 beamlines, and others pay to perform proprietary research there. “If the facility does its job right, it becomes a focal point for collaboration between industry, the national labs, and the universities,” says Chi-Chang Kao, a solid state physicist at Brookhaven. For example, at Argonne's Advanced Photon Source, researchers from Harvard Medical School and Hawaii Biotech Inc. used a beamline owned by a consortium of pharmaceutical companies to determine how the dengue fever virus changes shape as it infects a cell.

    A new model laboratory

    A push to focus more on “use-inspired research” could lead to even more dramatic changes. With AT&T's Bell Laboratories of old as his model, LBNL Director Steven Chu envisions multidisciplinary teams that can translate basic research into technologies for attacking large energy problems. One proposed project, called Helios, would pursue the biology, materials science, and nanotechnology necessary to efficiently convert solar power to liquid fuel, whether through the production of ethanol from cellulose, improved solar cells, or direct conversion of sunlight to fuel. Chu would like DOE to provide flexible “block grants” for such projects so that lab managers could quickly follow up on promising leads.

    Such boundary-bridging efforts would be a break from current operating procedures for the national labs, in which researchers from various divisions in the same lab answer to different program managers at DOE headquarters in Germantown, Maryland. “They want to own you and to take credit for your work; that's how the system works,” says Heinz Frei, a physical chemist at LBNL and a member of the Helios team. For crosscutting efforts to flourish, Frei says, “there has to be an understanding at headquarters” that such projects span program boundaries.

    Michael Lubell, a lobbyist with the American Physical Society, says Orbach has been key in fostering such cooperation, especially now that his position as undersecretary for science gives him responsibility for applied research done in other parts of DOE. Orbach also works well with Under Secretary of Energy David Garman, who runs DOE's fossil energy, nuclear energy, and other applied nonweapons programs, Lubell says. But the bridge-building may falter once either leaves office, he says.

    Scattered clouds

    DOE hasn't turned its back on its high-energy and nuclear physics programs, requesting $775 million and $454 million for them, respectively, in its 2007 budget. But particle physicists at SLAC and Fermilab and nuclear physicists at Brookhaven and the Thomas Jefferson National Accelerator Facility in Newport News, Virginia, face a tougher struggle to win political support for billion-dollar projects that seek to understand the origin of mass or the nature of matter in the early universe. “I would think it would be a mistake on the part of high-energy and nuclear physicists to expect that, at least in the near future, they'll receive a fixed piece of the pie,” says Robert Richardson, vice provost for research at Cornell University, who has served on various national advisory committees.

    Putting basics first.

    Spending on the basic energy sciences within DOE's Office of Science has doubled—and soon could grow even faster—while spending for most other fields has remained flat.

    Despite its steady growth, DOE's BES program has also generated some controversy. Some researchers say that it funds too many groups and centers. “Lab stewardship has been equated with maintenance of a status quo that includes too many less-than-cutting-edge research teams,” says one researcher who asked not to be identified because he receives BES funding. Many researchers say that DOE starves grant-seekers in tough times to keep the machines running. In recent years, the balance in the BES budget between grants and facilities has slipped from an even funding split to a 45:55 ratio, Orbach says. With the projected budget increases, DOE hopes to restore the balance.

    Other researchers worry that BES has skimped on the basic accelerator research necessary to develop the next generation of user facilities. “BES has appropriately pushed its resources toward the users to get the [maximum] science out of the machines, and that doesn't require a lot of R&D,” says Rod Gerig, an accelerator physicist at Argonne. Some researchers say that the United States already lags behind Europe and Asia, which are building a plethora of new x-ray sources.

    Growing pains aside, basic energy sciences are on the rise. That's fine with many SLAC researchers, who view the lab's shift of mission as an opportunity to try something new. Accelerator physicist Franz-Josef Decker says that SLAC's x-ray laser will be so experimental that accelerator physicists and users will need to pull together just to make it work. “I don't think we'll turn it on and say, ‘Okay, that's it,’” he says. Change is welcome, it seems, as long as it comes with a challenge.


    A Manager Who Cashes In on Consensus

    Eli Kintisch

    Patricia Dehmer was hoping to get 75 basic researchers to attend a workshop on solar power. Instead, 200 scientists signed up, and the head of the Department of Energy's (DOE's) Basic Energy Sciences (BES) office needed to book a second hotel to handle the overflow. It's what she calls finding “a latent community” of scientists ready to tackle a new problem. And it's one of the techniques that she has used successfully over the past 11 years to more than double BES's budget, turning it into a $1.4 billion juggernaut (see main text).


    “These [initiatives] would have been tough sells had the community not come together,” says chemical physicist John Hemminger, chair of the BES Advisory Committee, about the 2005 solar-power workshop and similar gatherings on hydrogen and lighting. The community's strong support for those nascent initiatives, in turn, helped grease the budget skids with higher-ups at DOE, in the White House, and in Congress. “You have things ready and well-justified, and when the time is right, they go,” says Dehmer, a chemical physicist who came to DOE headquarters in Washington, D.C., in 1995 from DOE's Argonne National Laboratory in Illinois.

    Dehmer has also learned how to stagger projects so that they don't overwhelm her budget. In doing so, she shepherded the completion of the $1.4 billion Spallation Neutron Source at Oak Ridge National Laboratory in Tennessee and four of five nanotechnology centers at other DOE national labs on time and on budget.

    A former DOE colleague, Ari Patrinos, ruefully admits how Dehmer's “highly organized” approach steered resources to BES—at times at the expense of his Office of Biological and Environmental Research. “The way she times the projects is masterful,” says Patrinos, now at Synthetic Genomics in Rockville, Maryland. A tireless commitment to her job hasn't hurt either, Patrinos adds: “I remember e-mails from her at 3:40 a.m.; she was perpetually working.”


    Ray Orbach Asks Science to Serve Society

    Eli Kintisch

    For a decade, chemist Radoslav Adzic has explored the basic structure of metal-electrolyte interfaces at Brookhaven National Laboratory in Upton, New York. His employer, the U.S. Department of Energy (DOE), has long sponsored fundamental science on catalysis in such systems in hopes of making hydrogen fuel cells efficient enough to one day replace fossil fuels as an energy source. But it wasn't until 2004 that Adzic decided to tackle a research question with more direct applications to hydrogen fuel cells.

    It wasn't a random decision. The year before, President George W. Bush had proposed an 8-year, $1.2 billion hydrogen fuels program that would begin with applied engineering studies. After attending a DOE-sponsored workshop to discuss the basic research needed to turn hydrogen into a commercially viable fuel, Adzic won a $700,000 grant to study how monolayers of platinum could lead to cheaper fuel cells. Although it's still basic research, there's a clear product in mind.

    That program, and others like it, reflects changes sweeping through DOE's $4-billion-a-year Office of Science under the leadership of Raymond Orbach. A physicist and former chancellor of the University of California, Riverside, Orbach joined DOE in 2002 and has won praise for grafting an applied component onto DOE's basic science portfolio without diluting the quality of the research itself. Michael Lubell, a lobbyist for the American Physical Society, calls the hydrogen funds “a testament to Ray” and praises his success in enlarging the pot for such work without ceding control of it to more technology-focused parts of DOE. Adzic agrees: “Rather than just characterizing a system, I'm helping to solve certain showstoppers.”

    Bolstering research on hydrogen fuels is only one of Orbach's achievements. He's also brought international renown to DOE's once-puny civilian supercomputing program and made progress on lab safety procedures. And physical scientists applaud how he helped his boss, Energy Secretary Samuel Bodman, sell the White House on a 10-year doubling of federal spending in the physical sciences by emphasizing its role in U.S. competitiveness (Science, 17 February, p. 929).


    Ray Orbach (left) hopes researchers can help his boss, Energy Secretary Samuel Bodman (right), do his job, too.


    In June, Orbach donned a second hat by becoming the department's first undersecretary for science. His mission: Optimize the research goals that span DOE's work in energy, science, weapons, and waste by offering the fruits of the Office of Science's basic research portfolio to the rest of the agency. “There isn't a single thing that DOE does that's not grounded in science. [So] having someone who can ask scientific questions [about] each of the missions is crucial,” says David Goldston, staff director of the House Science Committee. “He's the perfect person for the job.”

    Keys to the kingdom

    No one doubts that fundamental research could better fulfill energy needs. A 1997 report by the President's Council of Advisors on Science and Technology, for example, called for “better coordination” between basic and applied energy research. “Everyone knows it's a problem, but nothing's happened,” says physicist George Crabtree, a manager at DOE's Argonne National Laboratory in Illinois.

    One obstacle is the current rewards system in academia. Take the science behind superconductivity, which holds the promise of low-resistance power lines or incredibly efficient transformers. The kind of discovery that earns a scientist a paper in a top journal—learning why a material changes phase at a certain temperature—is too theoretical to help a company trying to make superconducting materials. But a commercially valuable yet incremental improvement in that technology wouldn't interest those top-tier journals. So a scientist might not even bother to record such an advance. “If the currency is just PRL [Physical Review Letters], Nature, and Science, you'll just move on,” says materials scientist John Sarrao of Los Alamos National Laboratory in New Mexico.

    Another barrier to developing new technologies, says Bodman, is DOE's current compartmentalized bureaucracy. In July, he sent out a memo giving Orbach “detailed access” to DOE's vast empire, hoping that regular meetings among disparate programs will break through that mentality. It's not a new concept, Orbach says, but “what's new is the intensity and importance” of those meetings.

    Money greases the wheels of cooperation. In addition to the hydrogen initiative and a similar effort in solar energy, Orbach has called for $250 million for biofuel start-ups involving industrial scientists, technologists, and genomicists (Science, 11 August, p. 746). Sharlene Weatherwax, a DOE program manager, says a previous partnership with DOE's technology program might have consisted of a single grant.

    Orbach knows that change doesn't come easily for areas, such as nuclear weapons development, that have traditionally been walled off from civilian research. In initial meetings with applied-research managers, he admits, “people don't quite know what to make of us.” But Edward Moses, director of the National Ignition Facility, a superlaser at Lawrence Livermore National Laboratory in California, says Orbach is helping him grow a civilian research community to utilize an instrument designed to maintain the nation's nuclear arsenal.

    Some fear that such cross-fertilizing could weaken basic science at DOE. “There is a danger of letting the basic program become a technical-support enterprise for the applied programs,” says energy expert Robert Fri, a former Environmental Protection Agency official who believes unfettered basic work can “cook up” whole new energy ideas. Materials scientist Ward Plummer of the University of Tennessee, Knoxville, decries a 20% decline in funding core, unsolicited research within DOE's Office of Basic Energy Sciences in the last 3 years at the same time that solar energy, nanotechnology, and hydrogen programs have grown.

    Plummer and others hope that DOE's new effort to define so-called grand challenges will stop that erosion. And although Orbach says he has no plans to “fuzz the boundaries” between basic and applied work, he is looking for greater cooperation between the two camps. A recent discussion with managers studying how fluids flow in dry soil at DOE's planned nuclear waste repository at Yucca Mountain, Nevada, showed the value of such cooperation, he says. “When we met with Fossil Energy and learned more about carbon dioxide sequestration,” Orbach recalls, “it suddenly popped out that that's the same problem.”

    Whatever happens, Orbach says DOE is determined to squeeze more impact out of its science. That's good news for Adzic, who relishes taking on challenges “directly important to society.” It's also a good deal for academics. “If you publish something relevant” to a problem, says Adzic, “your paper is more [often] cited.”


    Living Among the Maya, Past and Present

    Michael Bawaya*
    *Michael Bawaya is the editor of American Archaeology.

    Archaeologist Arthur Demarest advocates passionately for studying the ancient Maya and for helping their living descendants in Guatemala


    Arthur Demarest (center), Tomás Barrientos (right), and local Maya gather in a restored royal ball court.


    SAN JUAN, PUERTO RICO—Arthur Demarest strides into the seminar room, a good 10 minutes late for his talk on postcolonial approaches to archaeology and tourism. The moderator is about to introduce the next speaker but quickly yields the lectern to Demarest. Smartly turned out in a black Brazilian suit and a longish haircut, the archaeologist ignores his prepared remarks and eschews the lectern. Instead, he theatrically wanders back and forth in front of his audience at the Society for American Archaeology (SAA) conference here, gesturing and spicing his speech with expletives and evoking occasional laughter.

    Demarest speaks passionately about his philosophy of using archaeological projects to better the lives of impoverished local people, a practice often referred to as community archaeology. “This is my baby,” he says of his community work, but adds that Vanderbilt University in Nashville, Tennessee, where he is a professor, gives him grief because of it. Demarest criticizes many of his colleagues for not adopting this practice, then declares that discovering royal palaces (he's found several) bores him. “Sorry for the breathless presentation,” he concludes.

    Demarest is arguably one of the most successful proponents of community archaeology, with a string of dramatic discoveries on the Maya to his credit and a fundraising record—tied to his community efforts—that many might envy. “He's a visionary when it comes to [community] archaeology,” says Raymond Chavez, vice president of Counterpart International, a nonprofit development organization based in Washington, D.C., that works with Demarest. “Everyone pretty much acknowledges Arthur's pioneering work” in Guatemala.

    Although Demarest disparages the “Indiana Jones” style of archaeology, in some respects he resembles the onscreen archaeologist, battling looters and sometimes appearing with bodyguards. Some colleagues say he puts too much emphasis on public relations but agree that he's done much to popularize Mayan archaeology. “Arthur's work is significant because it raises the level of interest in all things Maya,” says Maya scholar Arlen Chase of the University of Central Florida in Orlando.

    Into the field

    Demarest, 53, says he declared his intent to become an archaeologist at age 4, while growing up in New Orleans, Louisiana. He's the son of a Cajun inventor and the “world's greatest cook,” he says. He earned a Ph.D. in the field from Harvard in 1981, studying under noted Mayanist Gordon Willey. He was offered a job at Harvard in 1986 but instead chose Vanderbilt. Vanderbilt gave him the resources to direct a huge project in the Petexbatún region in northwest Guatemala, and Demarest has been working in the country ever since.

    For nearly 20 years, he has excavated sites along the great Maya trade route that ran north to south. He has given the field “important insights into ancient Maya warfare and trade,” says Jeremy Sabloff of the University of Pennsylvania Museum of Archaeology and Anthropology in Philadelphia.

    In Petexbatún, Demarest found evidence of warfare that engendered the collapse of the local Maya society. The findings, such as a shattered throne, are “among the most detailed and graphic for violence accompanying political collapse in Maya civilization,” says archaeologist David Friedel of Southern Methodist University in Dallas, Texas. Demarest showed that the Maya collapse in Petexbatún was “likely precipitated by warfare driven by political and ideological factors rather than by population pressure and contingent agricultural failure,” as some archaeologists had thought, says Friedel. Debate on the causes of the collapse continues for the rest of the Maya world, and since 1999 Demarest has worked at Cancuén, a medium-sized city on the trade route, where he has found a massive royal palace and evidence of a massacre.

    For Demarest, the most important part of his work is the community component. He argues that too many archaeologists are seduced by what he calls the “goodies”: palaces, tombs, and other spectacular finds. Temporary projects employ local workers, many of whom are descendants of the Maya, but when the digs end, the locals are suddenly unemployed and so may loot the site. Archaeology “is like finding an oil well. It can make everybody rich, or it can destroy the area,” he says.

    Demarest designed the Cancuén project to make the local people “stakeholders,” and it seems to have worked. For example, the excavations have made Cancuén a minor tourist attraction, and the only way to reach the site is by water. So the Maya own and operate a boat service to ferry tourists, as well as a visitor's center and lodge. Overall, tourism at Cancuén has boosted their income by 35%, according to Chavez. The locals have reciprocated by telling Demarest where to dig and informing him about looting incidents. Using locals' knowledge has become part of his scientific methodology. “The distinction between development and science is gone,” he says.

    He says the Guatemalans consider him “a political figure,” and in 2004, he became the first American to be awarded the National Order of Cultural Patrimony by the Guatemalan government. The government has also adopted his community-focused philosophy, mandating that all archaeological projects in Guatemala include a community-development component. Chavez says Demarest was the first to implement a strategy that allowed the local Maya to profit from tourism to archaeological sites.

    Demarest insists that the community work benefits research, noting that the local Maya played a role in his most recent major discovery: the massacre of what he suspects was the royal court at Cancuén. Demarest's team found 34 bodies dressed in full regalia, including one with a necklace containing 36 jaguar canines. The locals helped recover looted monuments with glyphs that revealed the date that the victims were killed—800 C.E.—and that one of them was Kan Maax, Cancuén's king. Demarest considers the massacre to be one of the early signs of the collapse of Maya civilization, exhibiting a new, more “destructive” warfare; previous conquerors tended to take over cities rather than destroy them, he says.

    At work.

    At Cancuén, Arthur Demarest has excavated a royal palace and artifacts including a circular altar stone (inset) that was looted, then recovered with the help of locals.


    To fund such projects, Demarest has won major grants from a variety of sources, many of them outside the traditional research agencies. He has raised more than $5 million during the course of the Cancuén project, much of which goes to community-development endeavors managed by the local Maya. He says the U.S. Agency for International Development has awarded more than $1 million, and a National Institutes of Health matching grant has provided more than $600,000. Corporate and private donations have exceeded $300,000, he says.

    A starring role

    Demarest also gets plenty of publicity—his discoveries have been covered by The New York Times, The Washington Post, National Public Radio, The NewsHour with Jim Lehrer, and other international media—and he seems to relish the spotlight.

    Despite the high-profile discoveries, other researchers are quick to point out that Demarest is only one of many leading Mayanists. Chase says he respects Demarest's work but that no single archaeologist is shaping Maya research. Several archaeologists were critical of what they perceived as Demarest's publicity-seeking activity but declined to comment on the record. “He does it for his own aggrandizement,” says one distinguished Maya archaeologist.

    The discovery that may have garnered the most attention was the treasure-laden tomb of the second great king of Petexbatún. Demarest says Touchstone, the movie production company, approached him about making a movie based on the 1991 field season, which took place during a civil war that ended in 1996, and he's writing a popular book about it. A movie about his work was produced by National Geographic, which has also served as one of his major patrons. However, during his SAA presentation Demarest harshly criticized the movie as simplistic. After meeting a National Geographic representative who attended the talk, he lamented his candor. “Another foot in my mouth. … I'm half Cajun and half Italian, and I can't control my mouth or my hands,” he said, referring to his tendency to gesture.

    Barbara Moffet, director of communications at National Geographic, says Demarest has been awarded an unusually high number of the organization's modest grants. “We have funded Dr. Demarest so extensively because he does dynamic, important archaeology,” Moffet says. Working with Demarest, she adds, “is never dull.”

    Excitement, for Demarest, isn't defined exclusively by major discoveries. In 2003, for example, he, his crew, and the local Maya helped Guatemalan authorities recover a magnificent Maya altar taken from Cancuén by looters who were involved with drug traffickers, an event that received extensive media coverage. According to archaeologist Tomás Barrientos, who codirects the Cancuén project, Demarest spent a significant amount of his own money to help the investigation. There were rumors that the criminals threatened to kill the archaeologists for testifying against them, and for a time Demarest hired bodyguards.

    Unlike most American archaeologists working in Guatemala, Demarest lives there most of the year. Although he's on campus at Vanderbilt now, for much of the past 3 years he has been in Guatemala on extended leave. He raised two sons in Guatemala and recently married his second wife, a Guatemalan woman who shares his spartan existence when he is at Cancuén, which is hidden away in the jungle. His lifestyle is “very hard on your personal life,” he says. At the same time, he takes pleasure in the roar of the howler monkeys, tolerates the bugs, and is unfazed by cold showers. “I intend to keep digging until I'm dead.”

    He often chooses new sites for excavation and works there “6 or 7 years, [to] produce a generation of students,” he says. Eight of Guatemala's 10 Ph.D. archaeologists studied under him, and he believes that he has had far more impact on his Guatemalan than his U.S. students. “I don't feel like I'm very useful in the States,” where he finds life “a little bit boring,” he says. “You can do a world of good in Guatemala.”


    'Google of the Brain': Atlas Maps Brain's Genetic Activity

    1. Yudhijit Bhattacharjee

    With his Microsoft money, Paul Allen has funded a gene-expression database of the mouse brain

    In 2002, Microsoft co-founder and philanthropist Paul Allen asked a handful of neuroscientists how best he could use some of his fortune to advance their field. The researchers recommended the creation of a map of gene expression in the mouse brain, which they said would combine with knowledge gained from the Human Genome Project to illuminate how our brains work. The idea became the inaugural project of the Allen Institute for Brain Science in Seattle, Washington; $40 million and 4 years later, the map is now finished.

    A mouse click away.

    Researchers can view the expression of any gene in the mouse brain—such as etv1 (left)—by simply typing its name into the atlas. The atlas also provides 3D visuals (inset).


    The completion of the Allen Brain Atlas, announced this week, caps a painstaking effort that involved analyzing more than 250,000 slices of mouse brain to determine which of the 21,000 or so known genes in the animal's genome are turned on in the brain, as well as where and to what extent. The result is a freely accessible, searchable digital database. Researchers are ecstatic: They say the gene-expression map will accelerate the search for drugs to treat psychiatric illnesses and help address fundamental questions about the development and function of different brain structures. “Some are calling it the Google of the brain,” says Joanne Wang, a pharmaceutics researcher at the University of Washington, Seattle. “I could not think of a better term than that.”

    The interdisciplinary project involved neuroscientists and geneticists, as well as bioinformaticists and software engineers who worked on automating the analysis. “It took more than a year just to integrate the microscope to the software,” says Allan Jones, the chief scientific officer of the project. Having mice as the object of study allowed for a high degree of experimental control, Jones notes; not only were the animals genetically identical, but they were also given the same diet and handled the same way until the same age—56 days—at which point brain tissue was drawn from them.

    The map shows that 80% of genes in the mouse genome are expressed in the brain, a higher proportion than the roughly 70% researchers had previously estimated. “Also, roughly 25% of the genes expressed in the brain show some kind of regional restriction,” Jones says. Among the things that the data set will allow researchers to do, he adds, is “group cells in the brain that have similar patterns of gene expression, which could reveal functionally relevant brain structures that are still unknown.”

    The atlas, whose data have been made available in installments since December 2004, is already saving researchers a lot of time and trouble, says Jane Roskams of the University of British Columbia in Vancouver, Canada. Roskams, who studies the development of neurons in the vertebrate embryo, has been using the map to test how combinations of glutamate receptors on neurons in the olfactory bulb may account for differences in how likely those neurons are to undergo programmed cell death. The map lets her narrow down alternative hypotheses quickly rather than test them all through hours of lab work, says Roskams.

    Wang has found that the atlas aids her work on using molecules known as membrane transporters, which ferry cargo into and out of the brain, to deliver drugs to targeted brain areas. Doing this requires understanding where and how abundantly drug-transporter genes are expressed in the brain, which is “not a trivial task for individual labs,” she says. “I had a graduate student work for almost a whole year to map the brain expression pattern of a single transporter gene discovered in our lab.” Now, she can simply type a gene's name into the site and click a button to view its expression profile.

    Jones says the institute is now embarking on a project to create a similar gene-expression map for the human cortex; he and his colleagues have already begun talks with brain banks and neurosurgeons. “It's going to be more of an experimental venture,” says Jones, “because we will have no control over the genetic background and the environmental factors behind the samples we end up analyzing.”

    Still, he believes the new project will provide important insights into genetic drivers of the overall structure of the cortex as well as how developmental abnormalities unfold. For example, Karen Berman, a psychiatrist at the National Institute of Mental Health in Bethesda, Maryland, plans to use the current mouse atlas as well as the planned human cortex map to home in on the specific gene defects that cause Williams syndrome, a developmental disorder she's studied for several years.

    “This will also help us to look at interactions between genes of interest and how their effects develop in the brain over time,” she says. “That could help us intervene before the disorder sets in.” Researchers say the atlas will definitely quicken the development of such interventions and prove that Allen spent his money wisely.

  14. PASCOS 2006

    Neutrino Physics Probes Mysteries of 'Flavor' and Origins

    1. Tom Siegfried
    1. Tom Siegfried is a writer in Los Angeles, California.

    Fifty years ago, Fred Reines and collaborators reported (in Science) the first confirmed detection of the elusive neutrino. At the meeting,* physicists marked the anniversary with a flood of new and anticipated neutrino data offering insights into particle physics and astronomy.

    Neutrinos are produced abundantly throughout the cosmos. They are generated by cosmic rays striking Earth's atmosphere, by nuclear fusion reactions in the cores of stars, and in supernova explosions and other energetic cosmic processes. There are three known varieties, or “flavors,” of neutrinos, associated with the electron, muon, and tau particles (collectively known as leptons).

    As they propagate through space, neutrinos travel not as pure flavors but rather as mixtures, sort of like a vanilla-chocolate swirl. Neutrinos beginning a journey as primarily one flavor may end up being detected as another. Such identity switches, or “flavor oscillations,” have been confirmed by several neutrino experiments, most recently by an international collaboration known as MINOS.

    The MINOS detector, housed in an iron mine in Soudan, Minnesota, measures neutrinos produced at the Fermi National Accelerator Laboratory, 735 kilometers away in Batavia, Illinois. The latest results confirm the disappearance of muon neutrinos in transit, suggesting a switch of flavors on the way, said Donna Naples of the University of Pittsburgh in Pennsylvania, reporting at PASCOS for the collaboration.


    Neutrinos from this beamline in Illinois change type in transit to Minnesota.


    “We see a significant deficit” of muon neutrinos, Naples said. The Soudan detector recorded 215 events in the relevant energy range, compared with more than 300 expected if the neutrinos did not shift flavors. The collaboration's paper with the new data has been submitted for publication and is available online. Further data, Naples said, will help pin down neutrino properties more precisely, in particular the masses of the three neutrino types and the amount of mixing between the flavors.

    For astronomy, much of the current neutrino excitement focuses on understanding the more energetic neutrinos that may shed light on the nature of high-energy gamma rays in space and the origin of cosmic rays. “Cosmic rays are totally not understood. We don't know where they come from or how they are accelerated,” said neutrino physicist Francis Halzen of the University of Wisconsin, Madison.

    Because high-energy neutrinos may be produced in the same processes that create cosmic rays, several experiments are now planned or in progress to seek neutrinos at the top of the energy scale, exceeding 1 trillion electron volts (TeV). Hopes are highest for results from IceCube, a massive detector array partially completed more than a kilometer below the ice's surface at the South Pole.

    IceCube detects neutrinos, including those born in cosmic ray showers, by recording the muons produced when the neutrinos collide with atoms in the ice, and it may also be able to measure the arrival of TeV neutrinos from space. If such neutrinos are found, they could help identify the mechanism that produces high-energy gamma rays, reported Matt Kistler of Ohio State University, Columbus. Recent observations from observatories such as HESS in Namibia have revealed the existence of extremely high-energy gamma ray sources in the galaxy. It's unclear whether these high-energy rays are produced by a process involving leptons (electrons colliding with photons) or hadrons (the decay of pions). If pion decay is the cause, high-energy neutrinos would also be produced, and their detection by IceCube or other neutrino experiments could verify the pion-decay explanation, Kistler said.

    In any case, neutrino experiments now operating, under construction, or planned promise to turn what was once a hypothetical particle into a powerful astronomical tool—a prospect that Reines himself envisioned at the time of his experimental discovery, Halzen said.

    “People at the time didn't know whether [the neutrino] was a mathematical trick to fix up theory or whether it was a real particle,” Halzen said. “As soon as Reines discovered a real particle, he suggested that this was a way to do astronomy.”

    • *12th International Symposium on Particles, Strings, and Cosmology.

  15. PASCOS 2006

    A Cosmic-Scale Test for String Theory?

    1. Tom Siegfried
    1. Tom Siegfried is a writer in Los Angeles, California.

    Theorists who think nature's ultimate building blocks are vibrating strands of energy called superstrings have always had a big problem converting skeptics. One reason: The strings are far too small to see. In fact, they are supposedly so small that no conceivable microscope (or particle accelerator) could ever render them visible. But some string theorists now believe they've found a way to make superstrings observable: supersize them.

    Superstrings that were supertiny shortly after the big bang could have been stretched by the expansion of the universe to cosmic size today, Robert Myers of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada, noted at the meeting.* He described several ongoing investigations of the properties that such cosmic strings would have and how they might be detected.

    “I think it's pretty exciting,” said Princeton University astrophysicist David Spergel. “It's the potential to see physics from really high energies that we can't get any other way. It's the potential for a really exciting surprise.”

    Earlier analyses indicated that strings from the time of the universe's birth would be diluted away by the subsequent expansion or would be too unstable to survive to the current epoch. But advances in recent years have shown that's not necessarily true, Myers said. “We have scenarios where you get, in fact, a rich network of different kinds of strings generated at exit from inflation,” the brief burst of superfast expansion following the big bang, Myers said. “So we have the exciting possibility that in certain scenarios, the superstrings might appear in a network of cosmic strings.”

    Pumped up.

    Computer simulations show that cosmic inflation might have created networks of enormous “superstrings.”


    One way of detecting cosmic superstrings would rely on precise timing of the spinning rates of fast pulsars, neutron stars that emit blips of radiation at regular intervals. Small variations in those intervals might be the effect of background gravitational waves produced by cosmic string networks.

    Another promising approach, Myers said, would be to detect gravitational waves directly by means of experiments such as LIGO, the twin observatories in Louisiana and Washington state (Science, 21 April 2000, p. 420). As cosmic strings wiggle, rapid acceleration of their mass can generate powerful gravitational-wave beams. LIGO may not be sensitive enough to detect them, but a planned set of three space-based gravitational wave detectors known as LISA would be a good bet.

    Another method would exploit gravitational lensing, the power of massive objects to bend light passing nearby. The gravity of a string in space could bend light enough to split the image of a galaxy in the background, giving astronomers the impression of two identical galaxies sitting side by side.

    Finally, physicists might detect distortions imprinted by strings in the cosmic background radiation, the smooth glow of microwaves from the big bang that permeates all of space. Evidence could come from extremely precise measurements such as those expected from the European Space Agency's Planck Surveyor mission, scheduled for launch next year.

    “We have people actually working on looking for string signatures for their thesis,” said Spergel. “So maybe we'll detect them.”

    • *12th International Symposium on Particles, Strings, and Cosmology.

  16. PASCOS 2006

    Snapshots From the Meeting

    1. Tom Siegfried
    1. Tom Siegfried is a writer in Los Angeles, California.

    Shooting the moon. Recent observations of a galactic cluster collision seem to rule out modified theories of gravity for explaining dark matter (Science, 25 August, p. 1033). But Gia Dvali of New York University reports that other changes to the law of gravity over vast distances are still possible and could explain why the universe appears to be expanding at an accelerating rate.


    In fact, such modifications would produce observable variations in the orbit of the moon around Earth, Dvali pointed out. Deviations on the order of 1 millimeter would be a sign that mysterious “dark energy” is not needed to explain cosmic acceleration. Such precise measurements may be within the reach of a new generation of laser-ranging experiments, in which researchers bounce laser beams off reflectors that Apollo astronauts left on the moon.

    “It would be absolutely amazing,” Dvali said. “By looking at the moon, you can derive information about dark energy.”

    Dark possibilities. In the cosmos, invisible, unidentified “dark” matter outmasses the ordinary “baryonic” matter known on Earth by about 10 to 1. That may sound like a big difference, but to physicists it's mystifyingly close. It suggests that dark matter and ordinary matter were originally produced by related mechanisms.

    At the meeting,* physicist Leszek Roszkowski of the University of Sheffield in the U.K. discussed the possibility that both forms of matter owe their origin to Q balls, exotic objects possibly formed in the early universe. A Q ball is basically a bag of squarks, hypothetical partner particles to ordinary quarks predicted by a theory known as supersymmetry. If Q balls decayed into both ordinary matter and dark matter, it would explain why the amounts of the two forms of matter are similar.

    That idea doesn't work if the dark matter is composed of WIMPs: weakly interacting massive particles that supersymmetry also predicts. Accelerator experiments and searches for dark matter using underground detectors rule out the mass range for WIMPs predicted for Q-ball decay. But Roszkowski and collaborator Osamu Seto of the University of Sussex, U.K., calculate that those objections to the Q-ball scenario can be avoided if Q-ball decay produces another supersymmetric particle called the axino. If so, the axino's mass would be about the same as the proton's—much less than the mass predicted for WIMPs and not detectable by current underground experiments.

    “That's bad news for WIMP dark-matter searches,” said Roszkowski. But it's possible that evidence for axinos could be produced in the Large Hadron Collider, scheduled to begin operation outside Geneva next year.

    • *12th International Symposium on Particles, Strings, and Cosmology.

  17. Mining the Molecules That Made Our Mind

    1. Elizabeth Pennisi

    By comparing the human genome with those of other species, researchers are finding many genes potentially related to brain evolution—but no one is sure which ones helped shape the uniquely human brain


    In the 17 August Toronto Sun, the headline trumpeted “A single gene led to humans. …” The Independent in the United Kingdom declared “Revealed: the gene that gave us bigger brains.” And in Minnesota, newspaper readers were greeted with “Scientists ID genes that make humans smarter than chimps.” All three stories announced the discovery of a brain-related genetic difference between chimps and humans, and in all three, the newspapers got carried away.

    True, the gene featured, called HAR1F, is active in the right place at the right time during brain development to have spurred the expansion of the cerebral cortex, the center for higher cognition in people. HAR1F, which stands for “human accelerated region 1 forward” gene, also shows signs of rapid change since humans and chimps went their separate ways, suggesting that this gene conferred a survival advantage to our ancestors (Science NOW, 16 August).

    Yet HAR1F is only one of about 10 genes to emerge in the past 4 years as potentially key to the evolution of the uniquely skilled human brain. And these discoveries are but the beginning chapters of an epic evolutionary story that we are just starting to read. With the genomes of dozens of species in hand, including human, chimp, and rhesus macaque, as well as powerful bioinformatics methods for comparing and analyzing all this DNA, research into the molecular basis of human evolution has exploded. Scientists using multiple comparative genomic strategies have uncovered hundreds of genes that provide tantalizing clues about hominid evolution. And some of the most provocative finds pertain to brain evolution.

    A few researchers, for example, are delving deep into the tree of life, analyzing the genes of simple organisms such as jellyfish for genetic evidence about when a central nervous system first began to fire. But the scientists grabbing the headlines are the ones finding brain genes that have changed rapidly, duplicated, merged, or boosted their expression since our lineage split from that of chimps. Such genes are prime candidates for crafting the modern human brain.

    Many scientists find the potential to understand ourselves irresistible. A rush of comparative genomics results about human evolution are being presented at meetings, and publications are starting to stream out. “Everyone is jumping on the bandwagon,” says Bruce Lahn, a human geneticist at the University of Chicago, Illinois. Progress at identifying the DNA tweaks that distinguish people from other species should be swift. “Within a year or two, we will have a comprehensive list of genome changes to start looking at,” says Christopher A. Walsh, a neuroscientist at Harvard Medical School in Boston.

    Emphasize “start.” Despite what the headlines imply, genomic data so far offer provocative clues rather than direct answers to what caused humans to branch out from the primate tree to become walking, talking, creative beings. Tying genomic events such as a gene duplication to human evolution is a challenge, because often researchers know little about what their candidate genes do. To make a concrete connection between a genetic change and the evolution of the human brain “is a much slower process,” says Walsh.

    Positive results

    Philosophers and scientists alike have long sought to understand what makes humans unique in the animal kingdom. Among other differences, our brain is three times the size of a chimp's, with a multilayered cortex capable of doing calculus or writing plays. Much of the explanation for our braininess is encoded in the genome, but with 20,000 or so genes to choose from, geneticists have only just begun to come up with effective search strategies to highlight the crucial ones.

    One method is to seek genes that natural selection has favored in humans but not in our close cousins, the chimps. Genes that have experienced such positive selection have evolved more quickly than the background rate of evolution. Researchers can spot these speedy evolvers by comparing genes in humans and other species. For example, a gene called FOXP2 is mutated in a family with a severe language disorder; 6 years ago, a team led by Svante Pääbo at the Max Planck Institute (MPI) for Evolutionary Anthropology in Leipzig, Germany, found that the human protein encoded by the gene differed from the chimp's version in two of its 715 amino acids. Given the amount of time since chimp and human ancestors diverged, no such changes were expected. The protein's alteration may have helped humans develop the fine motor control needed to mouth words, Pääbo suggested (Science NOW, 14 August 2002).

    Pääbo looked for evidence of positive selection on one gene, but Lahn and his colleagues have taken a broader sweep across the genome. They examined 214 genes involved in brain disorders or active only in the brain, comparing humans and macaques and, in a separate analysis, mice and rats. For each pair, the researchers counted both the changes in a gene's bases that made no difference to the encoded protein—considered the background rate of evolution—and the changes that altered an amino acid. The higher the proportion of protein-altering changes, the faster the gene had evolved.

    Overall, Lahn's team found that evolution in the primate genes was trotting along about 37% faster than in the rodent genes. Among just the two primates, human brain genes, particularly those involved in development, were the sprinters, outpacing the number of changes in the equivalent genes in the macaque. Two genes showed particularly strong evidence of selection: ASPM and microcephalin, mutations in which underlie microcephaly, a genetic disorder that results in a diminutive brain, Lahn and his colleagues reported at the end of 2004.

    The “single gene” heralded last month by the Toronto Sun emerged from an even broader search for positive selection—one that covered entire genomes. David Haussler and Katherine Pollard, both bioinformaticists at the University of California (UC), Santa Cruz, and their colleagues developed a sophisticated computer program that matched up the sequenced genomes of multiple species, including chimp, human, rodent, dog, and chicken. The scan, reported online 16 August in Nature, picked out 49 regions in which most genomes shared the same sequence but the human DNA was considerably different, indicating it had changed quite a bit since roughly 6 million years ago when our ancestors split off from other primates.

    HAR1F, for example, is part of a DNA sequence that, compared with the other species, has experienced 18 base changes over its 118-base stretch; less than one such base change would be expected over those 6 million years based on the accepted mutation rate for human DNA.

    HAR1F codes for an RNA that is never translated into a protein. UC Santa Cruz cell biologist Sofie Salama, working with Pierre Vanderhaegen, a neuroscientist at the University of Brussels in Belgium, has found that the gene is very active in the developing brains of 2-month- to 5-month-old human embryos. The RNA exists in cells that organize the human cerebral cortex into layers, and because RNA can play a role in gene regulation, Pollard and her colleagues suspect that HAR1F's RNA helps control the production of proteins involved in cortex development and that a change in its regulatory abilities prompted a larger, more complex cortex.

    Key expression

    That HAR1F's possible role in differentiating the human brain from the chimp's involves regulating other genes should not be surprising. There are relatively few differences in the proteins encoded by the two species' genes—FOXP2 being one exception—and researchers have long suspected that changes in where, when, and how much a gene is expressed may instead be the real key to our uniqueness.

    To understand the evolutionary importance of changes in gene activity, some genomicists have simply looked for genes that are more active in one species than in another. Others have sought out extra copies of genes and other DNA that might also affect gene expression.

    Pääbo pioneered the former approach in 2002 when he, Wolfgang Enard, also of the MPI for Evolutionary Anthropology, and their colleagues compared the overall amount of messenger RNA (mRNA) produced in various human, chimp, orangutan, and macaque tissues, including gray matter. “This was the first attempt to use high-throughput technology to address this very important issue,” says Xun Gu, a molecular evolutionist at Iowa State University in Ames.

    Making room.

    Skulls (top to bottom) of the human, chimp, orangutan, and macaque housed brains weighing 1350, 400, 400, and 100 grams, respectively.

    CREDIT: C. A. WALSH ET AL., PLoS BIOLOGY 3, 50 (2004)

    Pääbo's crew used a microarray to detect the mRNA concentration of 12,000 genes. Overall, Pääbo and his colleagues found that genes in the brain tend to have undergone more changes in expression—increases or decreases relative to the chimp—compared with genes in other organs since the two lineages diverged. But they saw little such gene-expression variation between brain and other organs when comparing the chimp with other primates. The result suggested that altered gene activity played a role in distinguishing the human brain from those of its cousins, Pääbo and his colleagues reported (Science, 12 April 2002, pp. 233, 340).

    Gu's team later took a closer look at Pääbo's data and found a trend in the human versus chimp gene-expression changes in brain. The human brain genes typically had greater activity than their chimp counterparts. “We demonstrated that evolutionary changes may be mainly caused by a set of genes with increased rather than decreased expression,” says Gu.

    Mario Cáceres of the Salk Institute for Biological Studies in San Diego, California, and his colleagues have reached a similar conclusion. They've used mRNA assays to compare gene expression in the cerebral cortex of humans, chimps, and macaques, finding 91 genes that had changed their activity level since chimps and humans went their separate ways evolutionarily. About 80 of those became more active in the human brain, they noted in 2003.

    Double the dose

    None of these original expression studies tried to pin down the cause of the increased gene activity. But studies by James Sikela, a genome scientist at the University of Colorado Health Sciences Center in Aurora, Evan Eichler of the University of Washington, Seattle, and others offer one potential explanation: The human genome has plenty of extra copies of genes and other DNA sequences.

    Genetic reduction.

    Several genes implicated in human evolution are linked to microcephaly, in which the cortex is smaller (left panels) than in the normal brain (right panels).

    CREDIT: J. BOND ET AL., NATURE GENETICS 37, 353 (2005)

    In 2004, Sikela, Jonathan Pollack of Stanford University in Palo Alto, California, and their colleagues did a pioneering full-genome scan of five primates, including humans, and found 1005 genes that had increased or decreased in number in one of the species after it split off from the ancestral primate tree. Of those, 134 had undergone duplication in the human lineage.

    The next year, Eichler and his colleagues completed genomewide catalogs in humans and chimps of so-called segmental duplications, pieces of copied DNA that range in size from a few thousand base pairs to large sections of chromosomes. Eichler's group found that the genomes have separate duplication histories. Overall, duplicated segments in the chimp outnumber those in human, but in humans, there's a greater variety. More genes appear multiple times in our DNA than in chimp DNA. By one count, humans have extra copies of 177 full and partial genes. Each of those genes is a candidate for having altered expression patterns, and Eichler has been investigating the function and activity of several duplicated regions.

    The availability of multiple copies of a gene provides more than just a way for a gene to produce additional amounts of its protein or RNA. These extra copies are material with which evolution can play, perhaps dedicating the activity of one copy of a gene to a subset of cells, for example. “The likelihood of innovation is much higher,” says Eichler.

    In some cases, such innovation could come from parts of a gene that proliferate instead of a whole gene undergoing duplication. Sikela, Magdalena Popesco, a molecular geneticist at the University of Colorado Health Sciences Center, and their colleagues have recently discovered a stretch of DNA, consisting of just two exons from a gene, that has increased from a single copy in mice to 212 copies in humans and may have evolved crucial new functions in the process.

    Sikela's previous work had identified 134 genes that duplicated primarily after human ancestors split off from other primates. Popesco, Sikela, Gerald Wyckoff of the University of Missouri, Kansas City, and their colleagues compared the sequences of these genes in primates, mice, and rats, and one gene in particular stood out: MGC8902. Humans have 49 copies of this gene, whereas chimps have 10 and macaques have four, the group reported in the 1 September issue of Science (p. 1304).

    No one knows exactly what the gene does, but a closer look revealed that it primarily consists of six copies of a two-exon segment that encodes a protein domain—a peptide that has a particular fold or twist—called DUF1220. This domain also has no known function, but the DNA for it is sprinkled liberally throughout the human genome, with the two-exon segment showing up in about two dozen different genes. These genes are active throughout the body, but Popesco notes that they are particularly busy in neurons in the cortex, suggesting that the domain is important in the brain. “The work highlights the potential importance of duplications in the emergence of novel genes within the hominoid lineage,” says Eichler.

    Another intriguing human gene apparently arose when part of one gene replaced part of another. While comparing the human and chimp genomes, glycobiologist Ajit Varki of UC San Diego had noticed that the front end of a gene called SIGLEC-11 was quite different between the two species, whereas the back end was virtually identical. The front end of the human version turned out to come from a gene called SIGLEC-16, which appeared as a duplicated copy of SIGLEC-11 about 15 million years ago. SIGLEC-16 is now dysfunctional in both the human and the chimp, suggesting that it lost function before the two lineages split.

    However, at some point in hominid evolution, part of SIGLEC-16 must have replaced the front part of the DNA of SIGLEC-11, creating a “brand-new, ‘human specific’ protein,” says Varki. The gene for this protein now turns on in the human brain, specifically in microglia, cells known to be important to the growth of nerve cells (Science, 9 September 2005, p. 1693). The finding is “tantalizing, but of uncertain significance,” Varki points out.

    Finding function

    Varki's measured summary of the SIGLEC-11 story would be apt for many of these headline-grabbing genes. Whether it's FOXP2, HAR1F, or the DUF1220 domain, “there's a tendency for people to think this [gene] is it”—the explanation for why people are unique, says Pääbo. For his part, Varki warns that the quest to understand human evolution is too brain-centric. “We are also defined by differences in our reproductive, musculoskeletal, and immune systems, as well as by our skin,” he points out.

    Moreover, almost every gene linked to human brain evolution comes with questions about the strategies used to establish the connection. For example, some scientists are not convinced that the methods used to detect positive selection in a gene are reliable or that the results will hold up as more genome sequences are added to these analyses. Genes that appear to change rapidly may still fall within the range of normal variation, for example. Moreover, rapid change in a gene or extra copies “does not mean that these genes are [all] important during the evolution of the human brain,” says Gu.

    Some scientists also question the relevance of certain findings, such as a protein-coding gene's mRNA being produced in a brain region. After all, not all mRNAs are translated into proteins. “I wouldn't put much stock in claims about genes being expressed or not expressed in the brain unless there's direct experimental evidence” that the genes' proteins are actually made, says Todd Preuss, a neuroscientist at Emory University in Atlanta, Georgia.

    Eichler illustrates the challenges the field faces when he complains about spending the past 5 years trying in vain to figure out the role of a promising family of genes that emerged from his survey of segmental duplications. The family lies within one of the most rapidly evolving duplicated regions. The genes are about 12 million years old and have undergone rapid evolution in humans, chimps, and gorillas. But the genes don't exist in mice, and the human copies and their surrounding DNA are too similar to one another to track individually in disease studies.

    Pääbo shares Eichler's frustration at the field being unable to move more swiftly beyond highlighting candidate brain-evolution genes. “It's getting a little stale,” he admits, “to say, ‘I have another case of positive selection.’ The challenge is to link the evidence of positive selection to brain function.”

    That may be starting to happen in a few select cases. Take the microcephaly genes, ASPM and others, which may provide clues about how the human cortex got so big. Wieland Huttner of the MPI of Molecular Cell Biology and Genetics in Dresden has demonstrated how the amount of ASPM affects brain growth in mouse embryos. He studied mouse embryonic neuroepithelial cells, the stem cells that give rise to neurons. The longer these stem cells remain undifferentiated, the more they divide and the more neurons that ultimately form.

    In anticipation of cell division, ASPM concentrates at opposite ends of the cell and helps organize the microtubules that pull duplicated chromosomes apart. When Huttner reduced the cell's cache of ASPM, cells no longer divided symmetrically. Instead of forming two daughter stem cells, one of those “daughters” specialized as a neuron, short-circuiting the expansion of the cortex, his team reported in the 5 July Proceedings of the National Academy of Sciences.

    Researchers have found that two other genes associated with human microcephaly are active during cell division as well. Last year, Jacquelyn Bond of the University of Leeds in the U.K., C. Geoffrey Woods of the University of Cambridge, and their colleagues used mice to show that cyclin-dependent kinase 5 regulatory protein (CDK5RAP2) and centromere-associated protein J (CENPJ) are active in the same cells affected by ASPM, the embryonic neuroepithelial cells of the frontal cortex. Both the CENPJ and CDK5RAP2 proteins colocate with ASPM during cell division, Bond, Woods, and their colleagues reported. Earlier this year, Lahn's team showed that the genes for these two proteins have been evolving rapidly in the human lineage, similar to ASPM.

    Another somewhat developed scenario for human brain evolution centers on the DNA that regulates a gene called prodynorphin (PDYN). The protein encoded by the gene is a precursor for opiate compounds important in perception, pain, social behavior, and learning and memory. In rats, for example, increased production of prodynorphin in the brain translates into higher pain thresholds. The gene's activity is under the influence of a 68-base DNA sequence. Greg Wray, an evolutionary biologist at Duke University in Durham, North Carolina, and his colleagues have found that humans have up to four copies of this regulatory stretch, whereas monkeys and great apes have only a single copy. Moreover, the researchers have found five base changes in this regulatory DNA that have occurred since the split between chimps and humans, a sign of accelerated evolution.

    To each their own.

    Speciation among primates was aided by the duplication of DNA segments that helped differentiate genomes. Stained chromosomes reveal that whereas macaques and baboons have one copy of this segment (red), humans have four, and chimps and bonobos have hundreds.


    In the lab, nerve cells with the chimp version of the regulatory sequence make less prodynorphin, Wray's team reported in the December 2005 PLoS Biology. That same regulatory DNA from the rhesus macaque, gorilla, and bonobo also failed to stimulate adequate prodynorphin mRNA production in human nerve cells. It seems that “humans have evolved to make more of this key brain peptide,” Wray concludes. His team is now reconstructing the ancestral sequence of this piece of DNA.

    The prodynorphin example is one of the most advanced, but even this evolutionary story is unfinished, because no one knows exactly what effect the extra prodynorphin has in the human brain, says Wray. And most other examples are in even earlier stages. “In virtually all cases, the link of genes or genomic patterns with human brain evolution is only tentative and based on suggestive evidence,” says Lahn. “The situation may not change anytime soon due to the complexity of the questions and because we can't redo the experiment that evolution did in many millions of years.” Headline writers, pay heed.

  18. The Evolution of Function & Form

    A special two-page feature — available as individually downloadable images (suitable for slides) or in PDF form — explores the insights on the evolution of organs such as the brain, eye, and heart, and on pattern formation in general, that are emerging from modern genetics and genomics.

    Plate 1: The brain.
    Plate 2: The eye.
    Plate 3: The heart.
