News this Week

Science  19 Oct 2007:
Vol. 318, Issue 5849, pp. 372


    Nobel Peace Prize Won by Host of Scientists and One Crusader

    Richard A. Kerr and Eli Kintisch*
    *With reporting by Pallava Bagla.

    The announcement came as a shock to Robert Watson. “It would never have crossed my mind that a scientific assessment process would be named in a Nobel Peace Prize,” he says. “If anyone had told me that could happen, I would have said, ‘You have to be smoking something.’” But stone-cold sober the Norwegian Nobel Committee was when it awarded the prize to the United Nations-sponsored Intergovernmental Panel on Climate Change (IPCC)—which Watson chaired from 1997 to 2002—and to Al Gore for their “efforts to build up and disseminate greater knowledge about man-made climate change” because such change may increase “the danger of violent conflicts and wars, within and between states.”

    Winners all.

    IPCC chair Rajendra Kumar Pachauri (left), representing several thousand scientists, and Al Gore share the Nobel Peace Prize for creating and spreading knowledge of climate change.


    The odd-couple winners are a good match, most scientists believe. On the one hand, there's the organization of thousands of unpaid, nearly anonymous researchers meticulously assessing the state of climate science; on the other, a former politician using that science to underpin his media-savvy campaign to save the world from climate catastrophe. “The combination of IPCC, with its very careful examination of scientific knowledge, and Al Gore's ability to bring the message to politicians and the public” has worked well, says Bert Bolin, the first chair of IPCC. Not that their work is done. There's still the matter of steeling the public's will to meet the costs of countering the threat.

    On the IPCC side, the winners are legion. “This is an honor that goes to all the scientists and authors who have contributed to the work of the IPCC,” says Indian engineer and economist Rajendra Kumar Pachauri, current IPCC chair. The award recognizes a vast amount of unpaid hard work on their part, says geoscientist Michael Oppenheimer of Princeton University, who has served IPCC in various capacities since the United Nations established the body in 1988. “There's an incredible amount of time involved,” he says, flying to meetings in every corner of the world, hammering out consensus, responding to thousands of reviews, and extracting government approval word by word for three different working groups for each report (Science, 9 February, p. 754). “There is a price,” says Oppenheimer. “People burn out.”

    Working against burnout is “a sense of community responsibility,” says Oppenheimer. “A free society provides the space so you can do science” and create knowledge. In return, he says, climate researchers serve on IPCC to distill that knowledge in a credible way for policymakers. Adds Watson: “They want informed political decisions. If they want their science to be part of informed policymaking, the IPCC is the vehicle.” And then there is self-interest. “I get more out of IPCC than I put in,” says Oppenheimer. “IPCC meetings are very useful.” They force a critical analysis of a scientist's own specialty and provide exposure to the top people in other fields, scientists say.

    The other winner of the prize is far more familiar to the public. But Gore has also been well-known to the scientific community for decades. Scientists say few politicians have relied upon or involved more researchers in their policy work than Gore. “My relationship with Al Gore was born in combat,” says climate researcher Stephen Schneider of Stanford University in Palo Alto, California, who recalls a 1981 hearing then-representative Gore held in which Schneider opposed a move by the Reagan Administration to cut climate research. “We were soldiers in the same war … for 25 years.”

    Climate researchers have known Gore as the rare policymaker who brings scientists in—and listens. When he visited Lamont-Doherty Earth Observatory in Palisades, New York, as a senator, recalls geochemist Wallace Broecker, “he said, ‘I don't want a tour. I just want to sit around a table with some of your climate people.’” While Gore was writing his 1992 book Earth in the Balance, recalls atmospheric chemist Michael McElroy of Harvard University, the then-senator spent 2 hours on the phone nailing down a “pretty subtle chemical point” about ocean acidification. “He came into these issues with a visceral feel that this was an important issue,” says McElroy, “like the Vietnam War had been when he was a young man.”

    Schneider thinks the award to both Gore and IPCC recognizes their dual roles in promoting climate science. “We provide the credibility the Gores and Blairs and Schwarzeneggers need,” he says of the panel. And Gore's treatment of that science? “He did a pretty good job of communicating complex scientific information to a lay audience,” says McElroy of Gore's film An Inconvenient Truth. “If it was a scientist doing it, it would be different. But I don't think there were any glaring errors.” The publicity, Broecker says, accomplished far more than IPCC's scientists could have done on their own: “Gore put it in a way that people listened. We're much further along to meaningful action [to cut emissions] because of him.”

    IPCC led the way, Watson says. Its reports forging increasingly strong links between human activity and global warming were instrumental in moving nations toward drafting and signing the Kyoto Protocol for cutting greenhouse gas emissions, he says. But more recently, says Oppenheimer, other forces have come into play: high oil prices and a new energy crisis; events ascribable to global warming, such as the dwindling of Arctic sea ice; and weather events such as Hurricane Katrina that are at least analogs of weather in a greenhouse world.

    And then “along comes Al Gore,” says Oppenheimer. The end result has been an explosion of media attention and, in the United States, unprecedented political debate and even emission-cutting legislation. But it's not over, warns political communications researcher Matthew Nisbet of American University in Washington, D.C. IPCC and Gore may have raised awareness broadly and stoked concern among the already environmentally attentive, but by Nisbet's reading of the polls, broad public support for emissions cuts that will hurt has yet to materialize. Activists, he says, need a new message.


    Chemistry Laureate Pioneered New School of Thought

    Robert F. Service*
    *With reporting by Gretchen Vogel in Berlin, Germany.

    Now that's a birthday present! Instead of receiving the random necktie on his 71st birthday last week, Gerhard Ertl was awarded this year's Nobel Prize in chemistry. Ertl, a physical chemist at the Fritz Haber Institute of the Max Planck Society in Berlin, Germany, won for developing methods that reveal how chemical reactions take place on metals and other surfaces. Those techniques have led to results as diverse as new catalysts that remove poisonous carbon monoxide from car exhaust and an understanding of how stratospheric ice crystals supercharge chlorine's ability to destroy the planet's protective ozone layer.

    “This is really well deserved,” says Ralph Nuzzo, a surface chemist at the University of Illinois, Urbana-Champaign. “Ertl is a titan.” John Vickerman, a chemist at the University of Manchester in the U.K., agrees. “The reactions occurring at surfaces are very difficult to probe because there are so few molecules involved, and they frequently occur very rapidly,” he says. “Furthermore, the scientist has to distinguish what is happening in a layer one molecule thick from the rest of the solid. Ertl developed very sophisticated physical tools to identify the chemistry occurring at the surface.” The Royal Swedish Academy of Sciences, which awards the Nobel Prizes, says Ertl was selected not for developing a particular tool, technique, or discovery, as is often the case, but because “he established an experimental school of thought for the entire discipline.”

    One early example was in figuring out how iron-based catalysts convert hydrogen and nitrogen into ammonia, a critical industrial process for making fertilizers. This conversion, known as the Haber-Bosch process, combines dinitrogen molecules from the air with dihydrogen molecules. Earlier studies had revealed that the slowest step in the process was one in which nitrogen molecules adsorb onto iron particles in a manner that primes them for combining with hydrogen. Researchers didn't know whether the tightly bonded nitrogen molecules reacted with hydrogen intact or whether they broke apart first. Using spectroscopic techniques and other tools, Ertl revealed the complete seven-step process whereby nitrogen and hydrogen molecules land on an iron surface, break apart, and react to form ammonia.

    Many happy returns.

    After Gerhard Ertl won the Nobel on his birthday, colleagues toasted him with champagne and German pretzels.


    After receiving the announcement last Wednesday, about 200 of Ertl's colleagues toasted him with champagne and German pretzels on the shaded lawn of the Fritz Haber Institute. After Ertl fielded a few questions from TV reporters, the crowd broke out in a rousing round of “Happy Birthday to You” (in English).

    In an earlier phone interview with Science, Ertl was quick to offer credit to fellow researchers. His field, he says, was propelled by the parallel development of many surface characterization techniques. And, he adds, many scientists were adept at applying them—including Gabor Somorjai of the University of California, Berkeley, with whom he shared the 1998 Wolf Prize in Chemistry for their work in surface science. “I was a little bit disappointed he didn't share [the Nobel Prize] with me,” Ertl says. Last week, several chemistry bloggers went further, arguing that Somorjai deserved recognition for his vital role in laying the foundations of surface science.

    For his part, Somorjai says simply that he does not understand how award decisions are made. But he notes that in the 1980s, he began steering away from ultrahigh-vacuum surface science to study reactions at solid-liquid interfaces, among other things. By contrast, Somorjai says, “Ertl stayed in there all through his life.”


    Three Economists Lauded for Theory That Helps the Invisible Hand

    Adrian Cho

    Scottish philosopher Adam Smith asserted that when everyone acts out of self-interest, everyone will eventually benefit, as if a benevolent “invisible hand” molds the economy. Economists now know that view is naive: In some situations, rational people will act in ways that leave everybody a loser. But such dreary outcomes can sometimes be avoided, thanks to work that earned three Americans the Nobel Prize in economics.

    Leonid Hurwicz of the University of Minnesota, Twin Cities, Eric Maskin of the Institute for Advanced Study in Princeton, New Jersey, and Roger Myerson of the University of Chicago, Illinois, developed “mechanism design theory.” The theory aims to find schemes, or “mechanisms,” that ensure that acting in self-interest will indeed lead to benefits for all. Today, its applications range from how best to auction broadcast rights and other public resources to contract negotiations and elections.

    Everybody wins.

    Leonid Hurwicz, Eric Maskin, and Roger Myerson (left to right) have won the Nobel Prize in economics.


    “At first, I thought it was some kind of a joke,” says Hurwicz, of hearing of his award. At 90, Hurwicz is the oldest person to win a Nobel. He says colleagues had told him that he might win, “but not in recent years.” The prize is well-deserved, others say. “I was riding in the car [and discussing the prize] with somebody yesterday, and these were the three names that came up,” says W. Bentley MacLeod, an economist at Columbia University.

    Mechanism design theory starts with the recognition that unbridled self-interest doesn't always lead to the greater good. For example, if the people of a town were asked to chip in to build a bridge, each person would benefit by underestimating his or her share and letting others bear the cost. So for lack of funds, the bridge would never get built. Such a self-defeating outcome—stable because no individual can do better by changing strategy alone—is an example of a Nash equilibrium.

    In the 1960s, Hurwicz pioneered the study of how to avoid such dead ends by fiddling with the rules of an economic or social interaction so that the most beneficial state and the inevitable equilibrium state are one and the same. “It's a little Machiavellian,” says Gabrielle Demange of the Paris School of Economics. “You design a game so that in the end the Nash equilibrium comes out to be what you want.” For example, each person could be required to pay what others think the bridge is worth, thus eliminating the incentive to lie.
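    The bridge example above can be sketched as a toy game. This is only an illustration of the idea, not any laureate's actual model; all names, valuations, and the specific payment rules are invented. Under a naive rule where each resident pays what he or she reports, shading the report is profitable; under a rule in the spirit of the one described above, where a resident's bill depends only on what the others say, lying gains nothing.

```python
# Toy public-goods game: should a town build a bridge?
# All numbers are invented for illustration.

COST = 90.0
TRUE_VALUES = [40.0, 35.0, 30.0]  # each resident's true benefit from the bridge

def build(reports):
    """Build the bridge only if reported benefits cover its cost."""
    return sum(reports) >= COST

def payoff_pay_own_report(i, reports):
    """Naive rule: each resident pays what he or she reported."""
    if not build(reports):
        return 0.0
    return TRUE_VALUES[i] - reports[i]

def payoff_pay_others(i, reports):
    """Mechanism-style rule: resident i pays the average of the OTHERS'
    reports, so i's own report cannot lower i's bill."""
    if not build(reports):
        return 0.0
    others = [r for j, r in enumerate(reports) if j != i]
    return TRUE_VALUES[i] - sum(others) / len(others)

honest = TRUE_VALUES[:]
mild_shade = [30.0] + TRUE_VALUES[1:]   # resident 0 underreports, bridge still built
assert build(honest) and build(mild_shade)

# Naive rule: shading cuts resident 0's bill, so lying pays (free riding).
assert payoff_pay_own_report(0, mild_shade) > payoff_pay_own_report(0, honest)

# "Pay what others say" rule: the same lie changes nothing for resident 0,
# so the incentive to misreport disappears.
assert payoff_pay_others(0, mild_shade) == payoff_pay_others(0, honest)
```

    The design choice doing the work is that under the second rule a resident's payment is decoupled from his or her own report, which is the qualitative point of the example in the text.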

    Maskin, 57, and Myerson, 56, expanded on Hurwicz's work. In 1977, Maskin developed a criterion for determining just when it's possible to find rules that will guide self-interested participants to the desired end. Starting in the late 1970s, Myerson showed that whenever a mechanism exists, it is also possible to find one that gives participants an incentive to tell the truth, an insight that makes it much easier to devise practical mechanisms.

    Relying heavily on game theory, the laureates' work has been largely abstract and formal. “My methodology is to invent simple little worlds in which there is just a bit that we don't understand and can study,” Myerson says. Nevertheless, the theory may play a role in confronting perhaps the most complex and pressing problem facing humanity today, climate change, by helping to set up incentives that encourage consumers and countries to minimize greenhouse gas emissions. “Mechanism design should definitely be pertinent to the problem,” Maskin says. “But first we have to decide exactly what we're trying to accomplish.”


    Natural Selection, Not Chance, Paints the Desert Landscape

    Elizabeth Pennisi

    Desert snow, a flower that lives in the Mojave Desert, has a colorful history—literally and figuratively. The five-petaled Linanthus parryae comes in purplish-blue and white varieties; it sometimes carpets dusty landscapes in a single color and sometimes in a blue-white mosaic. Sixty years ago, studies of these patterns provided key support for a powerful evolutionary theory. Now, two evolutionary biologists have found that the theory doesn't hold in this species.

    At issue is the relative role of randomness in genetic differentiation within a population. Did a chance increase in the frequency of a new version of a gene—for example, one that tinted desert snow blue—plus the luck of the draw cause blue blooms to flourish in some places and not others? Such serendipity is called genetic drift; the contrasting idea is that fitness in a particular environment—natural selection, not chance—is responsible for the successful spread and distribution of these blue and white flowers.

    Researchers began studying Linanthus in the early 1940s, most notably systematist Carl Epling and evolutionary biologists Theodosius Dobzhansky and Sewall Wright. Epling and Dobzhansky, and later Wright, attributed the flowers' distribution to genetic drift: Blue flower seeds happened to land on the far side of a particular ravine, for example, and spread, isolated from the white ones by the forbidding habitat at the bottom of the ravine.

    Epling later decided that natural selection was important, but Wright, based on his continued work with this species, concluded that genetic drift was key. He proposed that the larger a population, the more likely new versions of a particular gene would take hold in a subset of that population, setting the stage for some subsets to head in different evolutionary directions. He called this idea the shifting balance theory. That work has been cited more than 1400 times. Nonetheless, evolutionary biologists have been arguing ever since about how right Wright was.

    In 1988, Douglas Schemske of Michigan State University in East Lansing and Paulette Bierzychudek of Lewis & Clark College in Portland, Oregon, decided to weigh in on the controversy. “Because none of these studies had directly estimated natural selection, we thought it was necessary to mount a long-term field project to resolve the dispute,” Schemske recalls. That year, they started tracking the distribution and fitness of Linanthus.

    They reported in 2001 that natural selection could be intense, playing a larger role in shaping the distribution of flower color than Wright realized. Now, in an early online release of Evolution, Schemske and Bierzychudek have pinpointed strong environmental differences that likely keep blue flowers to one side of the ravine and white flowers to the other. The work “provides a very nice historical perspective on this key system, one that has crept into a lot of textbooks,” notes evolutionary biologist Michael Lynch of Indiana University, Bloomington. “They clearly don't come down on the side of Wright.”

    Desert blooms.

    For decades, researchers have debated why proportions of white and blue Linanthus parryae (top) vary across arid landscapes.


    Schemske and Bierzychudek focused on two 500-meter-long swaths along a 25-meter-wide ravine with blue flowers on the west side and white ones on the east. Over 7 years, they counted the blue and white blossoms and noted changes in the distribution of the two colors. They looked at the distribution of allozymes—different versions of a given protein—in flowers on both sides of the ravine. In addition, they planted some white-flower seeds on the west side and blue-flower seeds on the east and vice versa, monitoring seed production in these experimental plots. Because one year was quite wet and another quite dry, the researchers were able to assess the two colored flowers' fitness relative to precipitation. They also analyzed the makeup of the soil and plant communities on both sides of the ravine, finding big differences in both. “It was rigorous fieldwork and careful analysis, work that addresses important questions with exceptional clarity,” says plant population biologist Vincent Eckhart of Grinnell College in Iowa.

    The sides were more than 95% blue or white. But the distribution of the allozymes did not parallel that of the flower color. Had genetic drift caused the color pattern, the distribution of at least some allozymes should have been skewed as well, Schemske and Bierzychudek note. In the seed-transplant studies, each color flower typically did best on its own turf, indicating that selection played a role. “Our data strongly suggest that it's no accident that there are only blue survivors on the west side and only white survivors on the east side,” says Bierzychudek.
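    The logic of the reciprocal-transplant test can be sketched in a few lines. The seed counts below are invented placeholders, not the study's data; the point is only the inference: if each color sets more seed on its home side than on the far side, selection, not drift, is the better explanation for the color pattern.

```python
# Toy reciprocal-transplant comparison. All numbers are invented.

# Hypothetical mean seeds per plant for each color grown on each side
# of the ravine (blue is native to the west side, white to the east).
seed_output = {
    ("blue",  "west"): 12.0,
    ("blue",  "east"):  4.0,
    ("white", "west"):  5.0,
    ("white", "east"): 11.0,
}

def relative_fitness(color, home_side, away_side):
    """Fitness at home divided by fitness away; >1 suggests local adaptation."""
    return seed_output[(color, home_side)] / seed_output[(color, away_side)]

blue_advantage = relative_fitness("blue", "west", "east")
white_advantage = relative_fitness("white", "east", "west")

# Each color does best on its own turf: the signature of selection
# that pure genetic drift would not produce.
assert blue_advantage > 1.0 and white_advantage > 1.0
```

    With real field data, the same comparison would be run per plot and year, with a statistical test on the home-versus-away difference rather than a bare ratio.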

    Furthermore, the soil and community composition of the two sides of the ravine were different—one side had a much higher proportion of creosote bushes, for example—providing strong evidence of environmental differences that could favor one flower color over another.

    “The study shows the unimportance of drift in Linanthus,” says evolutionary biologist Masatoshi Nei of Pennsylvania State University in State College. “In this sense, [the] finding shakes the ground of the shifting balance theory.” But he is cautious about making generalizations, given that other studies suggest otherwise: “The relative importance of selection and drift depends on the genes and populations studied.”


    Coastal Artifacts Suggest Early Beginnings for Modern Behavior

    Ann Gibbons

    Modern humans first appear in the fossil record of Africa between 160,000 and 195,000 years ago, with skulls and bones that are virtually indistinguishable from ours. But looking like us doesn't necessarily mean that they acted like us. Indeed, researchers have debated intensely about when Homo sapiens began to act sapient by producing complex tools and manipulating symbols.

    Now, an international team of researchers says that some key elements of modern behavior were in place by 164,000 years ago, pushing back the appearance of some of these activities by 25,000 to 40,000 years. The team found complex stone bladelets and ground red pigment—advances usually seen as hallmarks of modern behavior—coupled with the shells of mussels, abalone, and other invertebrates in a cave in South Africa. These ancient clambakes are the earliest evidence of humans including marine resources in their diet, according to a report in this week's issue of Nature.

    Not everyone agrees that the artifacts add up to a major cognitive shift. But to paleoanthropologists such as Sally McBrearty of the University of Connecticut, Storrs, the package provides “strong evidence” that these people were manipulating symbols. That “supports the gradual rather than sudden or rapid accumulation of more complex behaviors,” adds Alison Brooks of George Washington University in Washington, D.C.

    The team found the shells, tools, and pieces of red ochre cemented in the wall of a cave at Pinnacle Point on the Cape of South Africa, on the coast of the Indian Ocean. Using uranium series and optically stimulated luminescence dating, the team dated the sediments to about 164,000 years, during a glacial period that left Africa cool and dry. These humans might have started to eat marine resources as a “famine food” because of a harsh environment, says team leader Curtis Marean of Arizona State University's Institute of Human Origins in Tempe.

    Room with a view.

    Early Homo sapiens ate shellfish and worked with ochre and stone tools (inset) in this South African cave.


    Although the team found no human bones, the ancient people did leave behind a trail of stone flakes that the team identifies as bladelets, small points used by more recent humans as advanced projectile points. If so, this would push back the appearance of true bladelets by at least 90,000 years. Other researchers caution, however, that the points may have been made by accident rather than on purpose. The pieces of red ochre were worn down, suggesting that these people were using ochre paste as glue to make complex tools or perhaps even as body paint. Says Marean: “You put that dietary, technological, and cultural package together, and all of a sudden it looks like archaeological sites from 2000 years ago.”

    But using “little bits of red ochre” pales in comparison with the advances that appear 50,000 years ago in Europe, when humans began to draw animals, shape beads, and bury their dead in elaborate graves—changes that enhanced reproduction and are linked to dramatic population expansions, says paleoanthropologist Richard Klein of Stanford University in Palo Alto, California. By themselves, the Pinnacle Point artifacts would not confer such a significant reproductive advantage, says Klein.

    Marean, however, thinks the behavioral changes were so important that they might have been one of the catalysts for the birth of our species. He is searching even older sediments to pinpoint when these behaviors emerged.


    Space Sighting Suggests Stardust Doesn't Have to Come From Stars

    Govert Schilling*
    *Govert Schilling is an astronomy writer in Amersfoort, the Netherlands.

    Microscopic rubies and sapphires arise in black hole winds. Using NASA's Spitzer Space Telescope, astronomers spotted the telltale spectroscopic fingerprints of these unpolished microgems in space near a supermassive black hole. Many other dust species also showed up, including crystalline minerals that make up sand, glass, and marble. Team leader Ciska Markwick-Kemper of the University of Manchester, U.K., says the find may help explain the abundance of dust particles in the very early universe.

    “It's a spectacular find,” says astrochemist Rens Waters of the University of Amsterdam in the Netherlands. “If pressures and temperatures in supermassive black hole winds are favorable for dust production, huge quantities of dust could be produced in this way.”

    The universe started out with a mixture of hydrogen and helium, the two lightest elements. Heavier elements such as carbon, oxygen, silicon, and magnesium formed by nuclear fusion in the first generation of extremely massive stars. Supernova explosions then dispersed these heavy elements through space, where some of them condensed into dust particles—the building blocks of planets such as Earth. However, many components of dust form only in the calm outflows of dying sunlike stars. So astronomers have been baffled to observe healthy amounts of dust at a time in the universe's history when sunlike stars were still in their infancy.

    In 2002, astrophysicist Martin Elvis of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, suggested that dust could form in the winds of supermassive black holes that sit in the cores of young galaxies, sucking in matter with their enormous gravity. These gluttonous monsters are “messy eaters,” says Sarah Gallagher of the University of California, Los Angeles, spilling and blowing much of their food into space, including heavy elements from supernovae. The balmy temperatures and high densities in these winds could forge dust particles, including crystalline silicates and tiny rubies, from these elements, Elvis theorized.

    Matter maker?

    Perishable compounds near the heart of a galaxy hint that the universe has more than one way of cooking up cosmic dust.


    Now, analysis of light from a supermassive black hole in a galaxy some 8 billion light-years away supports Elvis's idea. In the Spitzer observations, Markwick-Kemper, Gallagher, and their colleagues detected many mineral species previously seen only in the outflows of dying sunlike stars, such as forsterite (Mg2SiO4), periclase (MgO), and corundum (Al2O3), the mineral that constitutes ruby and sapphire. Because many of those minerals are easily destroyed by energetic radiation from stars or by interstellar shock waves, the observations suggest that the dust has been freshly formed in the black hole winds.

    The case isn't closed. In their paper in the 20 October issue of The Astrophysical Journal Letters, Markwick-Kemper and her colleagues note that part of the early universe's dust could still have come from supernova ejecta. Says Waters: “The origin of dust is still shrouded in lots of mysteries.”


    U.K. Spells Out Boost in Medical Research

    Daniel Clery

    In 10 years as the U.K. government's finance chief, Gordon Brown engineered substantial and steady growth in research funding. Now, as prime minister, Brown is continuing that trend. Last week, the government's Comprehensive Spending Review (CSR)—a statement of spending plans issued every 2 or 3 years—signaled a boost of £300 million (about $600 million), to £1.7 billion, in medical and health research over the next 3 years. “This is nothing less than good news,” says Hilary Leevers, acting head of the Campaign for Science and Engineering in the U.K.

    The government had previously announced that it intended to boost the overall level of funding for science and university research from £5.4 billion to £6.3 billion over the same 2008–11 period. CSR reveals how that increase will be divvied up. Around half goes to the U.K.'s seven research councils, which distribute grants to scientists at universities and national labs. They will see their £2.8 billion annual funding boosted on average by 5.4%.

    The emphasis on medical and health research continues a process begun earlier. In 2006, Brown appointed David Cooksey, a venture capitalist who has advised the government on medical research, to figure out the best way of combining all the government's medical and health research spending into a single fund. Last December, acting on Cooksey's recommendations, Brown created the Office for Strategic Coordination of Health Research (OSCHR).

    OSCHR oversees the activities of the Medical Research Council (MRC) and the Department of Health's National Institute for Health Research to promote a new emphasis on “translational” research—taking basic science results and turning them into usable drugs or treatments. CSR—which does not need parliamentary approval—boosts the combined budgets of these two bodies by £300 million. “There's been a need for an increase for some time, and a need for a better connection between the MRC and the Department of Health,” says Michael Rutter, clinical vice president of the Academy of Medical Sciences, although he cautioned that the emphasis on translational research should not “lead to a reduction in funding for basic science.”

    Leevers has similar concerns. The research councils have recently begun requiring information about the economic impact of research on grant applications, a change that some researchers worry would put basic research proposals at a disadvantage. “The government ardently believes in the drive toward innovation,” she says. “But you have to have the bedrock on which to innovate.”


    Location, Location, Location

    Daniel Clery

    When nations vie for massive international scientific facilities, science can take a back seat to politics and even sheer chance. Dealmakers say there's no magic formula for getting things right


    Help in storming hijacked Lufthansa flight 181 got Britain an experimental fusion reactor.


    On the night of 17/18 October 1977, a Lufthansa airliner sat on the tarmac of Mogadishu airport in Somalia and the world held its breath. Four days earlier, terrorists from the Popular Front for the Liberation of Palestine had hijacked the Boeing 737 en route from Majorca to Frankfurt and demanded $15 million and the release of 11 members of an allied terrorist group, the Red Army Faction (RAF), who were in prison in Germany. Over the following days, the plane landed in Rome, Larnaca, Bahrain, Dubai, and Aden before coming to a stop in Mogadishu, where the hijackers dumped the body of the pilot—whom they had shot—out of the plane. They set a deadline that night for their demands to be met.

    At 2 a.m. local time, a team of German special forces, the GSG 9, which had been tailing the plane across the Mediterranean and Middle East, stormed aboard. In the fight that followed, three of the four terrorists were killed and one was captured with bullet wounds. All the passengers were rescued uninjured. Far from the action, the resolution of the hijacking had a surprising side effect: the Joint European Torus (JET), an experimental nuclear fusion reactor being planned by European nations, ended up being built in the United Kingdom rather than in Germany.

    On the day the hijacking ended, British prime minister James Callaghan arrived in Bonn for a summit meeting and was met by German chancellor Helmut Schmidt with the words: “Thank you so much for all you have done.” The reason for his gratitude was that Britain's Special Air Service (SAS), the Army's special forces unit, had advised the GSG 9 and provided them with specially designed stun grenades, which the German commandos used to incapacitate the hijackers during the storming of the plane.

    Golden age.

    CERN's dedication in 1955 made the lab a model for big international projects.


    Because of this help, Schmidt settled an issue that had recently divided the two countries: where to build JET. Most of the nine members of what was then the European Economic Community (EEC) supported Culham near Oxford, but Germany was holding out for Garching, home of its own fusion research lab. At a cabinet meeting the day after meeting Callaghan, Schmidt backed Culham, and on 25 October, the site was approved by EEC research ministers.

    It's not often that acts of terrorism play a part in international research collaborations, but there comes a time in the development of many such projects—usually around the issue of choosing a site—when national pride and cross-border rivalries can take over from technical considerations. In such situations, the scientists who have carefully nurtured a project for years become bit players as international power politics is played out.

    When politicians stumble, the process can become so divisive that it threatens the whole project and international relations as well. Such was the case with ITER, a global fusion research project that is the successor to JET. In late 2003, ITER's site-selection process descended into 18 months of mudslinging and frantic shuttle diplomacy. Although an amicable resolution was finally achieved, there were moments when the project's future looked in doubt, and many consider the episode a low-water mark in international scientific collaboration. “I haven't talked with anyone who was happy about the ITER process, even those who won,” says an international official who asked not to be named.

    So is there a better way to choose the site for an international facility? Those projects currently on the drawing board—including the next multibillion-dollar particle physics machine, the International Linear Collider (ILC)—don't seem to have agreed on the best method, but with the scars of ITER still raw, they are treading very carefully.

    Physicists with a mission

    The model for international collaborations, most agree, is CERN, Europe's particle physics lab. Soon after the Second World War, a group of prominent physicists, including Pierre Auger, Isidor Rabi, Edoardo Amaldi, and Lew Kowarski, bullied, coaxed, and cajoled European governments and the continent's physicists into supporting an international particle physics lab. The aim was both to rebuild European science and to foster international cooperation. In February 1952, 11 nations signed up to the provisional CERN and soon four sites were under consideration: Geneva, Copenhagen, Paris, and Arnhem in the Netherlands.

    Undone deal.

    The European Synchrotron Radiation Facility wound up in Grenoble after last-minute political maneuvering changed the site from Strasbourg.


    A site-selection committee began visiting the sites prior to a meeting of the provisional CERN council in October 1952. By this time, Paris had slipped in the rankings because it was considered too big, too expensive, and plagued by labor strikes. Copenhagen was strongly opposed by the French. Geneva made a strong case as an international city: home of the defunct League of Nations, and with good tax and customs terms. Reportedly, on the day the selection committee visited Arnhem, it was pouring with rain. The panel found a town with only two hotels, no university, no international school, and only a few foreign newspapers at the train station newsstand. At the council meeting in October, the delegations lined up behind Geneva.

    The next hurdle was Swiss public opinion. Eastern bloc countries had declined to join the project, and communist politicians in Switzerland exploited the resulting Western bias. They claimed that the lab would become part of the U.S. atomic system, controlled by bomb manufacturers. A heated debate in the Geneva state council spilled out into fistfights in the corridors. Voters in the Canton of Geneva, fearing the health effects of radiation and a threat to Swiss neutrality, petitioned for a referendum on CERN, to be held on 29 June 1953. In the run-up, physicists made a hectic round of speeches and rallies—the city was abuzz with scientific debates. On the day, only 7332 voted against the lab—fewer than had signed the original petition—and 16,539 voted in favor. On 1 July 1953, the provisional council voted CERN into existence.

    The center soon became a model for other cross-border collaborations: the Institut Laue-Langevin (ILL, a neutron source), the European Molecular Biology Laboratory (EMBL), the European Space Agency (ESA), and the European Southern Observatory (ESO). Relatively few sparks flew in the discussions over siting these organizations. ESA is headquartered in Paris, but has other facilities in all its major funding countries apart from the United Kingdom. ESO has its base in Garching, Germany, but its telescopes are all in Chile. “Everyone is happiest when the [location] issue doesn't come up,” says the international official, such as when the best site is not in one of the funding countries.

    But such harmony grew increasingly difficult to maintain as politicians took a growing interest in scientific facilities for the international prestige they brought and the money they injected into local economies. In the mid-1970s, European researchers identified the need for a large synchrotron radiation source, a provider of intense laserlike x-rays for physicists, materials scientists, and molecular biologists. By the early 1980s, many countries had expressed interest in hosting the machine, but the decision boiled down to horse-trading between the main backers, France and Germany. According to CERN physicist Horst Wenninger, President François Mitterrand and Chancellor Helmut Kohl decided the issue over a breakfast cup of coffee: The site for the European Synchrotron Radiation Facility (ESRF) would be Strasbourg on the French-German border.

    But in 1984, researchers and politicians in the French city of Grenoble began agitating for a rethink. According to current ESRF director Bill Stirling, the then ILL director Brian Fender had suggested a vacant site next door to his facility to build on synergies and common services. Grenoble is also home to a number of French national research centers, and prominent scientists lobbied Mitterrand and other politicians. With elections looming, Mitterrand struck a new deal with the Germans. “They were furious in Strasbourg,” Stirling says. But ESRF's troubles weren't over. The geology of the site was not ideal, and it was surrounded by vibration-causing roads and rivers. After errors in construction, the concrete slabs supporting the beam lines had to be relaid. But after its difficult birth, the world's first third-generation synchrotron was a great success.

    Since ESRF, the movement to build large pan-European labs has faded. These days, it is more common for governments to beef up an existing national lab with new facilities and recruit international partners to help shoulder the burden. Germany is currently starting construction on two such examples: the XFEL x-ray laser at its DESY particle physics lab near Hamburg and the Facility for Antiproton and Ion Research (FAIR) at the GSI heavy ion research lab at Darmstadt.

    The bigger they come …

    In recent years, scientists' ambitions have increasingly taken on a global scale, and as the budgets get bigger, the stakes get higher. The most ambitious project, and the one that really tested the powers of diplomacy, was ITER, an experiment designed to prove fusion is a viable source of power for humankind (Science, 13 October 2006, p. 238).

    The ITER project was started in the mid-1980s. After a global design effort, a redesign, the departure of some members, and the arrival of others, the delegations from six partners—China, the European Union (E.U.), Japan, Russia, South Korea, and the United States—gathered in Washington, D.C., in December 2003 to choose between two candidate sites and sign the agreement that would set the construction ball rolling, at a total cost of some $12 billion. “The higher the stakes, the more difficult the decision is,” says Achilleas Mitsos, the E.U.'s former director general of research.

    The political atmosphere at the Washington meeting could not have been worse. The E.U.'s proposed site was at Cadarache in southern France, and relations between France and the United States were subzero following France's opposition to the Iraq War, which had begun earlier that year. According to Mitsos, who was the E.U.'s chief negotiator, the United States was determined to get a result in Washington and was unambiguously in favor of Japan's proposed site, Rokkasho. “Clearly, the game was not going to be easy,” Mitsos says.

    Despite enormous pressure, the E.U. delegation played the long game and convinced the other partners that further technical studies of the two sites were needed. Those studies still failed to signal a clear winner, although European researchers asserted that Rokkasho's position in northern Japan carried too high a risk of earthquakes, whereas the Japanese charged that Cadarache was too far from the coast and that it would be impossible to move large components that far by road. Japan upped the stakes by offering to pay 50% of the cost rather than the required 40% host contribution. The E.U., after much handwringing, followed suit.

    The E.U. negotiators realized that in order to win they had to come up with a face-saving formula for the loser. The E.U. opened direct discussions with Japan on a set of extra fusion-science facilities to be built in whichever country did not get the main reactor. Negotiations over this “broader approach to fusion” continued in a theoretical fashion through the second half of 2004 and into 2005—Mitsos says he traveled to Tokyo twice a month while other officials shuttled between other capitals. “Russia and China every day became more pro-Cadarache, and the U.S. and Korea every day became less insistent on Rokkasho,” he says. Finally, in June 2005, Japan agreed to back Cadarache. “The broader approach was the deciding factor. It allowed Japan to not come out as the loser,” Mitsos says.

    How will the next megaproject avoid the pitfalls that ITER stumbled on? “We're trying hard not to duplicate ITER,” says Barry Barish, head of the global design effort for the ILC project, but “if there's a process, I don't know what it is.”

    The ILC is the next big machine on particle physicists' shopping list. Researchers around the world are currently working on a detailed design for the machine and they've done some testing of “sample sites” in the United States, Europe, and Japan. “We're very early in the process, but probably our biggest lesson from ITER is to avoid the ‘all or nothing’ situation,” Barish says. Although the machine has to be in one place, its high-tech components will be designed, built, and tested at sites across the globe, and it will be managed and governed as a global facility.

    Drawing another lesson from ITER, ILC's funders have become actively involved in the planning, even at this early stage. Ian Halliday, former head of the U.K.'s Particle Physics and Astronomy Research Council, helped set up Funding Agencies for the Linear Collider (FALC), which, he says, will allow interested parties to “talk about what everyone wants, identify problems early on, and learn how everyone's funding works.” FALC has already acted to smooth out tensions over issues such as whether to use superconducting magnets in the accelerator or conventional technology, and who should lead the design effort. “It's a gradual process. We might end up without a shootout, but it's in the lap of the gods,” he says.

    Experts in such international negotiations dismiss the idea that there is some magic formula for resolving disputes. “There isn't such a thing,” says Stefan Michalowski, executive secretary of the Organisation for Economic Cooperation and Development's Global Science Forum, a talking shop for senior scientists and science administrators. “Don't try to create general principles,” he says, but at a certain stage in a project's planning, “get everyone to agree on the rules.” He cites the case of the International Neuroinformatics Coordinating Facility (INCF), a small collaboration for which he was asked to head the site selection committee. All 15 member countries agreed on the criteria for selection beforehand. His committee worked through the process and made its recommendation. “Not everyone was happy, but no bones were broken and the losers got over it.”

    Although there may not be a magic formula, some sort of oversight authority could play a role. “The only thing that will make a difference is a substantial, European-level central fund for facilities,” says Peter Tindemans, spokesperson for the European Spallation Source, a neutron source that has been on the drawing board for more than a decade and will soon be choosing a site. E.U. officials have been thinking along similar lines. Their proposal for the latest tranche of the multiyear Framework research program contained funds to pay for as much as 20% of the construction cost of pan-European projects. E.U. officials “could participate to provide a package deal, come up with a plan to link projects, and allow everyone to have a stake,” says Mitsos. He adds that they even drew up a table, laying out details of funding and where each future facility would go so that all countries got a fair division of spoils.

    No shootout?

    Negotiations over the International Linear Collider have gone smoothly, so far.


    In the budget negotiations last year for the seventh Framework, the funds for infrastructure were slashed and the program can now only help out with the preparatory stages of projects. But Mitsos believes that, in Europe at least, the E.U. will eventually take on the role of dealmaker and guardian of fairness in international projects. “The possibility to draw such a table exists. I'd be surprised if we didn't try again.” As for global facilities, they'll have to continue to make up the rules as they go along.


    Fresh Evidence Points to an Old Suspect: Calcium

    1. Jean Marx

    Proteins known to contribute to Alzheimer's pathology have been linked to disturbances in calcium ion regulation that could underlie neuronal death in the disease

    Imagine that police discover hundreds of dead bodies over the course of a year and the same suspicious-looking man is standing near each one. A strong circumstantial case for murder, of course. But given that the exact cause of death is uncertain in each case and that no one witnessed the suspect with any obvious weapon, prosecutors would still have a hard time convicting him.

    That's essentially the circumstance facing Alzheimer's disease researchers. For years, they've thought that the protein β-amyloid causes the neurodegeneration underlying the fatal illness, but they remain unsure about how it kills brain cells. Now, the mystery may be beginning to unravel.

    New evidence supports an old, but somewhat neglected, idea: that β-amyloid, perhaps by forming channels in neuronal membranes, slays brain cells by making them unable to regulate their internal concentrations of ions, particularly calcium ions. Such changes can be “ominous,” says Charles Glabe of the University of California, Irvine (UCI). “You just can't go around punching holes in membranes” without endangering the neuron.

    But β-amyloid is only part of the emerging picture. Two additional suspects, known as presenilin 1 and presenilin 2 (PS1 and PS2), have also been linked to Alzheimer's pathology because mutations in their genes can cause the disease. Evidence now indicates that these proteins, too, normally help maintain calcium ion concentrations in neurons and that the disease-causing mutations disrupt this function.

    If so, this would be a new role for the presenilins, which were previously shown to contribute to Alzheimer's pathology by clipping β-amyloid out of a larger precursor protein called APP. But if a calcium imbalance does in fact cause neuron death in the disease, a new therapeutic strategy may be possible. “You might block calcium flux as a way of preventing neurodegeneration,” says Sam Gandy, an Alzheimer's researcher at the Mount Sinai Medical Center in New York City.

    Calcium ion portals.

    Presenilins regulate calcium ion release by the ER into the cytoplasm whereas β-amyloid may form channels that allow the ions in from the cell exterior.


    Calcium overload

    The idea that calcium overload might be the final insult that finishes off brain neurons in Alzheimer's emerged in the mid-1980s, mainly from a hypothesis put forward by Zaven Khachaturian, then director of the Alzheimer's program at the National Institute on Aging (NIA) in Bethesda, Maryland. Khachaturian, who now heads up the Lou Ruvo Brain Institute and Keep Memory Alive in Las Vegas, Nevada, says that he wanted researchers to focus more on finding the underlying mechanisms of neurodegeneration rather than just describing the brain pathology.

    At about the same time, however, much of the Alzheimer's field began concentrating on β-amyloid as the likely nerve cell killer, in part because it's found in the abnormal plaques that stud the brains of Alzheimer's patients. Even more convincing evidence came when researchers found that mutations in APP cause an early onset form of the disease.

    Then in the early 1990s, Nelson Arispe of the Uniformed Services University of the Health Sciences in Bethesda, Maryland, and his colleagues provided a possible link between β-amyloid and the calcium hypothesis. When they exposed artificial membranes designed to resemble the cell membrane to β-amyloid, the protein formed channels in the membrane. “Those channels were very particular,” Arispe says. “They only permitted the flow of cations [positively charged ions],” such as calcium, into the cell. That fits with numerous observations over the years that exposing nerve cells in culture to β-amyloid causes an increase in their internal calcium ion concentrations.

    More recently, Arispe and his Uniformed Services University colleague Olga Simakova provided further support for the idea that calcium disturbances underlie β-amyloid's toxic effects. They found that application of β-amyloid to nerve cells maintained in lab cultures produced an immediate rise in intracellular calcium concentrations followed by the death of the cells. Both effects, they reported in the 9 May 2006 issue of Biochemistry, could be inhibited by a peptide they designed to block β-amyloid calcium channels.

    Arispe isn't alone in reporting that β-amyloid seems to form ion channels. In 2005, Jorge Ghiso of New York University in New York City, Ratnesh Lal of the University of California, Santa Barbara, and their colleagues found that β-amyloid, as well as several other proteins that produce similar deposits in various tissues, form channels in artificial membranes.

    Yet not everyone is persuaded by the channel evidence. Glabe and his colleagues find that β-amyloid increases the permeability of both artificial and normal cell membranes, but this, he says, doesn't seem to depend on the formation of ion channels. In this case, β-amyloid's effects weren't specific; the protein increased the cross-membrane movements of both negatively and positively charged ions.

    Glabe proposes that β-amyloid causes a generalized thinning of neuronal membranes. If that happens, he says, a cell would become leaky and have to work a lot harder to maintain normal internal ion concentrations. This could have a number of harmful effects, including the generation of reactive oxygen species, a normal but nonetheless cell-damaging byproduct of metabolism.

    The discrepancies between the two sets of observations remain unresolved. “I always assume we are both right. We're just not doing the same experiments,” Glabe says. For the time being, other Alzheimer's researchers have taken something of a “wait-and-see” attitude about whether β-amyloid forms membrane channels for calcium ions. “No one has proved it with rigor that would allow it to become dogma, but no one has disproved it, either,” says Gandy.

    But there is another way in which β-amyloid may increase calcium entry into neurons: by altering the activity of the receptors that respond to stimulatory signals. Earlier this year, a team led by William Klein of Northwestern University in Evanston, Illinois, found that β-amyloid increases the calcium influx that occurs when the neurotransmitter glutamate activates the so-called NMDA receptor. Intriguingly, the researchers also found that memantine, a drug designed to inhibit NMDA receptor activity that has been approved for treating Alzheimer's, blocks this action of β-amyloid, an indication that drugs that restore calcium balance in neurons might indeed be therapeutic options for the disease.

    From the inside

    Whereas β-amyloid apparently affects calcium entry through the outer cell membrane, the presenilins exert their effects on an interior membrane. Calcium ions not only enter the cell from outside when a neuron is stimulated, but they are also released into the cytoplasm from internal stores, primarily from a membrane-bound compartment called the endoplasmic reticulum (ER). That's where the presenilins, which are located in the ER membrane, come in. “Presenilin mutations somehow cause a bigger calcium release from the ER when glutamate stimulates a cell,” says Mark Mattson, whose team at the NIA Gerontology Research Center in Baltimore, Maryland, is one of several who made the finding.

    All lit up.

    As indicated by the red color, neurons bearing mutant presenilins (middle and bottom) release much more calcium into the cytoplasm when stimulated than do normal neurons (top).


    This might be because calcium concentrations in the ER are elevated to begin with in cells bearing presenilin mutations. What causes that excessive accumulation has been unclear, but the answer may lie in new work from Ilya Bezprozvanny of the University of Texas Southwestern Medical Center in Dallas, Bart De Strooper of the Flanders Interuniversity Institute for Biotechnology (VIB4) and K. U. Leuven in Leuven, Belgium, and their colleagues.

    In experiments done over the past year or two, both on artificial membranes and on cultured nerve cells, they found that the normal presenilins are membrane channels that allow calcium ions to leak passively from the ER into the cytoplasm. However, presenilins carrying Alzheimer's mutations no longer function as calcium leak channels. Presenilin mutations “overload the ER with calcium, and you get excessive release on [nerve cell] stimulation,” Bezprozvanny proposes. To Mattson, this sounds plausible. These results, he says, “seem to provide a molecular explanation for what we saw.”

    Early proponent.

    Zaven Khachaturian is a long-time advocate of the calcium hypothesis.


    Other researchers, however, contend that presenilin mutations alter calcium handling in a different way. Frank LaFerla and his colleagues at UCI have looked at how presenilin mutations alter calcium release from the ER through two previously identified ion channels, known as the ryanodine and IP3 channels because they are activated by those chemicals. “When you stimulate either of them, you get a lot more calcium release in [PS] mutant cells than in normal cells,” says LaFerla.

    Through studies of mice genetically engineered with PS1 and other genes to develop Alzheimer's-like brain pathology, LaFerla, Grace Stutzmann, then a postdoc in his lab, and their colleagues found that changes in the ER's handling of calcium occurred in neurons even before the animals' brains developed the plaques and tangles characteristic of Alzheimer's. This finding, reported in the 10 May 2006 issue of the Journal of Neuroscience, indicates that the calcium changes might play a primary role in triggering neurodegeneration.

    Some of the increased calcium release from the ER in PS-mutant cells may be due to greater expression of the ryanodine receptor, the LaFerla team has found. In as yet unpublished work, the researchers also observed that the presenilins are needed for the normal operation of the SERCA pumps that move calcium ions back into the ER after a neuron has fired. Not yet known is whether PS mutations affect SERCA pump operation. But if they increase it, the ER could become loaded with excess calcium ions.

    Mutations in the APP and presenilin genes together account for less than 10% of all Alzheimer's cases. The other 90%, mostly of the late-onset variety, fall into the so-called sporadic category, meaning that their causes aren't known. There are, however, indications that changes in calcium handling by neurons could be contributing to Alzheimer's susceptibility as we grow older. Some of this evidence comes from Olivier Thibault, Philip Landfield, and their colleagues at the University of Kentucky College of Medicine in Lexington.

    In work reported early last year in the Journal of Neuroscience, these researchers looked at several indicators of calcium function in neurons obtained from the brains of rats at ages ranging from 4 to 23 months. Beginning at 12 months, which is middle age for rats, the neurons underwent several changes that should make them hyperexcitable, a response similar to that seen in cells with presenilin mutations. Changes such as these “could conceivably set the stage for Alzheimer's by making neurons more vulnerable to further insults,” Landfield says. Those insults could include the increase in β-amyloid deposits that also occurs with age or membrane damage caused by reactive oxygen species.

    Proving that similar calcium changes occur in humans could be difficult, as researchers can't perform the same experiments on human brain neurons that Thibault and Landfield performed on rats. Consequently, the acid test of the calcium hypothesis in Alzheimer's disease will likely await clinical trials of drugs that inhibit calcium movements into the cytoplasm. That's “the only way to test cause and effect in sporadic Alzheimer's,” Bezprozvanny says. Although researchers are beginning to test inhibitors of calcium release on cells in culture and animal models of Alzheimer's, it's still too early to tell whether they will find agents suitable for trials in humans.


    Dirty Science: Soil Forensics Digs Into New Techniques

    1. Krista Zala*
    1. Krista Zala is a freelance writer in Los Angeles, California.

    Geologists, chemists, and other scientists are developing better ways of matching soil samples to help catch and convict criminals

    Case closed.

    Scientists traced soil on this shovel to the burial site of two murder victims.


    A woman and her mother are reported missing from a township east of Adelaide in South Australia. The next day, the woman's car is found 160 kilometers away with a dirty, bloody shovel in the trunk. When her son shows up in a nearby town and tries to get assistance for the broken-down car, police arrest him. But the suspect refuses to talk, and with no bodies to provide evidence or even prove someone is dead, the desperate police seek help.

    They call in a team of forensic soil scientists to analyze the shovel. The minerals, acidity, and moisture level of the soil on the shovel lead the team to suggest that the police search a gravel quarry in the Adelaide Hills, where days later a fox uncovers a body. The next day, the second body is found near the first. The son confesses to killing his mother and grandmother and is sentenced to 18 years in prison.

    Although it could be a television episode of CSI, the case was real—and so were the soil scientists, who now work at the Centre for Australian Forensic Soil Science (CAFSS) in Adelaide, created in 2003 following the team's successful intervention in this 2000 double homicide. CAFSS analyzes soil for investigations from murder to environmental pollution, helps train new forensic scientists, and conducts research on new soil-analysis techniques. It has become well known among Australian detectives. “Ten years ago, police wouldn't have wanted to talk to us,” says Rob Fitzpatrick, the center's director. “Now we can't cope with the number of cases.”

    Soil evidence has been used to link criminals to crime scenes for more than a century. But in Australia and elsewhere, the recent automation of techniques and the ability to get information from smaller samples have made soil forensics an increasingly popular tool in criminal investigations. Scientists are now also exploring new ways of applying microscopy to dirt and of analyzing the plant waxes and microbial DNA within it.

    Traditionally, soil forensics has been vulnerable to legal attack by defense lawyers because expert witnesses can testify only to whether samples are similar, versus the more absolute nature of a DNA or fingerprint match. Although some protocols are well-established—a soil sample is always sealed and locked, for example, and at least two people must be present while it's being analyzed—the field has yet to settle on the best means to analyze each soil type, explains Lorna Dawson of the Macaulay Institute in Aberdeen, U.K. One project aimed at standardizing old methods and validating new ones is the SoilFit project, led by Dawson and her colleagues. The effort also aims to provide a systematic database of soil fingerprints across the United Kingdom.

    Reflecting the growing interest in applying new scientific techniques to soil, forensics researchers in Perth, Australia, last year hosted the first international conference on the topic, drawing several dozen attendees. This month, a second meeting in Edinburgh, U.K., is expected to bring together between 100 and 200 researchers, crime investigators, and forensic experts. “There's a lot of information in soil,” says Dawson.

    Fertile ground

    Grounds for conviction?

    Scanning electron microscopy images of soil found on a suspect (right) and from a control (left) sample reveal differences on the microscale.


    Analyzing soil samples has a distinguished history in literature and real life. Sherlock Holmes deduces Dr. Watson's peregrinations from the dirt on his shoes in the 1890 work The Sign of the Four. A decade later, in the first known instance of soil evidence being used in a criminal investigation, German chemist Georg Popp helped authorities obtain a confession in a murder case near Freiberg, Germany. Popp connected dirt from the trouser cuffs and fingernails of the main suspect to the crime scene.

    Matching soils is no small task. Soil is dynamic and part alive: A teaspoonful holds more than a million organisms, and soil microbes are constantly dying out or exploding in number. Water also leaches away compounds and introduces others as it trickles through. And soil is sensitive. Disturbing dirt—even by scooping a sample—changes it: Drying it alters its chemistry, exposing it to wind rounds out sharp edges on grains, and sealing it, such as in an evidence bag, can prompt a flurry of fungal growth. Such delicacy means that soil can only be pronounced in court as similar to or dissimilar from a possible source. Still, combining a few dirt characteristics can offer a compelling case for, say, linking a sample on a shoe to one in the back garden.

    For the past few decades, soil scientists have used a variety of tools in criminal investigations. Ground-penetrating radar is able to pinpoint burial sites for individual bodies as well as mass graves. X-ray diffraction can uncover the minerals of the soil, infrared spectrometry determines the chemical pedigree, and analysis of diatoms and pollen provides biological clues to dirt's provenance.

    Not all of those techniques can be applied to a given soil source, however. And others often require a greater sample size than the crime scene investigators can produce—hence the push for new, robust ways that require less dirt with which to work. As a visiting research fellow at CAFSS a few years ago, geologist Duncan Pirrie of the University of Exeter, U.K., saw how an automated scanning electron microscope could boost the availability and effectiveness of soil forensics. About 20 minerals occur in most soils, he explains, but what makes each sample identifiably distinct is the relative abundance of each mineral.

    The CAFSS microscope, called QEMSCAN, determines both the mineral composition and the relative abundance of each mineral from just 10 mg of dirt—50 times less than previously required. A similar instrument was originally developed for mining applications by Australian scientists, and the design was then adapted for forensic use. QEMSCAN will analyze in 1 hour what would take a mortal days, and the scope's objective analysis outperforms simple visual inspection of soils by human analysts.

    For a murder case in 2003, Pirrie hauled soil evidence from the United Kingdom to Australia for analysis, then promptly set up a QEMSCAN at his own university. Pirrie, who also conducts research on climate change in Cretaceous Antarctica and on the effects of mining on coastal zones, says his lab is the only one in Europe with such a forensic scope. Today, the lab is called on about once a month to analyze traces of soil for murder and assault cases.

    Tiny clue.

    New methods can link a soil sample to its source using a small fraction of a gram.


    Several new soil-analysis techniques remain a topic of lab research rather than court cases—at least for now. Organic substances among a soil's minerals can also offer an opportunity to match samples. One of Dawson's projects funded under the SoilFit umbrella looks at profiling soils by the mementos plants leave behind. Plants have a waxy covering to keep them waterproof. The mix of organic compounds—alkanes, acids, sterols, and other alcohols—is unique to each species and persists in the soil, sometimes for thousands of years. Dawson and colleagues are now refining a means of extracting the waxes to identify plants.

    Jacqui Horswell, a soil microbiologist at the Institute of Environmental Science and Research in Porirua, New Zealand, is pursuing another means of matching soil samples: DNA. Millions of species of fungi and bacteria form complex communities in dirt, yet most remain unknown to scientists; fewer than 1% of bacterial species can be cultured in the lab, she explains. But by applying a technique that chops DNA at specific target sequences and analyzes the lengths of the resulting fragments, Horswell can profile most of the bacteria in 200 mg of soil. The method doesn't identify individual species. Instead, without the need to culture any microbes, it produces a DNA signature for the organisms within the soil. Horswell and her research team published their first DNA soil profiles in 2001, and they hope that in another 5 years their database of soil DNA signatures will be large enough to be useful in court.

    From science to law

    Indeed, getting a new forensic technique established well enough for courts to recognize it can be a challenge. The SoilFit project, started in 2005 with funding from the U.K.'s Engineering and Physical Sciences Research Council (EPSRC), is one effort to give soil-matching more reliability as evidence. For soil evidence to better withstand legal challenges in court, “we need a comprehensive survey of soil types in the United Kingdom” to substantiate the conclusions of an expert witness, says Derek Auchie, director of undergraduate law programs at Aberdeen Business School. To that end, EPSRC gave Dawson's team a £350,000 grant to analyze all feasible combinations of soil types—such as loams, peat, and alluvial soils—and vegetation such as grassland, heather, and forest. To date, they have tested an array of analysis techniques on all 120 combinations and are now comparing each technique's accuracy to work out which techniques work best for which combinations.

    EPSRC funded SoilFit under its Crime Initiative, which seeks to bridge crime-fighting services and academic research to benefit U.K. citizens. The project is “developing a community of researchers active in [fighting] crime,” says Peter Hedges, head of EPSRC's Economy, Environment and Crime Team. Dawson predicts that the SoilFit database will be ready for detectives and prosecutors in 2008. Sherlock Holmes would be pleased.


    In Search of the World's Most Ancient Mariners

    1. Michael Balter

    Researchers debate the capabilities of the first human voyagers, who traveled the waters of Southeast Asia at least 45,000 years ago

    Maritime feat.

    The first boats may have been made of bamboo.


    CAMBRIDGE, U.K.—We humans are terrestrial animals, yet we spend a lot of time gazing wistfully over bodies of water. We flock to the seashore or the lakeside at the slightest sign of mild weather and celebrate the romance of the sea in art and literature. Early seafaring was central to the spread of civilization, and today thousands of vessels ply the world's oceans, searching for fish and hauling billions of tons of cargo.

    Despite the importance of seafaring to culture, however, archaeologists are not sure how, when, and why humans first ventured into the oceans. The earliest known boats, hollowed-out logs found in the Netherlands and in France, are at most 10,000 years old. And the earliest indirect evidence for sea crossings in Europe—human occupation of Cyprus and the Greek island of Milos—dates to only 12,000 to 13,000 years ago. Yet ancient archaeological sites in present-day Australia, Indonesia, and other Southeast Asian islands suggest sea crossings at least 45,000 years ago, soon after modern humans first left Africa.

    At a meeting here last month,* three dozen archaeologists and maritime historians sifted through the evidence for seafaring through the ages. They debated, sometimes sharply, whether the earliest mariners crossed the sea purposely or by accident. “There is a danger in accepting either of these extreme positions,” says William Keegan, an anthropologist at the Florida Museum of Natural History in Gainesville. “But I have no problem believing that people who were exploiting coastal resources had developed the ability to cross the water gaps in question by 50,000 years ago.”

    The meeting also heard dire warnings that rising sea levels—which are already at least 50 meters higher than when modern humans first took to the oceans—might put evidence crucial to resolving these questions out of reach. “There are drowned terrestrial landscapes that were occupied by our ancestors,” says archaeologist Jon Erlandson of the University of Oregon in Eugene. “But we know almost nothing about them.”


    This log boat from the Netherlands is nearly 10,000 years old.


    Blown about in a bamboo boat?

    Although most archaeologists have assumed that seafaring was invented by cognitively advanced modern humans, one earlier hominid seems to have jumped the gun. In 1998, a team led by archaeologist Michael Morwood of the University of New England in Armidale, Australia, dated stone tools on the Indonesian island of Flores to 800,000 years ago, when Homo erectus was known to inhabit the Southeast Asian mainland. The occupation of Flores almost certainly required a sea crossing, and Morwood suggested at the time that the cognitive abilities of H. erectus might be “due for reappraisal” (Science, 13 March 1998, p. 1635).

    Yet the lack of other evidence anywhere near so early suggests to many researchers that this was a fluke that did not require technology. Perhaps a small band of hominids was blown out to sea on floating vegetation, as occasionally happens to other mammals, which then found island populations. The possibility that H. erectus evolved in isolation on Flores for thousands of years, eventually becoming the tiny H. floresiensis, a.k.a. the Hobbit, underscores how rarely anyone traveled to or from Flores.

    “Flores is the exception that proves the rule in terms of when seafaring really began,” says Atholl Anderson, a prehistorian at the Australian National University (ANU) in Canberra. Erlandson agrees: “Otherwise, H. erectus should have colonized Australia and the surrounding islands.” Yet although the trek to Australia could be accomplished by relatively short hops across a multitude of islands, there is no evidence that H. erectus ever made that journey. Modern humans were the first hominids in Australia, arriving no earlier than 60,000 years ago, and many archaeologists are skeptical of dates earlier than 45,000 years. Even then, it's hard to differentiate true seafaring from a bit of boating gone wrong, says archaeologist Geoff Bailey of the University of York in the U.K. “It remains an open question whether the move into Australia was a purposeful, high-tech exercise in skilled navigation or a low-tech process of almost accidental drift that resulted in the opening up of a maritime universe.”

    Both viewpoints were in evidence at the meeting. In her talk, ANU archaeologist Susan O'Connor argued that modern humans did not necessarily require sophisticated seafaring skills to colonize Australia and nearby islands. She proposed that early humans traveled by simple bamboo rafts—probably already used to explore rivers and estuaries—then drifted out to sea and were blown about by the monsoon.

    And island hopping was easier in the past. About 45,000 years ago, sea levels were roughly 50 meters lower than they are today. As a result, Australia, New Guinea, and Tasmania formed a single continent known as Sahul, whereas Borneo, Java, and the Malay Peninsula were joined together in a continental shelf called Sunda (see map). Although the earliest dates for modern human occupation of Sahul are controversial, excavations on several islands north of Sahul have produced radiocarbon dates of up to 45,000 years ago—including O'Connor's own excavations at Jerimalai Cave on East Timor, which recently clocked in at 42,000 years. If Sahul was colonized as early as 60,000 years ago, O'Connor contended, then humans' fairly leisurely spread supports a more accidental than purposeful journey.

    Changing seascape.

    Lower sea levels exposed more land during glacial periods (shown here at 22,000 years ago) and made ocean crossings easier.


    O'Connor concluded that when the colonizers did venture farther out to sea, traveling 180 kilometers to the island of Buka by 28,000 years ago and 230 kilometers to Manus by 21,000 years ago, their earlier seafaring experience might have “preadapted” them to later innovations in boating technology, including larger vessels made of wood and the use of sails. Nevertheless, O'Connor and others stressed, there is no direct archaeological evidence for the use of sails that early—indeed, none at all before about 7000 years ago in the Near East.

    The short chronology

    O'Connor's scenario, which archaeologists call the “long chronology” for the colonization of island Southeast Asia, was challenged at the meeting by archaeologist James O'Connell of the University of Utah in Salt Lake City. In the last few years, O'Connell, together with archaeologist Jim Allen of La Trobe University in Bundoora, Australia, has argued from a detailed analysis of radiocarbon dates for a “short chronology” that puts the occupation of Sahul no earlier than about 50,000 years ago. He pointed out that by 45,000 years ago modern humans had colonized a number of islands between Sunda and Sahul, called the Wallacean Archipelago, which stretched at least 1000 kilometers even when sea levels were at their lowest. Reaching many of these islands required sea crossings of 30 to 70 kilometers, sometimes against the currents. Most animals from Asia never achieved these crossings, implying that humans must have used technology to do so. That 5000 years of colonization, O'Connell said, represented a relatively short “archaeological instant.” Rather than drifting, O'Connell argued, early seafarers must have had “marine-capable watercraft” and keen navigation skills.

    To bolster his argument, O'Connell pointed out that remains of open-ocean fish, including tuna and sharks, have been found at numerous island sites dating more than 40,000 years ago, an indication that the colonizers already had boats capable of deep-sea fishing.

    O'Connell also cited recent demographic simulations by anthropologist John Moore of the University of Florida in Gainesville and others, suggesting that successful colonizations require a minimum founder group of 5 to 10 women of reproductive age and a similar number of men. “The odds that the members of a small group cast adrift by chance, then tossed up on an isolated shore, could generate a successful population are long indeed,” O'Connell concluded.

    The conflicting talks drew varied reactions. “My tendency would be to side with [O'Connell],” says Keegan. “For me the issue is what was socially possible. Humans live in groups, and successful colonists tend to reproduce those groups. They have a better chance of survival if they can maintain contact with their parent community,” for example, by making return sea voyages back home. But Anderson counters that the relatively mild, tropical conditions around Sahul 45,000 years ago and the abundance of species of giant, wide-diameter bamboo, perfect for making rafts, ensured that accidental voyagers would survive at sea. “If people were habitually using bamboo [rafts] to explore coral reefs and lagoons, and if they did so as family groups, then the chance of an accidental passage was always there.” Moreover, Anderson says, even such simple craft were capable of carrying a “viable colonizing group of 5 to 10 people” and could be blown across the sea “within a few days.”

    Bailey notes that “island Southeast Asia offers all the right conditions for just such a gradual process,” including warm seas and “lots of very productive marine resources like fish, sea mammals, turtles, and shellfish, which would have encouraged exploration of offshore islands.”

    Indeed, Bailey suggests that the special conditions in Southeast Asia might explain why the earliest evidence of seafaring is there rather than in the Mediterranean, where seafaring only shows up about 13,000 years ago—even though modern humans occupied southern Europe beginning at least 40,000 years ago. “The Mediterranean offers a stark contrast,” Bailey says. “When it comes to marine fertility and productivity of offshore resources, it is very nearly at the bottom of the world league, with little tidal movement … and temperature gradients that trap nutrients on the seabed below the zone of photosynthesis.” Erlandson agrees: “One of the take-home messages of the meeting was that the development of seafaring capabilities was not universal, but was contingent on a variety of ecological and cultural conditions.”

    The other take-home message, Erlandson says, is that the current rise in sea levels caused by global warming, and the accelerated erosion of coastlines, “is threatening our best source of information about such conditions.” Because ancient boats would have been launched from shores now underwater, the best chance of finding evidence for them lies in exploring coastal sites where the ancient shoreline is near the present one, for example, where the land falls off steeply into the sea. Yet most of these sites, Erlandson says, “are actively eroding and countless others have already been destroyed. Enormous amounts of information will be lost in coming decades unless we find, date, and excavate them.”

    • Global Origins and Development of Seafaring, Cambridge, U.K., 9–12 September 2007.