News this Week

Science  05 Jul 2002:
Vol. 297, Issue 5578, pp. 26



    Were 'Little People' the First to Venture Out of Africa?

    By Michael Balter and Ann Gibbons

    Not long ago, most paleoanthropologists thought that intercontinental travel was reserved for hominids who were big of brain and long of limb. Until very recently, the fossil evidence suggested that early humans did not journey out of Africa until they could walk long distances and were smart enough to invent sophisticated tools. Then, 2 years ago, a team working at Dmanisi, Georgia, shook up those ideas. It reported finding two small skulls dated to a surprisingly ancient 1.75 million years ago and associated with only primitive stone tools (Science, 12 May 2000, p. 948).

    Now, on page 85 of this issue, the Dmanisi team reports another major discovery: an equally ancient skull that is the smallest and most primitive ever found outside Africa. The fossils, researchers say, appear to bury the notion that big brains spurred our first exodus from Africa, and they raise questions about the identity of the first long-distance traveler. “This is a really cool find,” says Ian Tattersall of the American Museum of Natural History in New York City. And Daniel Lieberman of Harvard University predicts that the skull will throw “a monkey wrench into many people's ideas about early Homo migration out of Africa.”

    Hominid haven.

    Lordkipanidze (top center) helps excavate a third skull (shown here on left), the smallest and most primitive yet, from Dmanisi.


    Taken together, the three Dmanisi skulls suggest that our ancestors left Africa earlier, and at an earlier stage of evolution, than had long been assumed. But where exactly do the Dmanisi remains fit on the hominid family tree—and do they represent one or more species? Those questions are sparking much debate, and they might have important implications for our understanding of how humans expanded out of Africa in the first place.

    The team—composed of David Lordkipanidze at the Georgian State Museum in Tbilisi, Georgia, Philip Rightmire of the State University of New York, Binghamton, and other co-workers from Europe and the United States—unearthed a beautifully preserved cranium last August about 20 meters from the site of the original discovery. The cranium, far more complete than the previous two and found with an associated lower jaw, lay in the same stratigraphic level and so is about the same age as the previous finds, says Lordkipanidze.

    Overall, the new skull roughly resembles the other two, says Rightmire. “If you line them up together on a table, it would be hard to put them into two groups,” he says. Thus, the team classifies the new skull, like the other two, as Homo erectus, a long-legged, big-brained species considered to be the first to leave Africa; the African members of this species (sometimes called H. ergaster) are thought to be ancestral to our own lineage. If the new fossils do represent H. erectus, they are the smallest, most primitive specimens seen yet, and they provide some badly needed clues to the evolution of that species.

    In fact, some features of the diminutive new skull also resemble H. habilis, an African hominid that some believe was ancestral to H. erectus. Indeed, says Rightmire, if the researchers had found these bones first, they might have called the fossils H. habilis.

    For example, the skull has thin but well-developed brow ridges that slope gently upward above the eye sockets. Moreover, with a cranial capacity of about 600 cubic centimeters (cc)—compared to about 780 cc and 650 cc for the other two Dmanisi skulls—the new skull is “near the mean” for H. habilis and “substantially smaller” than expected for H. erectus, the authors report. (Modern human braincases are about 1400 cc.) “The Dmanisi fossils are really the first group that is intermediary between H. habilis and H. erectus,” says Lordkipanidze.

    Thus, the team suggests that the Dmanisi hominids might be descended from H. habilis-like ancestors that had already left Africa. If so, then soon after our genus arose from presumed australopithecine ancestors—before its members had reached the brain size and stature of H. erectus—it was already pushing into new continents. Such a primitive traveler also raises the heretical possibility that H. erectus itself evolved outside Africa, long considered the cradle of human evolution, notes Tim White of the University of California, Berkeley.

    Another possibility raised by some researchers is that more than one hominid species was on the move. Thus, Jeffrey Schwartz of the University of Pittsburgh contends that the three Dmanisi skulls might actually represent two or even three different species. And Tattersall won't classify the new finds as either H. erectus or H. habilis. “This specimen underlines the need for a thoroughgoing reappraisal of the diversity of early … Homo,” he says.

    Yet, other scientists think that the differences among the skulls simply show how much intraspecies variation there was at this stage of human evolution. “The authors wisely refrain from identifying multiple taxa,” says Eric Delson of the City University of New York. Alan Walker of Pennsylvania State University, University Park, notes that brain size varies within living humans by about 15%, so variable brains are also to be expected in H. erectus. And Milford Wolpoff of the University of Michigan, Ann Arbor, declares there is “not a chance” that more than one species is represented at Dmanisi. Instead, he suggests that the new skull, which appears to be that of an adolescent, might resemble H. habilis simply because it was still growing.

    Understanding which species were present at Dmanisi—and their biology—might eventually be key to figuring out how they left Africa in the first place, researchers say. For example, although skeletal remains from African H. erectus indicate that this long-legged species was a sturdy walker, the so-far-scanty evidence from H. habilis specimens in Africa suggests that it was relatively short legged.

    “If the new skull is associated with bones indicating H. habilis-like body proportions, we're really going to have to rethink ideas about how the oldest humans spread beyond Africa,” says Richard Potts of the Smithsonian Institution in Washington, D.C. “Either this new thing was simply spreading as African habitats spread north into the Caucasus, or an H. habilis-like species was present in Eurasia early.”

    Some of the questions swirling around the new skull might be answered if the site yields leg bones or other skeletal parts. Word is already leaking out from Dmanisi that a few such fossils were found last year. One thing is already clear, says Rightmire: “It wasn't a full-blown Homo erectus and a big brain … that enabled people to push out of Africa. The first pushing was done by little people.”


    Acrylamide in Food: Uncharted Territory

    By Giselle Weiss*
    *Giselle Weiss is a writer in Allschwil, Switzerland.

    More questions than answers emerged from a high-profile group of food experts who met in Geneva last week to consider what should be done about acrylamide. This compound, identified long ago as a potential industrial hazard, has now been found in many cooked foods. The World Health Organization (WHO) responded by sponsoring a safety data review. At the end of a 3-day closed meeting, the WHO experts issued an urgent call for more research, but the most striking aspect of their 12-page summary report, released 28 June, might be how little new information it gives on health risks.

    Acrylamide has been used since the 1950s to make paper and dyes and to purify drinking water. Its only known adverse effect in humans—most of whom were exposed in the workplace—is neurological damage. But because it can induce cancer and heritable mutations in lab animals, acrylamide is classed as “probably carcinogenic to humans” by the International Agency for Research on Cancer (IARC) in Lyon, France. Considering the available evidence, “we have to think about the possibility that this could be a human carcinogen,” said Swiss health official Dieter Arnold, who chaired the meeting, at a press conference in Geneva.

    The news that acrylamide is pervasive came as a shock. In April, Swedish researchers announced that the compound was present at high levels in starch-based foods cooked at high temperatures, such as potato chips and certain breads. Initially dismissed as a food scare, the issue took on new urgency when British, Norwegian, and Swiss scientists obtained similar findings in cereals, French fries, and cookies. To find carcinogens in food is not new, says Arnold. But it is new to find such high levels of a cancer-causing substance—and in staples. The expert group also considered risks for children, who may “take in more acrylamide per kilogram of body weight,” says Peter Farmer, a toxicologist at the University of Leicester, U.K.

    Risk unknown.

    High-temperature cooking can produce acrylamide in starchy foods.


    Acrylamide binds to nerve cell proteins, interfering with transport of essential materials, says Peter Spencer, a neurotoxicologist at the Oregon Health and Science University in Portland. In rats, he says, protein binding might also be responsible for injury to testes, whereas heritable mutations and tumors are likely related to DNA damage through another mechanism. But extrapolating from animal studies is difficult, and there is no solid evidence of acrylamide-related cancer in humans. The question is whether the levels of acrylamide found in foods pose a serious risk over time.

    Although scientists “understand how to measure acrylamide in foodstuffs now,” says Laurence Castle of the Central Science Laboratory in York, U.K., they have yet to establish rigorous, standardized analytical methods. And almost nothing is known about how acrylamide forms during cooking, except that it develops at temperatures above 120°C and that amounts increase with cooking time.

    Nor is food the only source: Tobacco smoke and environmental exposure are two others. Acrylamide could even be generated naturally in the body. Although minute amounts of acrylamide are present in drinking water, “it is most unlikely that anyone would consume dangerous amounts of acrylamide by drinking tap water,” according to Jerry Rice of IARC. Estimating exposure is also tricky, because diets vary among people and across cultures.

    For now, no one is calling for a change in dietary habits. The WHO expert committee has recommended cooking food thoroughly but not excessively, eating a balanced and varied diet, investigating ways of reducing acrylamide levels, and setting up an international network to share information. The U.S. Food and Drug Administration (FDA) has developed a methodology and begun testing a limited set of foods, according to an FDA official. McDonald's Corp., meanwhile, has issued a statement claiming that its French fries have been unfairly targeted.

    Acrylamide in food has probably been around since fire, says Farmer: “I think it's an achievement of toxicological science to have discovered it now.” But the precautionary principle dictates that once you have established the presence of a known carcinogen, you are bound to investigate it. “What we know today may change tomorrow,” says Rice.


    Scientists Pan Plans for New U.S. Agency

    By David Malakoff

    The U.S. science community has begun putting proposals to create the new Department of Homeland Security (DHS) under the microscope. In a string of hearings last week, research leaders told Congress there were serious flaws with the plans for the department's science and technology programs.

    On 6 June, President George W. Bush unveiled a hastily written outline for the new $37 billion antiterrorism agency that made vague references to various government research and development (R&D) programs (Science, 14 June, p. 1944). Two weeks later, when White House officials delivered a more detailed legislative proposal to Congress, they had dropped controversial ideas such as stuffing the Department of Energy's Lawrence Livermore National Laboratory in California into the proposed department. And more changes are likely. “This is very much a work in progress,” acknowledges White House science adviser John Marburger.

    Both the White House plan and an alternate blueprint put forward by Senator Joseph Lieberman (D-CT) include plenty of provisions that make researchers nervous. Many biomedical scientists, for instance, oppose giving an agency with a strong focus on border security control over bioterror research, response, and regulatory programs that are now at the National Institutes of Health (NIH) and the Centers for Disease Control and Prevention (CDC). “I'm skeptical that such an odd coupling will work,” Tara O'Toole, head of the Johns Hopkins Center for Civilian Biodefense Strategies in Baltimore, Maryland, told the House Energy and Commerce Committee. “It is a very tall order to ask a single agency to develop national security strategy and … create a sophisticated R&D capability.”

    Others questioned how the new agency would manage research. Both Lieberman and the White House have presented plans that are “unworkable,” science policy guru Lewis Branscomb of Harvard University told the Senate Government Affairs Committee. He was particularly skeptical of Lieberman's idea for a multiagency committee to dole out DHS science funding. “I have never seen an interagency committee in the federal government capable of administering anything,” said the one-time head of the National Bureau of Standards.

    Legislators seemed to relish such blunt talk. Lieberman said he was already thinking about reworking his bill's R&D provisions to accommodate SARPA—a Security Advanced Research Projects Agency modeled after the Pentagon's agile Defense Advanced Research Projects Agency. And Representative Sherwood Boehlert (R-NY), chair of the House Science Committee, said that critics have convinced him that the White House proposal “simply does not give R&D a high enough profile.” Boehlert is especially keen for the agency's research portfolio to be directed by a single manager, an idea backed by a new report from a panel that Branscomb co-chaired (Science, 28 June, p. 2311).

    All these ideas will go into the congressional blender, which is expected to spit out a final plan before the end of the year.


    Disease Gene Research Heats Up in the Desert

    By Mari N. Jensen*
    *Mari N. Jensen is a science writer in Tucson, Arizona.

    A new genomics complex with big ambitions got a boost on 26 June when Arizona lured geneticist Jeffrey Trent, scientific director of the National Human Genome Research Institute (NHGRI), back home. Earlier this month, the Phoenix area landed a genomics center that will identify genes active in cancerous tissue. Now, Trent has announced that he will leave NHGRI to head a complementary, newly formed research institute aimed at turning such data into treatments.

    Sunny future.

    Geneticist Jeffrey Trent and two research institutes are setting up shop in Phoenix.


    The Translational Genomics Research Institute (TGRI) was formed to provide the research base needed to convince Trent, a senior science adviser of the nonprofit International Genomics Consortium (IGC), to locate the consortium in the Phoenix area. IGC's goal is to determine patterns of gene expression in cancer tissue and put that information in the public domain. Biomedical researchers could then use the information to identify specific cancer-causing genes and ultimately develop drug therapies targeting those genes.

    IGC, now located in Scottsdale, Arizona, had been courted by cities with strong biomedical research institutions, including Atlanta and Houston. To get IGC to Arizona, the governor, the city of Phoenix, and private donors put together a start-up package of $92 million for TGRI and persuaded Trent to head it. Arizona had an advantage: Trent grew up in Phoenix, got his Ph.D. at the University of Arizona in Tucson, and once worked at UA's Arizona Cancer Center.

    Trent says the new institute will be freestanding but have ties to the state's universities, much like the Fred Hutchinson Cancer Research Center in Seattle or the Salk Institute for Biological Studies in La Jolla, California. Although Trent is the only scientist on board now, the ambitious timetable laid out for the new institute calls for hiring 15 to 25 research staff in the first 100 days and 100 researchers by the end of the first year. “The goal is to put together a collection of individual scientists who have an interest in as quickly as possible moving from targets to treatments,” he says. The institute will initially focus on melanoma and pancreatic cancer and then expand to other diseases, including diabetes.

    Francis Collins, director of NHGRI, says, “The opportunities in translational research are incredibly broad right now, and entities such as TGRI will play a critical role in that future.”


    Spintronics Innovation Bids to Bolster Bits

    By Robert F. Service

    By just about any measure, technologists pushing to cram more data onto computers' magnetic hard disks have been on a roll. Over the past 4 decades, companies have gone from storing a few thousand bits of data per square inch of disk space (the standard industry measure) to tens of billions of bits in the same space today. That's driven the cost of storing each bit down by orders of magnitude, savings that have fueled the explosive growth of the Web, among other things. Now, a team of researchers at the State University of New York, Buffalo, reports an innovation that could keep the data-density gains rolling in for years to come.

    In the 1 July issue of Physical Review B, materials scientists Harsh Deep Chopra and Susan Hua report passing electrons through a cluster of magnetic atoms that bridge two magnetic wires. When the magnetic orientation of those electrons, also known as their spin, is the same as the magnetic orientation of the two wires, the electrons travel effortlessly through the cluster, a phenomenon known as ballistic magnetoresistance (BMR). But when the magnetic orientations of the wires point in opposite directions, electrons moving through the cluster from one wire to the other must quickly flip their spin. Because that's hard to do in the nanosized clusters, Chopra and Hua found that the measured electrical resistance jumped over 3000%, the largest such effect ever seen (see figure).

    A related effect, known as giant magnetoresistance, forms the basis for the magnetic read heads found in nearly all computer hard-disk drives. As a read head moves above bits of magnetic data, changes in the magnetic orientation of those bits alter the electrical resistance of electrons flowing through the sensor, translating the magnetic data into a stream of electrical pulses.

    Those changes in magnetic orientation produce only about a 100% change in resistance in the read head. The larger BMR effect could lead to smaller and more sensitive read heads capable of reading smaller magnetic bits. And that, in turn, could allow diskmakers to boost the storage density of disk drives to a staggering 1 trillion bits per square inch.

    Tough going.

    Electrons breeze between two wires with the same magnetic orientation (top) but face resistance when the orientation of one is reversed (bottom).


    “This is a great discovery,” says William Egelhoff Jr., a physical chemist at the National Institute of Standards and Technology in Gaithersburg, Maryland. “It's exactly what the disk-drive industry needs if it wants to maintain the growth rates in data-storage density.”

    Chopra and Hua weren't the first to spot BMR. Nicolás García and colleagues at the Consejo Superior de Investigaciones Científicas in Madrid, Spain, first described the effect in 1999. At the time, they saw only a 200% change in resistance, a number they have subsequently raised to 700%. García's team produced the effect by positioning two magnetic wires close to each other in the shape of a “T.” They then used standard techniques to deposit magnetic atoms from a solution, forming a nanobridge between the two wires. Egelhoff says that García's team has done beautiful work in demonstrating the effect, but he says that their technique for making the bridges is “somewhat crude.”

    That's where Chopra and Hua come in. Before depositing the metal bridge, they sharpened the tip of the wire, bisecting the top portion of the “T” to form an ultrafine point just 40 nanometers across. That allowed the bridge to meet the wire at a single, well-formed contact. Just why that should produce a higher magnetoresistance effect remains unclear, however.

    Whatever the mechanism, Egelhoff notes that there is still a long way to go before the effect has a shot at revolutionizing data storage. Most important, he says, researchers must still learn to harness BMR to create magnetic read-head sensors. He is collaborating with García's team on the initial steps needed to do just that, a goal that Chopra's team is pursuing as well. If successful, the technique could extend conventional disk-drive technologies to storage densities that some labs are pursuing by much riskier, more exotic approaches.


    Ecologists See Flaws in Transgenic Mosquito

    By Martin Enserink

    WAGENINGEN, THE NETHERLANDS—If a small band of molecular biologists has its way, the next few years might bring field tests of “designer mosquitoes,” genetically modified so that they are unable to transmit diseases such as malaria. The goal would be to replace the natural mosquito populations ravaging developing countries. But at a workshop here last week,* 20 of the world's leading mosquito ecologists said, “Not so fast.” Although lab science might be thriving, they said, huge ecological questions remain—and it's time funding agencies, which have enthusiastically endorsed the transgenic mosquito plan, started devoting attention and money to answering them.

    Gathering in this Dutch university town, the group outlined a sweeping ecological research agenda, ranging from baseline population genetics to an emergency plan in case the transgenic critters run amok. Many of these issues have been deferred or overlooked by the molecular biologists developing the disease-fighting mosquitoes, said meeting organizer Thomas Scott of the University of California, Davis.

    Still a stretch.

    Making transgenic mosquitoes has become relatively easy—this larva carries the green fluorescent protein gene—but ecologists say this strategy is a long way from driving down malaria.


    At least five U.S. and three European research groups are working on transgenic mosquitoes, with support from the U.S. National Institute of Allergy and Infectious Diseases (NIAID), the World Health Organization (WHO), and the MacArthur Foundation. After a slow start, the field took off in 1998, boosted by new genetic engineering techniques (Science, 20 October 2000, p. 440). As they reported in the 23 May issue of Nature, Marcelo Jacobs-Lorena of Case Western Reserve University in Cleveland and his colleagues recently inserted into Anopheles stephensi, a mosquito that transmits malaria in India, an extra gene that made the insects resistant to mouse malaria. Others are tweaking the genes of Aedes aegypti, the mosquito that transmits dengue.

    But the ultimate target is Anopheles gambiae, the main vector of the deadliest malaria parasite, Plasmodium falciparum, in Africa. Researchers hope to make resistance genes spread through natural mosquito populations by hitching them to a selfish piece of DNA called a transposon or to a strange bacterium called Wolbachia that sweeps through insect populations by manipulating its host's sex life. If this works, they will have created golden bugs that could save millions of lives—at least in theory.

    At the meeting, ecologists came up with a discouraging list of hurdles that could easily sink the plan. For example, will the new mosquitoes be able to compete for partners with their natural counterparts? (Past studies have shown that spending a few generations in the lab diminishes their sexual attractiveness.) How long would it take for a new resistance gene to penetrate the population, and would it be 100% effective in mosquitoes that carry it? (If not, the transgenic bug would barely make a dent in malaria incidence, suggests a model by Christophe Boëte and Jacob Koella of the Pierre and Marie Curie University in Paris.) In areas with multiple malaria vectors, would all the species need to be “treated”? And would P. falciparum develop resistance to the new genes, as it has to many drugs? Or could this be prevented if multiple antiparasite genes were used?

    Studying many of these issues is problematic. Most researchers agreed that after cage experiments, some sort of pilot trial would be needed. But where? It must be a place from which mosquitoes can't escape. São Tomé, one of a handful of islands that form a republic off the west coast of Africa, has been suggested, and one meeting participant floated the idea of creating artificial “oases” in the Sahara desert. Even more vexing are some of the ethical and regulatory issues. Although it's unclear who would set the rules, a field test would have to meet safety standards as strict as those for vaccine trials, said entomologist Yeya Touré, malaria coordinator at WHO's Special Programme for Research and Training in Tropical Diseases—or perhaps even stricter, as it would expose people who had not agreed to participate.

    Feeling “a bit like a ham sandwich on Passover,” the only molecular biologist at the workshop, David O'Brochta of the University of Maryland, College Park, admitted that he and his colleagues have given little thought to these issues. But that reflects a lack of expertise rather than a lack of concern, he said, urging ecologists to join the work.

    In the past, said meeting host Willem Takken of Wageningen University, granting agencies have not been impressed by old-style fieldwork such as counting mosquitoes or studying their feeding behavior. But at least NIAID is now convinced that the ecologists' input is urgently needed, says Kate Aultman, program manager for vector biology. Some at the meeting said that they were uncomfortable allying themselves too closely with a research program that faces such major problems. But most still preferred joining it to trying to beat it—if only because the research might be valuable regardless of whether transgenic mosquitoes ever take wing.

    • *“The Ecology of Transgenic Mosquitoes,” Wageningen University and Research Centre, 26–29 June.


    Mixed Schools a Must for Fish?

    By David Malakoff

    Fish markets teem with neatly iced schools of similarly sized fish. The marked uniformity is often the result of two forces: customer demand for pan-sized portions and fishing regulations that limit harvests to older fish to preserve populations. But some biologists fear such selective culls could permanently alter the genetic makeup of wild fish stocks.

    A matter of scale.

    Selective catches of Atlantic silversides (above) left genetic legacies.


    Now, two scientists have gone fishing in their laboratory to test that idea. And on page 94, they say they've netted data suggesting that fisheries managers should rethink their rules if they want to prevent some stocks from swimming down dangerous evolutionary paths. “Managers haven't focused enough on the long-term, Darwinian consequences of selective harvest,” says one author, ecologist David Conover of the State University of New York, Stony Brook. Some biologists, however, say the lab-based results lend little to the current debate over how best to protect teetering populations.

    Scientists have already suggested that some fish populations are evolving rapidly in response to heavy fishing. Several cod and salmon stocks, for instance, appear to have shifted to smaller, earlier maturing fish as fishers systematically removed larger and older specimens. But wild populations can be difficult to study, so fishing's genetic impacts have remained in dispute.

    To get a clearer view, Conover and graduate student Stephan Munch moved to the more manageable confines of the lab. Four years ago, using eggs collected from wild stocks, they hatched six captive populations of Atlantic silversides. Then they went fishing. From each school of 1000, they removed 90%. In two of the tanks, they took the largest fish; in two others, the smallest; and in the remaining control tanks, the harvest was random. After allowing each school to rebound to its original size, they repeated the process for four generations, charting how the size, weight, and growth rates of the populations changed over time.

    The results were dramatic. Population characteristics in the random-catch tanks, as expected, stayed relatively even. But the balance disappeared with other methods. Taking the bigger fish produced a catch that was initially heavier than in controls. The average weight of individual fish soon shrank, however, and by the fourth generation, the catch was substantially smaller than in controls. In contrast, taking the smaller fish produced a catch that was lighter at first, but the hauls and the individuals grew heavier over time.

    The rapid shifts in the selectively culled populations were due to inherited genetic changes, the authors say. The same thing is happening in the wild, they speculate, although on a much slower timetable, due to the greater size and age diversity of natural stocks. As a result, “management practices meant to maintain [robust catches] may be having the opposite effect over the long run,” says Conover. The pair recommend establishing more reserves that are off-limits to anglers and regulations that protect larger fish as well as smaller ones.

    Several fisheries scientists, including Felicia Coleman of Florida State University in Tallahassee, say the findings suggest that those ideas are on target. But others, including Carl Walters of the University of British Columbia in Vancouver and Ray Hilborn of the University of Washington, Seattle, say the experiment is far too limited to support major management changes. “All they have done is show that growth rates are heritable; what they haven't done is see what the impact of this would be on a realistic fishery,” says Hilborn.


    Collective Effort Makes the Good Times Roll

    By Adrian Cho*
    *Adrian Cho is a freelance writer in Boone, North Carolina.

    Two wrongs don't make a right, but two dozen of them might. A pair of physicists has found that groups of imprecise clocks can collaborate to tell time with remarkable accuracy. Their findings might one day help computers tackle tough problems as a team.

    Scientists and engineers know the difficulty of extracting accurate information from a collection of imperfect devices, such as a clutch of clocks. Every clock gives a slightly different reading, and the problem is how to combine those readings to get the best estimate of the time. The most obvious solution is to average readings from all the clocks—a strategy once employed by sailors at sea—but the inaccuracy shrinks only as the square root of the number of clocks. For example, to get an estimate 10 times more accurate than that of a single clock, a timekeeper would need about 100 clocks.
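    The square-root scaling is easy to check with a quick simulation (a sketch for illustration, not the researchers' own code; clock errors are modeled here as Gaussian noise around the true time):

```python
import random
import statistics

def avg_error(n_clocks, trials=20000, sigma=1.0, seed=1):
    """Spread (standard deviation) of the mean of n_clocks noisy readings,
    with the true time taken to be 0 and per-clock error sigma."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.gauss(0.0, sigma) for _ in range(n_clocks))
        for _ in range(trials)
    ]
    return statistics.pstdev(means)

print(f"1 clock:    {avg_error(1):.3f}")    # about 1.0
print(f"100 clocks: {avg_error(100):.3f}")  # about 0.1, i.e., only 10x more accurate
```

    Averaging 100 clocks buys just one extra decimal digit of accuracy, which is what makes the subset trick described next so striking.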

    On time.

    Working together, imprecise clocks can keep good time.


    A far better way is to read only some of the clocks, report physicists Damien Challet and Neil Johnson of Oxford University. In a computer study, the researchers simulated collections of clocks with readings distributed around the correct value according to a bell curve. They then took the reading of each clock individually, the average reading for each possible pair of clocks, the average reading for each group of three, and so on. By trying every subset, Challet and Johnson found that they could usually identify a combination containing roughly half the clocks whose average reading was far closer to the correct time than the simple average of all the clocks, as they report in the 8 July issue of Physical Review Letters. For example, starting with 20 clocks, they typically found a subset of about 10 whose inaccuracies compensated for one another almost perfectly, so that their average was 100,000 times more accurate than was the average of all the clocks.
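    A brute-force version of that subset search can be sketched as follows (illustrative only; the clock model, seed, and sizes are hypothetical, not the published simulation):

```python
import random
from itertools import combinations
from statistics import fmean

def best_subset(readings, true_time=0.0):
    """Try every non-empty subset of clock readings and return the one
    whose average is closest to the true time, plus its error."""
    best, best_err = None, float("inf")
    for k in range(1, len(readings) + 1):
        for subset in combinations(readings, k):
            err = abs(fmean(subset) - true_time)
            if err < best_err:
                best, best_err = subset, err
    return best, best_err

rng = random.Random(7)
clocks = [rng.gauss(0.0, 1.0) for _ in range(12)]  # 12 imperfect clocks, true time = 0

subset, err = best_subset(clocks)
print(f"all 12 clocks averaged: off by {abs(fmean(clocks)):.5f}")
print(f"best subset ({len(subset)} clocks): off by {err:.5f}")
```

    The catch is that the number of subsets doubles with every added clock, so exhaustive search is feasible only for small collections.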

    Moreover, Challet and Johnson proved mathematically that in certain cases, it is relatively easy to compute the best combination, given the amount by which each clock is fast or slow. That implies that a technologist should be able to figure out how to cobble together a nearly perfect machine from a basket of faulty parts after simply checking the inaccuracy of each part.

    The work is an important step in the study of “collectives,” groups of autonomous agents that conspire to achieve a common goal, says David Wolpert, an expert in complex systems at NASA's Ames Research Center in Moffett Field, California: “Things become really interesting when the agents aren't little clocks but computer chips.” Such studies will be crucial, Wolpert adds, as computers evolve from machines that perform specific tasks, by following strict rules, to more adaptable entities that can work together and find their own ways to solve larger problems.


    Winning Streak Brought Awe, and Then Doubt

    1. Robert F. Service

    Bell Labs researchers' now-clouded string of papers set other scientists agog over results that promised to revolutionize several fields. Then the storm broke

    Even in the best of times, 95% of Jan Hendrik Schön's experiments fizzled out. It's the other 5% that made him one of nanotechnology's brightest stars and the envy of physicists worldwide. In paper after paper, he and his collaborators reported that a simple turn of a dial could transform normally poor electrical conductors into semiconductors, metals, or even superconductors, a malleability never seen before. That opened up entirely new vistas for exploring the physics of materials—the stuff of Nobel Prizes.

    Today, though, few researchers would trade places with Schön, a physicist at Bell Laboratories, the research arm of Lucent Technologies in Murray Hill, New Jersey. On 10 May, Bell Labs officials launched an investigation of Schön's work, after outside researchers revealed what appears to be duplication of data in multiple papers (Science, 24 May, p. 1376). Schön is the lead author on all the papers under scrutiny and the only author whose name appears on all. The investigation is the first of its kind in the 77-year history of Bell Labs, the world's most famous corporate research outfit.

    Schön says he stands behind his measurements and is doing everything he can to cooperate with the inquiry, which is being conducted by an outside committee and is expected to be completed by the end of the summer. But, until then, a cloud hangs over a spectacular body of work.

    Some researchers say that the suspect data have cast doubt on all of Schön's results. “I can't trust any of the work,” says Harvard University chemist and nanotechnology expert Charles Lieber. Others, however, point out that much of the disputed data seems to be supporting material, not the primary results in each paper, which detailed the observations of everything from high-temperature superconductivity to quantum-mechanical signatures never before seen in organic materials.

    Unfortunately, Schön's most provocative results have not been independently verified, despite years of effort by other labs and tens of millions of dollars spent on research in the area. Even before the storm of controversy broke, other scientists were starting to raise questions about how Schön and his colleagues achieved their stunning results and why no one else has been able to repeat them.

    In interviews conducted over the past 6 months—most of them before the investigation began—the Bell Labs team and others in the field retraced the whirlwind trajectory of the work and weighed its enormous promise against those simmering questions. The answers, when they come, will have enormous significance not just for the fate of one bright young researcher but also for scientists around the world trying to follow his lead, and for the future of one of the hottest ventures in condensed-matter physics.

    A question of speed

    That venture got its start in Bell Labs' room 1E318, a somewhat dingy, crowded lab located one floor below the birthplace of the transistor. The room was the longtime lab of superconductivity physicist Bertram Batlogg. One day in the mid-1990s, Batlogg and his colleagues were brainstorming ideas about work on plastic electronics when he hit on one that he just had to try.

    High flyer.

    Jan Hendrik Schön dazzled physicists with results many have tried to emulate.


    A Bell Labs team led by physicist Ananth Dodabalapur, now at the University of Texas, Austin, had succeeded in making field effect transistors (FETs) using a variety of organic materials laid down in thin films. FETs are the bedrock electrical switches of computer circuitry. In a typical version, a pulse of electrons sent to one electrode, called the “gate,” creates an electric field that repels electrons sitting in the semiconductor lying directly below, effectively spiking it with positive charges. These charges boost the conductivity of this semiconductor “channel,” making it easier for electrons to flow through this channel between two other electrodes. And, presto, the device switches from off to on (see diagram, below).

    But electrons don't move at the same rate through all semiconductors. Dodabalapur's organic transistors weren't about to give Intel's inorganic ones a run for their money. They were painfully slow. Electrical currents crept through the organic channels at a pace orders of magnitude below their speed through even the worst silicon-based devices. The team didn't know whether organics were inherently slow conductors or whether the problem lay in the way the devices were constructed.

    Batlogg suggested a way to find out. In the thin organic films that Dodabalapur's team was using, the organics invariably organized spontaneously into tiny crystallites, like gravel on a path. It was possible that charges were zipping through the perfectly ordered organics within each crystallite but were getting hung up at the ragged borders as they hopped from one crystallite to the next. Batlogg suggested growing larger single crystals and using them to measure the speed of electrons. Because single crystals don't have grain boundaries, the researchers would see what the materials could really do.

    The catch was that making high-quality single crystals out of organics is much more easily said than done. “Organics are synonymous with crappy stuff,” Batlogg said in February. Even the best organic crystals typically harbor 1% to 2% impurities, often solvent molecules left over from their original synthesis. Their presence can disrupt the regular crystalline order of the material enough to make it impossible to grow a single crystal.

    Dodabalapur's team members were too busy with their thin-film transistors to steer their research in a new direction. So, Batlogg offered to help. In 1997, he and Bell Labs chemist Bob Laudise recruited Christian Kloc, a chemist then based in Konstanz, Germany, who was an expert at growing crystals. Kloc quickly hit upon a new strategy for both purifying organics and growing crystals at the same time. Kloc's progress meant Batlogg needed another set of hands to put the crystals through their electrical paces. His longtime friend (and Kloc's former boss) Ernst Bucher in Konstanz recommended Schön, who jumped at the opportunity and left for New Jersey even before finishing his Ph.D.

    Schön set up shop in Batlogg's lab, and the results came quickly. When Schön slapped electrodes on a variety of different organic crystals, none came even close to matching the speed of the standard crystalline silicon semiconductor. The best organic, called pentacene, just kept pace with low-grade amorphous silicon, a semiconductor commonly used in solar cells. Dodabalapur's team had already achieved similar speeds with their thin-film FETs—devices that were chock-full of grain boundaries.

    This was bad news for organic-crystal research. If grain boundaries didn't hinder the flow of current, then there wasn't much anyone could do to improve the crystals' plodding speed. Plastic electronic devices, it seemed, were destined to be slow.

    Up for a challenge

    The science of working with single crystals of organics, however, was just picking up speed. Horst Stormer—a Nobel Prize-winning physicist who was then at Bell Labs but has since moved to Columbia University in New York City—spurred the team on by issuing a challenge. “He said [that] if any semiconductor is decent, you can make it into a transistor,” Batlogg recalled. Transistors had already been made with thin films of organics. But those are relatively simple devices to make. Researchers place metal electrodes on a wafer of inorganic crystalline silicon, then add a layer of the soft organic material atop the wafer's tough ceramiclike surface.

    What Stormer was proposing was much harder: starting with one of Kloc's fragile, millimeter-sized single crystals of organics as the substrate and planting metal electrodes on top. The upside-down approach was necessary because organics grown on wafers spontaneously form tiny grains, or crystallites. As a result, single-crystal FETs must be built from the crystal on up. “The difficulty was the prospect of putting down hard materials on soft materials, held together only with weak van der Waals bonds,” said Art Ramirez, a physicist at Los Alamos National Laboratory in New Mexico, during an interview in March. “You have to do the deposition very, very carefully.”


    In an organic FET, electrons sent to a gate electrode induce positive charge in a semiconductor such as pentacene, causing current to flow.


    To succeed in making a single-crystal organic transistor, Schön needed another key ingredient: a thin insulating barrier to prevent charges from shuttling back and forth between the electrodes when they're not supposed to do so. All transistors rely on such insulating barriers, which come in a wide variety of chemical flavors.

    Schön decided to make his insulator out of aluminum oxide, a decision that became the key to the group's biggest successes and its greatest mystery. Not long after joining Bell Labs, Schön flew from New Jersey to Konstanz to finish up work on his Ph.D. “I got back to Konstanz and was sputtering aluminum oxide for solar-cell coatings,” he recalled in February. “No one else was using the machine, so I decided to try it out” for the organic transistors. The machine coated the organic crystals with a neat insulating layer.

    Later, Schön used a low-temperature scheme to deposit the gate electrode on top of the aluminum oxide while protecting the fragile organics. When he hooked up the electrodes to a power supply and flipped the switch, he recalled later, the results jumped off the screen. Not only had the fragile organic crystals not cracked, broken, or turned to ashes, but they had changed from insulators to semiconductors, conducting current when prompted by the gate voltage.

    The result was a paper published in the 11 February 2000 issue of Science (p. 1022). “That was the first time the community took notice of the crystals,” Batlogg said.

    The paper caused a sensation in the condensed-matter physics community, because it held out the prospect that researchers could make electronic devices out of an enormous variety of organic materials. Those wouldn't necessarily be any better than silicon FETs for computer circuitry. But they would give researchers a new way to track how electrical charges move through a wide variety of materials. According to the Institute for Scientific Information, the paper has since been cited 130 times, making it not only Schön's most highly cited paper but also one of the top 0.01% of all physics papers published in 2000.

    Schön, Batlogg, and Kloc were just getting warmed up. And Stormer was ready with a new challenge. “Horst said, ‘Any real semiconductor has a quantum Hall effect,’” Batlogg noted. The effect, a stepwise change in voltage that occurs when a semiconductor studded with electrodes is placed in a magnetic field, is a hallmark of the quantum-mechanical behavior of electrons. Most physicists thought it could be observed only in materials so pure that electrons move through them without scattering off obstacles. Because organic crystals normally harbor so many impurities, “I never thought we'd see it,” Schön said.

    But, on 23 December 1999, just before taking off to Germany for the holidays, Schön ran the experiment and reported seeing the telltale voltage steps. “We showed the result to our [Bell Labs] colleagues, and everyone thought we were joking,” Batlogg said. Added Schön: “It was a nice Christmas present.” Later, Schön also noted that he had witnessed a related effect called the fractional quantum Hall effect, the effect for which Stormer had shared his Nobel Prize in 1998. “Wow. I thought this was fantastic,” Stormer recalled in an interview a week before news of the disputed figures broke.

    Superconductors and beyond

    The Bell Labs trio didn't revel in its success for long. By now, Kloc was churning out high-quality crystals of a variety of organics, including ones made of C60, the soccerball-shaped carbon molecule also known as a buckyball or buckminsterfullerene. Back in 1991, a Bell Labs team that included Ramirez had turned the normally insulating C60 into a superconductor by spiking it with potassium atoms. The potassium atoms harbor extra electrons, which can move through the crystal and pair up as they go, a signature of superconductivity. Theoretical results suggested that if researchers could add three extra electrons for each C60 molecule in the crystal, they could get it to superconduct without potassium atoms.

    Where to get those electrons? Schön, Batlogg, and Kloc wondered whether they could use the electric fields produced by their FETs to yank them from the FET electrodes and shunt them into a channel made from C60. If the density of charges got high enough, perhaps the material would resemble a metal like copper, or, if they got really lucky, perhaps even a superconductor.

    But that wasn't a simple proposition. The density of free electrons in a metal is at least 1000 times that of a semiconductor. “If you want to go to very high [electron] concentrations, you have to apply a very high field” to the gate electrode, Schön said. Normally, that turns the organic to ash. So Schön needed not only to build FETs atop C60 but also to have the organics and the aluminum oxide insulator withstand withering electric fields. Nearly all his attempts failed. But in a few cases, Schön's seemingly magic layer of aluminum oxide somehow handled the high currents. In the 28 April 2000 issue of Science (p. 656), Schön and his colleagues reported that they had coaxed potassium-free crystals of C60 to superconduct at 11 degrees above absolute zero.

    Not bad for starters. But they dreamed of achieving even higher superconducting temperatures. Theorists had suggested that C60 could reach such temperatures if it could be made to conduct positively charged “holes” instead of electrons. Holes are vacant electron sites and can move through a material just as electrons do. But, although chemists could add extra electron-carrying atoms such as potassium into a C60 crystal, there was no chemical method for adding holes.

    FETs, however, can manage the task handily. All the researchers had to do was simply reverse the polarity on the electrodes to pull electrons off the C60's. In the 30 November issue of Nature, Schön, Batlogg, and Kloc reported that the scheme worked just as theory said it should, allowing a C60 crystal to superconduct at 52 kelvin. Less than a year later, they reported in Science that they had tweaked the C60 crystals to push the superconducting temperature up to 117 K (Science, 31 August 2001, p. 1570).

    The papers not only shattered the record for superconductivity in C60 but also offered researchers the heady prospect of finding high-temperature superconductors without painstakingly doping each material with different impurities. “They defeated chemistry,” Princeton University physicist Bob Cava said in February.

    More marvels were to come. In 2000 and 2001, papers by the trio were flooding the journals. Schön and collaborators both inside and outside of Bell Labs reported turning organics into everything from light-emitting lasers to light-absorbing photovoltaic devices. They made plastic, a notoriously messy organic compound, superconduct. And they showed that their high-field FETs could work just as well with inorganic superconductors, a development that promised to revolutionize the field of high-temperature superconductivity.

    As if that weren't enough, while Schön was rewriting the textbooks on condensed-matter physics, he was also busy pioneering a separate field: molecular-scale transistors. In the 18 October 2001 issue of Nature, he and Bell Labs colleagues Zhenan Bao and Hong Meng reported making a novel type of transistor in which the key charge-conducting layer was composed of a single layer of an organic conductor. They followed that with a report in the 7 December issue of Science (p. 2138) describing how they diluted the charge-conducting layer with nonconducting insulating molecules, allowing them to track the conductivity in a transistor through a single molecule. Together, the results were hailed as a triumph of molecular-scale electronics.

    Through the beginning of this year, Schön had racked up 15 papers in Science and Nature, as well as dozens in other journals. Between 1998 and May 2002, he published more than 90 papers and was lead author on 74 of them, a staggering level of productivity. Of his top 20 papers, all are in the top 10% of physics papers for their number of citations, with eight in the top 0.1% (see table). Many of those citations are no doubt Schön and colleagues citing their own previous work. Nevertheless, the work clearly captured the imagination of others in the community. “Hendrik has magic hands,” Ramirez said in the March interview. “Everything he does seems to work.”


    But, even before the revelations in May, questions began to swirl around Schön's work. His magic, other researchers noted nervously, didn't seem to work for anyone else. Over the past 2 years, efforts to slap high-field FETs on different materials have become one of the hottest endeavors in condensed-matter physics. The U.S. Department of Energy, for example, has helped fund a new group specifically to try to reproduce and extend the Bell Labs results.

    Crystal balls.

    Bell Labs team reported new records for superconductivity in C60.


    So far, however, the well is dry. “It's very unusual to have a result that is 2 years old that hasn't been reproduced,” Richard Green, a physicist at the University of Maryland, College Park, said during an interview in February. Added Robert Dynes, a physicist and chancellor of the University of California, San Diego: “Some people are frustrated and discouraged.”

    Some researchers also complain that Schön's early papers left out key details needed to reproduce the work. “There is an uneasy feeling around the community,” said Teun Klapwijk, a physicist at the Delft University of Technology in the Netherlands on 2 May, just before the discovery of the apparent duplication of data. “Why do the papers have so little detail? It's such a unique case of a whole string of papers where each paper shows you the beautiful result you want to see: the Hall effect, lasing, the quantum Hall effect, superconductivity. People kept feeling, ‘Is that possible? How do you produce so many results? Is that physically possible?’”

    Team effort.

    Bell Labs colleagues such as Ananth Dodabalapur, Zhenan Bao, and Christian Kloc were among Schön's many collaborators.


    Much of the concern boiled down to Schön's aluminum oxide. Making the electron barrier is a piece of cake. You just vaporize the materials in an apparatus called a sputtering machine and let the vapor rain down on your sample. Add the electrodes, and you're in business. “You really need very little to get into the game,” said Ramirez. But when most researchers start turning up the electric fields to drive electrical charges into their organic materials, they can create fields of only about 10 million volts per centimeter before the aluminum oxide starts to bubble, turn black, and vaporize, taking the fragile organics along with it. Somehow, Schön's aluminum oxide gets up to 45 million volts/cm—nearly five times higher than anyone else's. “What is the trick?” asked physicist Arthur Hebard of the University of Florida, Gainesville. IBM physicist John Kirtley agreed: “That's the $64 million question.”

    Magic box

    In an interview last February, Schön, Batlogg, and Kloc said they were as eager to find out the answer as everyone else. “I wish we knew,” Batlogg said. Added Schön: “If we knew, then we wouldn't have to waste 18 samples out of 20.” Schön said he had looked at the aluminum oxide layers under ultrahigh magnification but found nothing remarkable—just a noncrystalline amorphous layer of aluminum oxide. Whatever the secret, it seems unique to Bucher's sputtering machine in Konstanz, the only place where Schön has managed to grow aluminum oxide layers that withstand the high fields.

    Sputtering machines are commonplace in the world of semiconductor electronics, and Bucher's is a run-of-the-mill one at best. The machines vaporize their targets in a vacuum to ensure that outside compounds don't find their way onto a sample. But the vacuum in Bucher's machine is “lousy,” said one researcher, capable of reaching a pressure of only 10⁻⁶ torr. By contrast, state-of-the-art molecular-beam epitaxy machines—devices used to lay down vaporized materials one atomic layer at a time—can reach 10⁻¹² torr.

    The upshot, says one researcher, is that Bucher's machine isn't just laying down aluminum oxide: “It has everything in it—your breath, water, other gases.” That rain of mystery compounds, the Bell trio speculates, might somehow toughen the material against meltdowns, perhaps by plugging defects that would otherwise snag electrical charges.

    Even if other groups manage to make aluminum oxide that's stable in high electric fields, high-field FETs will face further hurdles. For the devices to work, both the underlying crystals and the interfaces between the different layers of semiconductors, insulators, and metals need to be nearly perfect to prevent charges from getting hung up as they travel between layers and burning out the device. As a result, Texas's Dodabalapur said at a meeting of the American Chemical Society (ACS) in April, high-field FETs are so fragile that getting them to work “requires the skill of a jeweler, the persistence of a saint, and the background of a physicist.”

    Even so, some teams believe they're making progress. Ramirez's group at Los Alamos has made the most headway. At the March meeting of the American Physical Society in Indianapolis, Indiana, Ramirez reported that when he and colleagues ran currents through FETs they had created using C60 crystals made by Kloc, they saw signs of the organic's behaving like a metal—although not a superconductor.

    Clouded prospects

    That was where Schön's saga stood in early May. Since then, the latest chapter—revelations of possible duplication of data—has cast doubt on all that went before. It would make matters far simpler if Schön could submit his best FETs for independent testing or invite other researchers to make measurements on his equipment. But that's not possible. The small fraction of Schön's FETs that did work in the past were either fried in the process or have degraded, he says. Worse, Schön's magic in making his high-strength aluminum oxide seems to have evaporated, as even he has been unable to reach the high electric fields for about the last 6 months. “We have the same problems now as everyone else,” Schön said at the April ACS meeting. “It has been frustrating. We can empathize with what others have been going through.”

    Now, one of the most exciting strings of results in modern physics is under a cloud. It's impossible to say which work will stand the tests of time and intense scrutiny. “Maybe some of the most dramatic stuff is right,” says James Heath, who heads the California NanoSystems Institute at the University of California, Los Angeles. “What's harder to believe: that everything is wrong, and they made up all of the data, or that some of it is real? It's easier to believe that there are some legitimate results.”

    If Schön's results hold up, they would point the way to exciting physics and novel devices. If not, the loss could be devastating—not just for the careers of those directly involved but for the credibility of Bell Labs, condensed-matter physics, and science as a whole. Research on organic electronics would of course press on. But, it would march less resplendently than it did before Hendrik Schön set foot in Bell Labs 4 years ago.


    Graph Theory Uncovers the Roots of Perfection

    1. Dana Mackenzie*
    1. Dana Mackenzie is a writer in Santa Cruz, California.

    A newly minted proof tells how to recognize which arrangements of points and lines are the crème de la crème

    To some, perfection is priceless. But for four graph theorists, it has a very specific value. If their solution to one of the oldest problems in their discipline—a classification of so-called perfect graphs—holds up, they will reap a $10,000 bounty.

    The strong perfect graph conjecture (SPGC) has perplexed mathematicians for more than 40 years. “It's a problem that everyone in graph theory knows about, and some people in related areas, particularly linear programming,” says Paul Seymour of Princeton University, who announced the proof at a meeting of the Canadian Mathematical Society last month. Its solution might enable mathematicians to quickly identify perfect graphs, which have properties that make otherwise intractable problems involving networks easy to solve.

    The graphs in question consist of nothing more than dots and lines. Each line connects exactly two dots, or nodes. The SPGC grew out of mathematicians' fascination with coloring graphs in such a way that no two nodes of the same color are connected, a problem rooted in the real-world business of coloring maps. When Wolfgang Haken and Kenneth Appel proved the famous Four-Color Theorem for planar maps in 1976, they did it by means of graph theory.

    Coloring problems make sense for other kinds of graphs as well. In a cell-phone network, for example, the nodes are transmitters, the lines connect any two transmitters whose ranges overlap, and the colors correspond to channels. Coloring the network amounts to assigning channels so that no adjacent transmitters broadcast on the same channel. Of course, the phone company would want to use the smallest possible number of channels, which is called the chromatic number chi (χ) of the network.

    It's easy to see that any group of nodes that are all connected to one another must all be different colors. Graph theorists call such a dense web of nodes a clique. Thus, in any graph, chi has to be at least as large as the size of the biggest clique, a number known as omega (ω). In a perfect graph, in fact, chi and omega are exactly equal. More than that, they stay equal no matter how many nodes you knock out of the graph, as long as the remaining nodes keep their links intact. A cell-phone network based on a perfect graph could run with optimum efficiency even if some of its transmitters were knocked out (although it would cover less area).
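Both quantities are easy to check by brute force on small graphs. The sketch below (not from the article; all function names are mine) computes chi and omega for ring-shaped graphs, showing why a 5-node ring, the smallest "odd hole," cannot be perfect:

```python
from itertools import combinations, product

def clique_number(nodes, edges):
    """Size of the largest set of mutually connected nodes (omega)."""
    e = {frozenset(p) for p in edges}
    for size in range(len(nodes), 0, -1):
        for sub in combinations(nodes, size):
            if all(frozenset(p) in e for p in combinations(sub, 2)):
                return size
    return 0

def chromatic_number(nodes, edges):
    """Fewest colors so that no edge joins two same-colored nodes (chi)."""
    for k in range(1, len(nodes) + 1):
        for coloring in product(range(k), repeat=len(nodes)):
            color = dict(zip(nodes, coloring))
            if all(color[a] != color[b] for a, b in edges):
                return k
    return len(nodes)

def cycle(n):
    """A ring of n nodes, each linked only to its two neighbors."""
    return list(range(n)), [(i, (i + 1) % n) for i in range(n)]

# An even ring has chi == omega (2 == 2); a 5-ring needs a third
# color (chi = 3) although its biggest clique is a single edge
# (omega = 2), so it is imperfect.
for n in (4, 5):
    nodes, edges = cycle(n)
    print(n, chromatic_number(nodes, edges), clique_number(nodes, edges))
```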

    A perfect graph is like a perfect chocolate cake: It might be easy to describe, but it's hard to produce a recipe. In 1960, however, Claude Berge, a mathematician at the Centre National de la Recherche Scientifique in Paris, did just that. He noticed that every imperfect graph he could find contained either an “odd hole” or an “odd anti-hole.” An odd hole is a ring of an odd number (at least 5) of nodes, each linked to its two neighbors but not to any other node in the ring. An anti-hole is the reverse: Each node is connected to every other node in the ring except its neighbors.

    Berge boldly conjectured that any graph that avoided these two flaws (such graphs were later named “Berge graphs”) would be perfect (see figure). But he couldn't prove it, and his speculation became the SPGC. At the same time, Berge also ventured a less definitive pronouncement known as the “weak” perfect graph conjecture.

    Net watch.

    In a web of cell-phone transmitters based on a perfect graph, the smallest number of channels needed to avoid interference (chi) equals the largest number of interconnected nodes (omega). Adding two transmitters that create an “odd hole” (arrows) makes the graph imperfect.


    Seymour, along with co-authors G. Neil Robertson of Ohio State University in Columbus and Robin Thomas of the Georgia Institute of Technology in Atlanta, began working on the SPGC in 2000. They were motivated in part by a grant from the privately funded American Institute of Mathematics in Palo Alto, California, which promotes work on high-profile unsolved problems. Although mathematicians have identified a dizzying variety of perfect graphs—96 types at last count—the three researchers focused on just two types, called bipartite graphs and line graphs, as well as their antigraphs. Following a strategy suggested by another perfect-graph aficionado, Gerard Cornuejols of Carnegie Mellon University in Pittsburgh, Pennsylvania, they proved that any Berge graph that is not one of these types can be decomposed into smaller pieces that are.

    Cornuejols had done more than just suggest a strategy. He put his money where his mouth is, offering $5000 for the proof of a particular step that had eluded him, as well as $5000 for completing the proof of the full SPGC. To collect the prize, Seymour and his collaborators will have to get their work published, which might take a year or more while other graph theorists scrutinize a proof that will likely run to 150 or 200 pages.

    The early betting is that they will collect the prize. “I don't know the details of the proof, but I trust that it is basically correct,” says László Lovász of Microsoft Research, who proved the weak perfect graph theorem in 1972. “In the first version of a complicated and long proof like this, there are always some gaps and, at the same time, often substantial possibilities for simplification. But I do know the general plan, and I have no doubt that it is now working.” Late last month, Seymour and his student Maria Chudnovsky presented more details at a workshop in Oberwolfach, Germany. András Sebö of the Institut d'Informatique et Mathématiques Appliquées in Grenoble, France, who co-organized the workshop, says the excellent track record of the authors, and the ease with which Seymour and Chudnovsky fielded all questions, leave “not much doubt” that the proof will hold up.

    Before the workshop, Seymour e-mailed news of the proof to Berge, who proposed the problem. He later heard that Berge, who is seriously ill, had the message read to him in the hospital. “He was happy,” Seymour says.


    Beautiful Bioimages for the Eyes of Many Beholders

    1. Vivien Marx*
    1. Vivien Marx is a freelance writer based in Boston, Massachusetts, and Frankfurt, Germany. Her latest book is The Semen Book.

    A handful of image-sharing databases and software systems is becoming available, and these projects might change the way biologists look at their own and other researchers' data

    For some biologists, seeing is believing. They apply fluorescent tags to track proteins, use multidimensional microscopes to watch embryonic development unfold, or spy on endangered species' mating habits using video. Yet, only a fraction of these images finds its way into publications, and then only in a static, two-dimensional format.

    Many labs have treasure chests of images never seen by colleagues, and researchers are now trying to find ways to share the wealth. It's not strictly a philanthropic impulse: Sharing images would save time, allowing researchers to compare and build on one another's findings. It might even lead to research projects that would be impractical for just one lab. “In today's lab meeting, someone showed an image—[the person] had spent 8 hours at the microscope just collecting the data on this one image,” explains Scott Fraser, director of the Biological Imaging Center at the Beckman Institute of the California Institute of Technology (Caltech) in Pasadena. “It would save others a lot of time if [those] data could be used more than once.”

    Several projects now coming online aim to make images accessible to all. To do this, researchers are developing new ways to store, link, search, and retrieve biological data. The situation today is like “a whole bunch of blind people all feeling the same elephant—and we don't really realize what the others are about,” says G. Allan Johnson, director of the Center for In Vivo Microscopy at Duke University. “The elephant we are all connected to is biology.”

    The obstacles these projects face loom large. Aside from the technical difficulties of creating user-friendly databases and interconnected networks of images in the scientific literature, there are pesky legal and ethical questions. It's not clear who will own the copyright to images in a database. And how do you give credit to the people who generate an image if your work builds on theirs? “It is easy for someone ancient like me to be generous, because I do not have to worry about getting the next job, but the younger scientists are concerned,” says Fraser.

    The big picture

    A project called BioImage, funded by the European Commission (EC), leads the effort to store and mine images in the scientific literature. It was thought up in the mid-1990s by a handful of scientists in Germany, Switzerland, the United Kingdom, and Spain; the project is now run mainly by cell biologist David Shotton of Oxford University. Together with Oxford's Steffen Lindek, a consulting biophysicist, and other colleagues, Shotton is developing what will be a Web-accessible database including everything from three-dimensional confocal microscopy images to time-lapse videos of cells to wildlife photos and videos. The goal of the project is to develop sophisticated image bioinformatics that will, in Shotton's words, “ensure that our image repositories become knowledge resources rather than data graveyards.”

    BioImage, which is part of an EC research initiative called Online Research Information Environment for the Life Sciences, obtained a 3-year, 3-million-euro grant in January. BioImage builds on a prototype image database created by a related EC project. Now, Shotton and others are improving user interfaces and gutting the scaffolding of the database. By September, they expect, they will be able to incorporate not only microscopy images but also wildlife photographs and videos.

    The first contributions to the new database will be images from the Journal of Microscopy. BioImage set up an agreement with the journal to bank the images associated with biological papers accepted for publication, including some images that won't appear in the journal. “We hope that a culture will develop for submission of images to the BioImage database, [one] that resembles the present culture of submission of sequence data and crystallographic information to the public databases,” Shotton says.

    BioImage won't warehouse all the images, though. In many cases, it will store descriptions and low-resolution visual previews and will direct searchers to sites where the high-resolution versions are parked. Most journals are expected to keep copyrighted images on their own servers rather than storing copies in the BioImage database.

    Because images will need searchable tags, they will be equipped with metadata such as a list of authors, a description of the experiment or observation, and details such as the microscope's aperture and exposure. BioImage will hold the copyright to the metadata, but the image copyright may be elsewhere—for example, with a publisher. The metadata and images stored in BioImage will be freely available for noncommercial research and educational uses. It's not yet clear what fees commercial researchers will have to pay. And all users will have to contact original copyright holders for images not kept in the database. BioImage researchers are also hoping to convince publishers to add tags to individual images within papers, to facilitate image searches.

    Lindek, who developed the database software, foresees a time when researchers will do new virtual experiments using banked images. In accessing raw three-dimensional data from a confocal image, for instance, a researcher might download a stored image and reevaluate it from a new perspective or apply a different algorithm, perhaps obtaining publishable results. “In this sense, BioImage may allow a new kind of science to occur,” says Shotton.

    Professor Picasso's fluorescent period

    BioImage has a broad mission, but other groups are setting up databases with more limited objectives. In the United States, the National Institutes of Health is sponsoring several focused projects, including one looking at genes active in a mouse's nervous system and another looking at slices of human brains. Another effort tracks cells as they move.

    To explore how the nervous system develops, Nathaniel Heintz and Marybeth Hatten of Rockefeller University in New York City, along with Alexandra Joyner at New York University, are creating thousands of transgenic mice. They plan to insert fluorescent markers into genes. Then, they will capture images showing when and where the genes are active in neurons at embryonic and postnatal stages of growth. The tag they're using, green fluorescent protein, makes lovely images: “With this marker you can see the morphology of each neuron and tell the kind of neuron that is developing,” explains Joyner.

    This project, dubbed the Gene Expression Nervous System Atlas, will expand the number of people with access to these expensive, high-resolution pictures. Hosted by the National Center for Biotechnology Information, the database is expected to come online this year.

    Another collection will hold high-resolution images, at a range of scales and collected using a variety of methods, of human brains. Work on the Biomedical Informatics Research Network (BIRN) began last fall. The initiative, which focuses on brain disorders, started with a $20 million grant to Duke University, Harvard, Caltech, the University of California (UC), San Diego, and UCLA, to create an information infrastructure that permits large-scale data exchange. “Neuroscientists studying such devastating diseases as Parkinson's disease and schizophrenia [will be able] to integrate information ranging in scale from the whole brain down to a single neuron,” explains Duke's Johnson, a principal investigator of a related project called the Mouse Brain Imaging Research Network (MBIRN).

    Technicolor trajectories.

    Studies of genes active during development will illuminate images of embryonic mice.


    For both BIRN and MBIRN, which will focus on mouse models of human neurologic diseases, researchers will store images at their own institutions. But their troves will be linked by a so-called storage research broker, a program currently being developed at the UC San Diego Supercomputer Center. Much as in BioImage, a query will take a researcher to wherever the images are, in something of a scientific equivalent of an Internet search engine.

    Fleeting data.

    Sophisticated software is necessary for quantifying and keeping track of the changes visible as cells divide.


    Right now, BIRN scientists are hammering out ways to manage these specialized databases and their connections. As Johnson explains, they're trying to develop the infrastructure with a constant eye toward biological research, basing their networks on, for example, a particular mouse model of a human disease that a lot of researchers are using. Says Johnson, “We [hope we] have the right collection of neuroscientists and geeks pounding on the same problem.”

    Meanwhile, another group is developing a small software project with a large scope. Open Microscopy Environment (OME) is the brainchild of three scientists: Ilya Goldberg and Peter Sorger of the Massachusetts Institute of Technology and Jason Swedlow of the Wellcome Trust Biocentre at the University of Dundee, U.K. This project will allow researchers to keep track of data that amass during observations of cells and their behavior. The team's open-source software, now in beta testing, is free to download.

    Pointing to a blue and green glowing, diamond-shaped structure on his computer screen, which depicts a cell going through mitosis, Swedlow says, “Just about anyone can fix a cell, label what is of interest, and create these kinds of images. Even labeling a protein in live cells and getting those images is not really the problem anymore.”

    What is now of interest, explains Swedlow, is to go beyond the pretty pictures to probe the quantitative information in the image. For example, it is routine to record time-lapse movies of many cells at once, by quickly focusing and refocusing on different ones. A researcher might be looking at how different drugs or different mutations affect the cells. The challenge, Swedlow says, is “finding what has changed, what has moved, measuring in what way fluorescence has shifted, and evaluating those differences.”

    Popular algorithms are now available for analyzing images, but the summary results—the metadata—tend to get buried by technological change. Each time a researcher uses a different software package, some of the old data are left behind. There are no accepted standards for storing the image data, a basic problem that, Swedlow says, “we want to solve.” OME establishes a local database management system that stores both the data and metadata. “By preserving the relationships between them, it becomes easier to evaluate large numbers of images.”

    Like BioImage and BIRN, OME draws on the concept of federated databases, enabling information from several sources to be linked and exploited. “Creating these links is the only way to allow this collection to be queried for a search,” explains Swedlow. This and other new strategies might soon be ways to raise biological images from obscurity back into the realm of the visible and findable.


    Data Treasures of the Test Ban Treaty

    1. Richard Stone

    A network of sensors meant to detect nuclear blasts could help scientists study everything from whales to volcanoes. But will they be allowed to use it?

    VIENNA—At the height of Mount Etna's eruptions in July 2001, magma tore from the cones at terrific speeds, unleashing booms that thundered in the villages at the volcano's base. Although the sound-and-light display was for locals only, scientists hundreds of kilometers away were tuning in to Radio Free Etna. The shaking mountain, with its roiling ash cloud, acted like a gigantic transmitter, triggering pressure waves that undulated through the atmosphere. These waves, with wavelengths measured in hundreds of meters, were below the threshold of human hearing. But, in the Netherlands, a new infrasound detector array registered the waves and pinpointed their origin to Sicily.

    The Etna data are the latest in a series of dazzling observations suggesting that a global surveillance network, designed to eavesdrop on clandestine nuclear tests, could become a powerful new tool for the scientific community.

    To verify that countries adhere to a test ban, the Comprehensive Nuclear Test Ban Treaty (CTBT) of 1996 mandates stringing Earth with listening posts for the telltale signatures of nuclear blasts. According to Peter Marshall, a forensic seismology expert at the Atomic Weapons Establishment in Aldermaston, U.K., the 321-station network of detectors for seismic, infrasound, and hydroacoustic waves and radionuclides, called the International Monitoring System (IMS), will provide “a view of the globe we have never had before.”

    That is, if the network's political masters allow that to happen. About a third of the planned $250 million IMS network is now up and running, but data are being fed only to government-run centers in a few dozen nations that have signed the treaty. The Etna findings, which came from the Royal Netherlands Meteorological Institute's infrasound station—one of the few that's not part of the IMS network—only hint at the IMS gold mine that's tantalizingly out of reach.

    Wiring the world.

    Test ban treaty staff check out an infrasound station in Greenland (top) and lay cables for a hydroacoustic station in the Indian Ocean.


    Even government researchers at labs in CTBT countries must jump through a series of hoops to get their hands on the data. “Access to data is very cumbersome,” says Domenico Giardini, director of the Swiss Seismological Service in Zurich. Only the CTBT parties can authorize the data's release, on a case-by-case basis. Negotiations under way on a common release policy are contentious.

    But the scientific community at large has an unlikely ally in the quest to plunder this data treasure trove: uncertainties about the future of the CTBT itself. For the treaty to come into force, 44 countries that possess nuclear weapons or research reactors must ratify it. So far, 31 have, but the present government of one dominant member of this elite group—the United States—opposes ratification, leaving the treaty in limbo. With realization dawning that the IMS might not be called upon to verify treaty compliance for years to come, scientists both inside and outside the CTBT Preparatory Commission are building a case for civilian and scientific uses for the network, which is being funded by the treaty parties in anticipation of eventual ratification. Possible uses range from tracking whale populations to providing early warnings of volcanic eruptions that threaten airplane routes.

    Casting a wide net

    Expected to be fully working by 2007, the IMS is shaping up to be the most extensive network of geophysical stations ever built (see map below). Currently, a few dozen stations each day pump about 4 gigabytes of data to a processing center at the CTBT office in Vienna, Austria. Here, analysts screen out false hits and waveforms that are clearly of natural origin—such as earthquakes that occur so far below Earth's surface that they couldn't conceivably be clandestine tests—then compile daily reports on about 50 geophysical disturbances. These “events” include everything from moderate-sized earthquakes to streaking meteors. The young network, along with independent seismic stations, also spotted the nuclear tests conducted by India and Pakistan in 1998.

    Hard to get past the sensors.

    Hundreds of stations will monitor waves in air, ground, and water; others will sniff for telltale products of nuclear fission.


    One of the mightiest assets of the full IMS will be coverage so complete it would make the likes of Motorola salivate. “This is the first time we'll have a global seismic network in real time,” says Sergio Barrientos, chief of seismic monitoring at the CTBT Preparatory Commission. Because the most likely place for a country to try to hide a nuclear explosion is underground, the treaty calls for a globe-girdling web of 50 seismic wave detectors, some in extremely remote locations. If any detector gets a hit, a computer in Vienna springs into action, requesting supplementary readings from the nearest detectors in an auxiliary network of 120 existing stations.

    In a closed meeting held in London last May to discuss civilian and scientific applications of the IMS, experts agreed that the network could provide valuable data indeed. Real-time IMS seismic data “would greatly improve the accuracy and timeliness of reports on earthquake location and magnitude” and can help direct relief efforts after large earthquakes by estimating the size and frequency of aftershocks, according to a meeting summary obtained by Science. Seismologists are hungry for data—in particular, from the subset of IMS stations set up as arrays, in which a number of identical sensors are positioned hundreds of meters apart. This slashes noise dramatically, giving a precise location of the origin and direction of a seismic wave. Such data can help map the geometry of faults and perhaps lead to insights about how they rupture. “All the classical fields of research in seismology are interested in these data,” says Michel Granet of the Institut de Physique du Globe in Strasbourg.

    Existing seismic networks simply can't match the IMS for its global coverage, real-time data retrieval, and the tens of millions of dollars being spent on detectors and installation. “We have always struggled to get instrumentation like this,” says Giardini. The technical specs are truly impressive, he adds: “If they drill a hole, they drill it deeper than other networks [do].”

    Demand is already high for data from the hydroacoustic sensors, which can do more than just listen for muffled undersea explosions. Scientists with the Acoustic Thermometry of Ocean Climate (ATOC) project—which uses low-frequency pulses to measure ocean temperatures—are now negotiating with CTBT officials over access to high-resolution hydrophone data that can help refine their ocean temperature measurements. And two IMS hydrophones operating in the Indian Ocean last March picked up the underwater groans of the Larsen B ice shelf before a chunk bigger than Luxembourg broke away. The hydrophones and T-phase stations—designed to detect the conversion of hydroacoustic energy to seismic energy when underwater sound waves impinge on a land mass—also listen in on the chatter of marine mammals and thus offer a way to track population groups, says Marta Galindo, a hydroacoustic expert with the CTBT Preparatory Commission. “We detect a lot of biological activity,” she says. Emergency officials also could use the data for early warnings of tsunamis and to warn ships about the activity of underwater volcanoes.

    Perhaps the greatest uncharted horizons lie in the exploitation of infrasound. Such waves escaped detection well into the 20th century, but they are now known to emanate from everything from nuclear explosions and airplanes to earthquakes and auroral displays. Researchers want to use the strong infrasonic waves generated by severe storms to refine models of how hurricanes and typhoons develop. Crude infrasound stations were set up in the 1950s and 1960s to detect atmospheric bomb blasts, but they were neglected after atmospheric testing was banned. “A lot of knowledge was forgotten,” says Thomas Hoffmann, an infrasound specialist with the CTBT Preparatory Commission.

    The main challenge for the infrasound arrays is latching onto the airborne waves—“a very strange beast,” says Hoffmann. Windows and building walls, for example, are practically transparent to the waves, which are perturbed easily by wind and atmospheric disturbances. The detectors are composed of vertical inlet pipes laid out in cloverleaf formations on the ground, as if they're catching water. Design improvements and better analytical computer programs have helped home in on the origins of signals, as in the Etna observations. Infrasound data could help steer planes clear of volcanic plumes, particularly in remote regions such as the Russian Far East. “There's an interest in the aviation industry in knowing where these ash clouds go,” says Hoffmann.

    Moreover, Hoffmann adds, “there are a lot of applications that nobody has thought of before.” For instance, because the upper atmosphere reflects infrasonic waves, these can be used to probe conditions in a region much higher than where weather balloons can reach. And infrasound sensors in Canada, Germany, California, and Hawaii last year detected the titanic explosion of a huge meteor over the Pacific.

    The CTBT parties hold the key to probing many of these promising research avenues. They're due to discuss data access issues at a CTBT conference in Vienna next month, although a policy is not expected to materialize imminently. While opposed to the treaty, the Bush Administration backs the IMS. “It's a fantastic resource, and we're committed to seeing it completed and the data available to the public,” says an official at the U.S. State Department. It's in U.S. interests to sustain the IMS, observers point out, as the network is a unique source of data from sensitive regions such as China and Russia.

    But China and a handful of other countries argue that IMS data should generally be reserved for their intended uses and should not be distributed to states that are not CTBT parties, except for relief after disasters such as large earthquakes. China has argued that data release could compromise national security, says Oliver Meier, an arms control expert at the Verification Research, Training, and Information Centre in London. Other nations have expressed concern that nongovernmental groups could use data to level false allegations at states. So far, China has blocked release of IMS data beyond governmental data centers in each of the 53 countries now registered to receive data. Parties that back open scientific access will have to overcome these objections before a common policy emerges, as the CTBT parties make decisions by consensus.

    Perhaps the most sensitive information is in the radionuclide data. These 80 planned detectors, which act like vacuum cleaners sucking up particulates, are designed to collect smoking guns of a treaty violation: short-lived radioisotopes that could be spawned only in a fission reaction. Providing free access to such data is unlikely, as it could tip off watchdogs to nuclear accidents or routine pollution. “Governments want to know about the data before it comes to public light,” says Joachim Schulze, chief of radionuclide monitoring at the preparatory commission.

    The nuclear power industry is wary as well. “A director of a nuclear facility wouldn't want to be confronted with data,” Schulze notes. Nevertheless, says one official, this information “would be of great value if there were another Chernobyl-type disaster.” But scientists are clamoring for at least one set of radionuclide data: levels of the noble gas xenon. Because xenon doesn't readily react with other molecules, the data could come in handy for calculating the wind patterns that sweep molecules across the globe.

    The IMS might indirectly help overcome opposition to the treaty itself by building confidence in the technology. The Bush Administration and other critics argue that the treaty is not verifiable, as the IMS network might miss very low-yield nuclear tests or blasts that are “decoupled” from the environment, say in an underground cavern built to dampen a blast's energy. But, although the treaty has stringent minimum requirements for IMS sensors—the network is designed to detect explosions of at least 1-kiloton yield—the IMS has shown that it is sensitive enough to detect energy releases on the order of tens of tons. And there's nothing stopping the preparatory commission from upgrading the IMS to even sharper-eared sensors. Even without upgrades, says Marshall, scientists have much to look forward to. “It's just mind-boggling what might be achieved,” he says.


    Comet Chasers Get Serious

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    Comets, once considered bad omens, might hold the secret of where Earth's water, and even life, came from. Researchers should soon know much more about these visitors from the outer solar system

    Three years from now, on Independence Day 2005, a NASA spacecraft will attack another body in the solar system. No, this is not the plot of the latest summer movie about interplanetary warfare. Researchers hope that by firing a projectile at comet Tempel 1 and studying the resulting crater and debris, they will learn what it is made of.

    It won't be the only intense encounter with a comet in the next few years. Scientists are in the process of launching an unprecedented clutch of missions to understand these spectacular visitors from the outer solar system. By the time Tempel 1 comes under attack, a spacecraft scheduled for launch earlier this week, called Contour, should be in the midst of a solar-system-wide tour of two or three other comets; dust captured from yet another comet should be on its way back to Earth for analysis; and the first comet lander in history should be headed for its target. “We're about to enter into a golden era” of cometary research, says Colleen Hartman, director of NASA's Solar System Exploration Division.


    Comets are the afterbirth of the solar system: fluffy aggregates of ice and rock just a few kilometers across that were never swept up by forming planets. Normally, they orbit the sun at extremely large distances, far beyond the outermost planets, but once in a while a comet gets knocked into a highly elliptical orbit that brings it swooping into the inner solar system, where it can be captured in a much smaller orbit.

    Once it is exposed to the heat of the sun, the comet's icy nucleus begins to evaporate, and Earth-bound observers can be treated to a spectacular light show of glowing and fluorescent tails of gas and dust. After many centuries, the comet ends up as a dark, porous cinder that resembles a rocky asteroid.

    In the past, the sudden appearance of a bright comet was generally regarded as a bad omen, but for present-day astronomers, these capricious objects are time capsules that may shed light on the formation of the planets and on the origin of life.

    “We want to know how comets work and what they are made of,” says Joseph Veverka of Cornell University in Ithaca, New York. “They may have brought water and organic molecules to Earth. We may really be the progeny of comets.”


    Astronomers got their first close-up view of a comet in 1986, when the European Space Agency's (ESA's) Giotto spacecraft sped past Halley's Comet (named after 18th century English astronomer Edmond Halley, who first calculated a comet's orbit and predicted that it would reappear at regular intervals). Giotto flew to within 500 kilometers of the comet's nucleus at a relative speed of 68 kilometers per second.

    Despite being bombarded and sandblasted by dust and grains from the comet, Giotto managed to take the first-ever pictures of a comet's nucleus. Halley turned out to be a very dark, irregular body, dotted with patches of geyserlike activity. Last September, NASA's Deep Space 1 obtained comparable results for comet Borrelly (Science, 5 October 2001, p. 27).

    These brief flybys have only whetted astronomers' appetites. They would love to study comet dust in the lab, learn about the difference between old and new comets, peer into their interiors, take surface samples, and witness a comet heating up as it gets closer and closer to the sun.

    All these goals should be fulfilled in the next few years by a remarkably diverse suite of American and European spacecraft. “We really want to go down to the key issues of the formation of the solar system,” says Gerhard Schwehm of ESA's technology center in Noordwijk, the Netherlands. “Our main science goal is to understand how planetary systems evolve.”

    Those hopes rest on four spacecraft: Contour, Stardust, and Deep Impact—all from NASA—and the Rolls-Royce of the bunch, ESA's Rosetta. According to Deep Impact principal investigator Michael A'Hearn of the University of Maryland, College Park, there is “a fair bit of collaboration” among the four projects. “Some researchers are in the science teams of two or even three missions,” he says.

    Stardust. CREDIT: JPL/NASA

    Contour, short for Comet Nucleus Tour, is going for the big picture. As this issue went to press, the spacecraft was set for launch 3 July on a sightseeing trip around the solar system. Contour is scheduled to visit three comets in 5 years, provided it gets a hoped-for extension beyond its core 4-year mission. The first planned stop, in November 2003, is an encounter with Encke, an old comet in a 3.3-year orbit that has been observed since 1786. Two and a half years later, in June 2006—after repeated flybys of Earth to change trajectory—Contour should swing past the much more active comet Schwassmann-Wachmann 3 (its nucleus broke into at least three pieces in 1995). “These are two comets that could not be more diverse,” says Donald Yeomans of NASA's Jet Propulsion Laboratory in Pasadena.

    Learning about cometary diversity is “an essential next step in the exploration of comets,” says Veverka, Contour's principal investigator. Because Schwassmann-Wachmann 3 broke up very recently, the spacecraft might have a rare chance to peek into the interior of a comet nucleus. Contour will fly much closer to the comet, and at slower speeds, than Giotto did in 1986. This should give astronomers their best-ever images of cometary nuclei, with details as small as a few meters and more data on the composition of their extended “atmospheres.”

    If fuel supply and budget permit, a third comet will be added to Contour's itinerary. This could be comet d'Arrest (penciled in for August 2008), or maybe an as-yet-unknown comet that enters our planetary system for the first time. A third option, according to mission director Robert Farquhar of Johns Hopkins University in Baltimore, is to send Contour to Wilson-Harrington, an essentially “dead” comet that now resembles an asteroid.

    As Contour sets off on its tour, another NASA craft is already well on its way. Stardust, launched in February 1999, is on track for an encounter with comet Wild 2 in 18 months. Wild 2 used to roam the outer parts of the solar system, says principal investigator Donald Brownlee of the University of Washington, Seattle, but it was deflected by a close encounter with Jupiter in 1973 and now orbits much closer to the sun. This gives astronomers a chance to study its surface before it gets blasted too much by the sun's heat. “It's actually very exciting. It may be covered with craters” that have not yet been obliterated by cometary activity, says Brownlee.

    On 2 January 2004, Stardust will pass through Wild 2's coma: the cloud of gas and dust surrounding the solid nucleus. Using paddle-shaped dust collectors made of lightweight aerogel, the spacecraft will catch microscopic comet dust that will be returned to Earth 2 years later.

    “We need all the analytical techniques available in ground-based laboratories to study these samples,” says Brownlee. He hopes Wild 2's sheddings will provide clues to the links between interstellar matter, protoplanetary disks, comets, and interplanetary dust particles.

    On the same day that Stardust encounters Wild 2, NASA is scheduled to launch its third comet mission, Deep Impact. The spacecraft will carry a 370-kilogram impactor, made largely of copper, that should slam into the nucleus of comet Tempel 1 on 4 July 2005. “The comet may temporarily become 100 times as bright, so it will be visible in binoculars and maybe even with the naked eye,” says principal investigator A'Hearn. “We assume that every ground-based telescope will be pointed at the comet” around the time of impact.

    Deep Impact. CREDIT: NASA

    Tempel 1 is a fairly large comet, so it won't be deflected or destroyed by the impact. A'Hearn expects to create a crater the size of a football field and as deep as a seven-story building. However, he says, it all depends on the density and the internal structure of the nucleus. Scientists hope the crater will be deep enough to reveal parts of the comet that have not yet been affected by the sun's onslaught. As the spacecraft continues to fly past Tempel 1, it will study in detail the crater and the debris from the impact, to learn more about the comet's properties and chemical makeup.
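    The scale of the impact is easy to gauge with a back-of-envelope calculation. The sketch below uses the 370-kilogram impactor mass given above; the closing speed of roughly 10 kilometers per second is an assumption not stated in the article, chosen as typical for such an encounter, so the result is illustrative only.

    ```python
    # Back-of-envelope kinetic energy of Deep Impact's copper impactor.
    # Mass comes from the article; the ~10 km/s speed is an assumption.

    mass_kg = 370.0        # impactor mass (from the article)
    speed_m_s = 10_000.0   # assumed closing speed, ~10 km/s (not in the article)

    energy_j = 0.5 * mass_kg * speed_m_s ** 2   # kinetic energy, joules
    tnt_tons = energy_j / 4.184e9               # 1 ton of TNT ~ 4.184 GJ

    print(f"Kinetic energy: {energy_j / 1e9:.1f} GJ (~{tnt_tons:.1f} tons of TNT)")
    ```

    Under that assumed speed, the impactor delivers on the order of 18 gigajoules, comparable to a few tons of TNT, which makes the predicted football-field-sized crater plausible for a low-density cometary nucleus.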

    Humankind's first assault on a comet is certain to generate a lot of publicity. “The 4th of July was a conscious choice,” says A'Hearn. “Celestial mechanics forced us to encounter the comet somewhere between mid-June and mid-July, and then we thought, ‘Why not?’” However, he says the name for the mission had been chosen before the release of the Hollywood blockbuster Deep Impact, about a comet that hits Earth. “The movie company is now happy to collaborate,” says A'Hearn. “They may even rerelease the movie in 2005.”

    Exciting though these missions are for comet scientists, the true Holy Grail would be to bring a sample of a nucleus back to Earth for analysis. Just such a mission was considered as a NASA-ESA collaboration in the mid-1980s, according to ESA's Schwehm. But some 10 years ago, NASA pulled out of the project, and ESA had to rethink its plans. The result is Rosetta; at nearly $700 million, it costs more than the trio of NASA comet missions put together.

    Rosetta. CREDIT: ESA

    Instead of bringing comet samples back to Earth, Rosetta will fly a laboratory to comet Wirtanen for an extended visit, says Schwehm. The instrument package contains sensitive cameras, mass spectrometers and gas analyzers, and even an atomic force microscope to study dust particles. “It's a wonderful mission,” says Brownlee.

    Rosetta is scheduled to be lofted by an Ariane 5 launcher on 13 January 2003. After a series of flybys of Mars and Earth (to gain enough speed to reach the distant comet), and encounters with asteroids Otawara and Siwa, Rosetta should rendezvous with Wirtanen in November 2011. For 18 months, the spacecraft will orbit the comet's tiny nucleus as it gets closer and closer to the sun. During this period, Rosetta will deploy a lander to study the surface close up.

    “The Rosetta stone was the key to deciphering the origins of the Egyptian hieroglyphs,” says Schwehm, Rosetta's project scientist. “Likewise, Rosetta will be the key to deciphering the origins of the solar system.” The suite of high-precision instruments on board the spacecraft and the lander might reveal whether most of the water on Earth was delivered by comets, and what role comets played in seeding our planet with prebiotic, organic materials.

    “Ultimately, Rosetta holds the largest promise for comet science,” says A'Hearn. “But the nice thing about this whole suite of cometary missions is that each addresses very different problems. They are really very complementary.”

    With so much expectation hanging on Rosetta and the other missions, Schwehm is acutely aware of all that could go wrong. For the moment, he is focused on overcoming the first hurdle: getting the craft up and deploying its solar panels and communications antenna. “These are the most critical parts [of the mission],” he says. “It's a little bit nerve-racking.”