News this Week

Science  08 Aug 2003:
Vol. 301, Issue 5634, pp. 742

    Sparks Fly Over New Report on Hiroshima Bomb Radiation

    Dennis Normile

    TOKYO—A new analysis of the radiation delivered by the atomic bomb dropped on Hiroshima has created some bitter fallout. The work, published last week in Nature (31 July, p. 539), helps resolve long-standing uncertainties over the estimated doses of neutron radiation. But the publication doesn't mention that the work was part of a pending broader international study, leaving other contributors fuming.

    “This undermines a very excellent cooperative relationship between American and Japanese scientists,” says Robert Young, a radiobiologist retired from the Defense Nuclear Agency who heads the U.S. side of the Joint U.S.-Japan Working Group on Reassessment of A-bomb Dosimetry, which oversees the study. The letter “robs our work of its significance,” complains Masaharu Hoshi, a radiation biophysicist at Hiroshima University and a member of the working group. But the lead author of the paper, radiobiologist Tore Straume of the University of Utah School of Medicine, says he was simply following the rules as he understood them. “Due to [the larger study] not yet being completed,” says Straume, “we were not permitted to include [its] calculations in the Nature paper.”

    The study, called Dosimetry System of 2002 (DS02), was launched in 2000 to resolve uncertainties stemming from previous studies, particularly one conducted in 1986, of the Hiroshima and Nagasaki bombings. That study contained a discrepancy between calculated and measured effects of neutron radiation, which comes from both high-energy “fast” neutrons and low-energy “thermal” neutrons. (Gamma rays were the source of most of the bomb's radiation.) The dosimetry is combined with epidemiological data to create a database on radiation exposure in humans. The data are “the basis for radiation-protection guidelines used throughout the world,” says Burton Bennett, an environmental health scientist who heads the Hiroshima-based Radiation Effects Research Foundation (RERF).

    Ground zero.

    Researchers have reconstructed neutron doses from irradiated materials at Hiroshima.


    DS02, funded by the United States and Japan with small contributions from the European Commission and Germany, consists of 30 to 40 researchers at nearly a dozen institutions. The collaboration is revising the Hiroshima bomb's estimated yield, detonation height, and ground zero, recalculating the shielding provided by terrain and tall buildings, and analyzing materials retrieved from the rubble for clues to the radiation they received. Its 800 pages of findings will be published as a monograph later this year.

    Measurements of fast-neutron radiation shortly after the 1945 bombings were based on an analysis of neutron-activated sulfur, whose radioactivity decays rapidly. That analysis provided data only up to 700 meters from ground zero, leaving researchers to make inferences for greater distances based on measurements of other isotopes produced by thermal-neutron radiation. Straume's group, which included 13 scientists at five institutions in the United States and Germany, developed new methods of chemical extraction and accelerator mass spectrometry to measure the conversion of copper atoms to an isotope of nickel (63Ni) by fast neutrons, providing a direct indication of fast-neutron exposure. A Japanese group in the collaboration also measured 63Ni using a liquid scintillation technique.
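
    The activation bookkeeping behind such a measurement is standard, although the relation below is a general textbook one rather than a formula taken from the paper. Fast neutrons convert copper to nickel through the 63Cu(n,p)63Ni reaction, and the product decays with a half-life of roughly 100 years, so the number of 63Ni atoms surviving to measurement time t is approximately

        N_{\mathrm{Ni}}(t) \approx N_{\mathrm{Cu}}\, \sigma\, \Phi\, e^{-\lambda t},
        \qquad \lambda = \ln 2 / t_{1/2}, \quad t_{1/2} \approx 100\ \mathrm{yr},

    where σ is the reaction cross section and Φ the fast-neutron fluence. Measuring the minute Ni/Cu ratio in a sample collected at a known distance from the hypocenter therefore yields Φ, the quantity the dosimetry needs.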

    The Straume group's letter claims that its nickel isotope measurements confirm earlier estimates of fast-neutron exposure. But neither the paper nor a commentary by Mark Little, an epidemiologist at the Imperial College Faculty of Medicine in London, mentions the DS02 effort. “The nickel study was an important part, but only a part, of the whole effort,” says Young. Not mentioning DS02 “created the impression that [the Straume study] was the sole solution to the neutron discrepancy,” he says. Hoshi says that Straume's work contains large margins of error, making claims of resolving the neutron discrepancy “scientifically fuzzy.” He says that achieving the necessary precision in estimating fast-neutron doses requires combining Straume's measurements with the measurements made by the other teams.

    The paper thanks two Japanese researchers for providing samples, but Straume says his team decided that their contribution fell short of the “significant intellectual contributions” needed to become an author. Young disagrees, saying that the pair deserves co-authorship because finding such samples, calculating their distance from the blast, and determining whether there was intervening shielding constitute an important contribution.

    Straume declined to discuss the matter by telephone but answered questions by e-mail. He says he was abiding by a decision made at a January 2003 meeting that allowed his group to publish “as long as we did not compare our measurements to the DS02 calculations. So, we compared to DS86 calculations.” Young admits that the DS02 calculations are still being finalized, but he says that the collaboration never prohibited any mention of DS02. “At the least,” adds Hoshi, “we should have been notified [that data based on Japanese samples] was going to be used in a separate paper.”

    RERF's Bennett views the episode as an unfortunate blemish on an otherwise successful collaboration. “It was just a slight that we all regret, and hopefully once everyone apologizes, we'll get on with one another again,” he says. But that might not be so easy. Hoshi says that although work on the DS02 monograph will continue, he and his colleagues are no longer interested in any other joint publications. And Hoshi says that Japanese researchers are likely to be more circumspect in the future about sharing samples. “Without the samples, there would have been no nickel studies,” he says.


    Basic Research Chief Given the Boot

    Barbara Casassus*
    *Barbara Casassus is a freelance writer in Paris.

    PARIS—Last week, just as much of the country was heading off for summer vacation, the French government fired Geneviève Berger, director of the country's main basic research agency, CNRS, and shuffled two other senior research posts.

    A statement from the research ministry announcing Berger's ouster suggested that she was not the right person to implement major reforms of French science that the government is expected to propose in the coming months. Research minister Claudie Haigneré, says a spokesperson, “wanted a new man for a new, flexible policy”: mathematician Bernard Larrouturou, head of INRIA, the information technology research agency. But some bemused observers are suggesting darker reasons for Berger's precipitous downfall and what it could mean for CNRS. Jacques Fossey, a CNRS chemist and head of the research union SNCS, says the ouster may amount to “a political settling of accounts,” because Berger's voice was part of a high-profile chorus condemning cuts in French research. For now that voice is mum: Berger did not return calls from Science.

    Berger, appointed by the previous Socialist government, was anything but a rebel for much of her 3 years in office. She was unpopular with scientists, notes Fossey, earning a reputation as a “yes-woman” who was only too happy to follow the research ministry's desires to focus on trendy areas such as nanotechnology and life sciences at the expense of physics, chemistry, and mathematics. Even the finance minister told Berger last year to distance herself from the research ministry, Fossey maintains.

    Victim of politics?

    Berger had assailed CNRS cuts.


    She took that advice to heart during the recent rumpus over the budget. Like many other top science officials, Berger vigorously and publicly condemned a crippling 33% cut in CNRS's $3 billion budget for 2003. The CNRS board backed her at a meeting at the end of June, when it too sounded an alarm over the health of the agency's finances. (Cash remains a major concern as the government prepares to release next month its draft budget for 2004. Ministry officials say that research funding is slated to increase by 3.9%.) Ironically, some of Berger's colleagues were just beginning to warm to her when the ax fell. She made a “real effort to develop interdisciplinary research and collaboration with other agencies, which has never been done before,” says the head of another research agency, who says he was surprised by Berger's dismissal. “Something must have happened,” he says.

    Berger's successor is a rising star in French research management. In the statement announcing the switch, Haigneré lauded Larrouturou, 44, for his success in coaxing industry to pick up on research results and for spinning off a pair of companies, Simulog and INRIA-Transfert. Whether his promotion will mean a greater emphasis on industrial ties at CNRS remains to be seen; Larrouturou was not available for comment.

    The research ministry also announced that Bernard Bigot, chief aide to Haigneré, is moving to the atomic energy agency CEA to replace René Pellat as chief scientific adviser and controller. (Pellat died suddenly on 4 August, 4 days after the changes were announced.) Philippe Braidy, financial director and board member of the French space agency CNES, comes in as former astronaut Haigneré's chief aide. The true significance of the midsummer shuffle should become clear after the research ministry this autumn rolls out a draft manifesto on the future of French research. The manifesto could herald big changes for a scientific community already undergoing serious belt-tightening.


    Sniffing Out Martian Hospitality

    Richard A. Kerr

    NASA this week named Phoenix the winner in its first open competition to send a mission to Mars. Aptly named, Phoenix will land an instrument package that rises from the wreckage of the 1999 Mars Polar Lander. It will reuse designs from that spacecraft, which is presumed to have smashed into the planet, as well as hardware from a subsequent canceled mission. Phoenix will scratch through martian soil to recently discovered ice beneath the northern polar plains. The ice may hold clues to the history of water on Mars or hints of an environment conducive to life in the not-too-distant past. It might even pick up lingering organic traces of martian life.

    Phoenix beat out three other finalists in the Scout Program, which is modeled after NASA's Discovery Program of cost-capped planetary missions each led by a principal investigator (Science, 15 November 2002, p. 1320). The competition was tight, according to William Boynton of the University of Arizona in Tucson, an instrument leader on Phoenix. One competitor would have skimmed through the martian atmosphere to return dust and gas samples, another would have flown a rocket plane within a kilometer of the surface to map mysterious magnetic regions, and the third would have detected biogenic gases from orbit.

    Mars rerun.

    Phoenix will be a mosaic of hardware from failed or canceled Mars missions.


    Phoenix will carry instruments for teasing out the history of water and climate and searching for a zone where soil meets underlying ice that once could have been habitable. The tools include a stereo panoramic camera; a robotic arm for trenching as deep as 1 meter and collecting samples; and a suite of instruments for soil and ice analysis, including an atomic force microscope.

    Phoenix is right at the Scout cost cap of $325 million, so NASA officials were not going for the cheapest mission. They will get the maximum bang for the buck, though, in that most of the Phoenix hardware was either already designed for the Mars Polar Lander or actually built for the canceled 2001 Mars Surveyor Lander. And Phoenix “would have a lot of public appeal,” says Boynton, especially for its ability to identify any organic remains of bacterial life. Such life may have thrived just tens of thousands of years ago when regional climate change brought on by the planet's greater tilt might have allowed the subsurface ice to melt. “We're hopeful we'll find signs of liquid water in the past there,” says Boynton.

    The last factor favoring Phoenix may have been safety, says Boynton: “It's a relatively low-risk mission.” Using instruments from previous missions and salvaging the lander itself from the canceled 2001 mission allowed greater confidence, he says. Some of that confidence was not easily gained, notes former NASA official Noel Hinners of the University of Colorado, Boulder. The selection of Phoenix “is a vote of confidence in the review and analysis” of the 1999 mission, which indicated that the lander's retrorocket shut off prematurely, Hinners says. “There's a belief that the potential causes of the failure are understood.” Time will tell.


    Passages Found Through Labyrinth of Bacterial Evolution

    Elizabeth Pennisi

    Five years ago, genome sequencers made a disturbing discovery about microbes. The bugs' newly deciphered genomes were much more extensively contaminated with DNA from other species than had ever been expected. Through “lateral transfer,” many genes had jumped between species, sometimes distantly related ones. Taxonomic chaos resulted. Prominent evolutionary biologists concluded that the microbial tree of life was really a bunch of intertwined vines and that ancestries would be difficult, if not impossible, to untangle (Science, 1 May 1998, p. 672).

    But the problem may not be as insurmountable as it seems, says Vincent Daubin, an evolutionary computational biologist at the University of Arizona (UA) in Tucson. On page 829, he and his colleagues report that they've identified plenty of genes that have been faithful to their genomes, making bacterial classification feasible. The group contends that when evolutionary relationships prove elusive, researchers have been too ready to blame lateral gene transfer.

    Daniel Dykhuizen, a microbial population geneticist at the State University of New York, Stony Brook, is pleased that order is emerging. “There is an integrity in bacterial genomes and in bacteria that has been missed,” he says. But W. Ford Doolittle, an evolutionary molecular biologist at Dalhousie University in Halifax, Nova Scotia, argues that the team “goes too far in minimizing the influence of [lateral] transfer.”

    Researchers have known for decades that bacteria can adopt new genes, possibly by absorbing DNA from their environments or engulfing or merging with other microbes. Many pathogens have procured genes that enable them to withstand antibiotics. More disturbing, some harmless bacteria have turned harmful by taking on virulence genes.

    Even before microbial genome sequences revealed the extent of such gene sharing, researchers in systematics worried that foreign genes might muddy the waters. In 1993, J. Peter Gogarten of the University of Connecticut, Storrs, hypothesized that bacteria didn't have clear ancestries. Instead of a branching tree, he suggested, a net was a better metaphor for their evolutionary history.

    Other researchers joined in. Last year, for example, Robert E. Blankenship of Arizona State University in Tempe and his colleagues concluded that lateral transfer explained why various researchers got different results when they tried to reconstruct the evolution of photosynthetic microbes (Science, 22 November 2002, p. 1616).

    Such conclusions didn't sit well with Daubin and UA evolutionary biologists Nancy A. Moran and Howard Ochman. Ochman and other colleagues had documented the prevalence of alien genes in microbial genomes and were concerned that this information was being used to argue that the ancestries of bacteria could never be sorted out. “Lateral gene transfer should not serve as the default explanation for [the] lack of matching trees,” he says. Ochman, Moran, and Daubin suspected instead that the systematists were finding it too difficult to sort out branches on the bacterial evolutionary tree, possibly because people had used the wrong methods or were relying on genes that had been exchanged.

    Tracking kin.

    Building microbial evolutionary trees should be possible despite lateral transfer.


    To determine whether they could build reliable family trees, the trio analyzed the genomes of a few closely related bacterial species whose phylogenies are well established from ribosomal RNA genes. These genes are considered a reliable indicator of species' relationships if a common ancestor dates back millions rather than billions of years. The researchers compared each species to others in the same or different genera, looking for orthologs, or nearly identical genes present in each species. In separate comparisons, they analyzed various strains of a single species.

    One set of four species shared 400 genes, and another set of four had 1715 in common. In their comparison of Escherichia coli strains, the researchers found 2200 orthologs that they could use. Daubin then built an evolutionary tree for each ortholog, arranging the branches based on how similar the genes were in the various species and strains.

    Depending on the set of microbes under comparison, between 93 and 1589 of the gene trees matched accepted trees based on ribosomal RNA. And in only 0 to 149 instances did a gene tree put the wrong species together—a sign of lateral gene transfer. Many orthologs did not yield clear branching patterns. But because the correct trees far outnumbered the mixed-up ones, the lineages of microbes are discernible, the team concludes.
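
    As a minimal sketch of the scoring involved (written for illustration here; the team's actual phylogenetic software is more sophisticated), each ortholog's tree can be checked against the accepted rRNA topology. For a set of four species, an unrooted binary tree is fully described by its single nontrivial split, so the comparison reduces to a set test:

        # Score gene trees against a reference topology for four taxa.
        # A tree is written as a pair of pairs, e.g. (("A","B"),("C","D")),
        # and an unresolved gene tree is passed as None.

        def split(tree):
            """Return the nontrivial bipartition of a 4-taxon unrooted tree."""
            return frozenset(frozenset(half) for half in tree)

        def tally(reference_tree, gene_trees):
            """Count gene trees that match, contradict, or fail to
            resolve the reference topology."""
            ref = split(reference_tree)
            counts = {"match": 0, "conflict": 0, "unresolved": 0}
            for tree in gene_trees:
                if tree is None:
                    counts["unresolved"] += 1
                elif split(tree) == ref:
                    counts["match"] += 1
                else:
                    counts["conflict"] += 1
            return counts

        # Hypothetical data: three orthologs agree with the rRNA tree, one
        # conflicts (a candidate lateral transfer), and one is unresolved.
        rrna = (("A", "B"), ("C", "D"))
        genes = [rrna] * 3 + [(("A", "C"), ("B", "D")), None]
        print(tally(rrna, genes))  # {'match': 3, 'conflict': 1, 'unresolved': 1}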

    The paper has its critics. Gogarten thinks the group looked at too few genes and species. And Doolittle and Marshall Bern, a bioinformaticist at the Palo Alto Research Center in California, question whether the technique is applicable to phylogenetic relationships that extend back many hundreds of millions of years. “Lateral transfer may have been more common, or even the norm, early in life's history,” Bern points out, a problem that Daubin and his colleagues concede they have not adequately addressed.

    But others contend that at least most bacterial relationships can be sorted out. “I think it will be more difficult with bacteria than with birds or frogs, but it's not going to be impossible,” says Dykhuizen.


    Ancient Trackways in Strip Mine Threatened by Reburial

    Erik Stokstad

    A group of amateur paleontologists is racing to save a site extraordinarily rich in animal tracks from the time of the early diversification of reptiles, some 310 million years ago. Experts say the site, part of an open-face coal mine about 50 kilometers northwest of Birmingham, Alabama, is the best one known among rocks that old. “The productivity has been amazing,” says Anthony Martin of Emory University in Atlanta. “This does seem to be the most prolific Carboniferous trackway site in the world.” The owners of the mine want to donate the site, but last week a state commission ordered them to begin bulldozing it within 30 days to comply with a federal law requiring that abandoned surface mines be restored.

    The tracks record the movements of amphibians, millipedes, horseshoe crabs, fish, and perhaps reptiles in a shallow estuary. “It's at a crucial time in vertebrate evolution, because reptiles are first appearing,” Martin says. The site was first investigated in 2000, after the grandson of the mine owner mentioned the trackways to his high school science teacher. The teacher belongs to a group of fossil enthusiasts, now called the Alabama Paleontological Society, which alerted the state geological survey and other professionals. Together they have collected more than 1600 slabs from piles of rock at the mine.

    Search and rescue.

    Amateur paleontologists are trying to prevent the destruction of this fossil site, full of tracks (bottom) made by amphibians and other animals.


    At a series of “Track Meets,” the group has photographed and cataloged the specimens—most of which belong to the amateurs, who say they would eventually like to give them to a museum. The society has posted more than 2000 pictures on its Web site,* hosted a scientific meeting on the trackways in May, and is preparing a monograph. The site continues to yield specimens, says Ron Buta, an astronomer at the University of Alabama, Tuscaloosa, and a member of the society. “There's a lot there. It's really amazing,” says Buta, who has been to the mine more than 20 times. Because so many specimens have been found, the site could help researchers discover how the tracks were made, says Hartmut Haubold of Martin Luther University and Geiseltal Museum in Halle, Germany. “It will open the door for a realistic understanding of hitherto enigmatic fossil footprints,” he predicts.

    Reclamation would stop that work in its tracks. Under the Surface Mining Control and Reclamation Act of 1977, the mining company must restore the landscape to its former condition as much as possible. That would mean bulldozing the site, covering the fossil-rich layer with 10 meters of fill. Since mining ceased in 2000, the company has done that with other areas, leaving aside about 3 hectares, so the paleontologists could continue collecting. Meanwhile, the society began lobbying to protect the site. Representative Robert Aderholt (R-AL) introduced a bill on 19 June that would allow the company to donate the land to the Department of the Interior and exempt it from reclaiming the mine. With no companion measure in the Senate, the future of the bill is uncertain.

    Last week, after a local landowner argued that the cliff face is a public hazard, the commission ordered the company to reclaim the fossil-rich area. The society says that safety isn't an issue. “It would be fenced off,” says Prescott Atkinson, a pediatrician at the University of Alabama, Birmingham, who is leading the society's lobbying efforts. The company plans to appeal the decision this week, says Billy Orick, general manager of the New Acton Coal Mining Co. in Cordova, Alabama. But if they don't hear back by the end of the month, he says, “we're going to have to begin surface reclamation.” If the appeal isn't granted, the amateurs plan to ask the circuit court to intervene.


    Souped-Up Archimedes Equation Torpedoes Submarine Paradox

    Charles Seife

    Think twice before you crank up the propellers on your submarine. If you travel too close to the speed of light, you will wind up in Davy Jones's locker—according to the theory of general relativity. In the July issue of Physical Review D, a Brazilian physicist has extended Archimedes' law of buoyancy to objects that are moving very fast, thereby solving a long-standing paradox. Physicists hope that the relativistic Archimedes' principle might yield insight into the laws of thermodynamics, the behavior of black holes, and even the growth of crystals.

    Einstein's theory of relativity is rife with seeming paradoxes—twins who age at different rates, spears that simultaneously fit and don't fit inside barns, scissors that turn rubbery when they close too fast—but any relativistic physicist can untangle these puzzles with little effort. Not so the submarine problem, which was apparently posed in the late 1980s by James Supplee of Drew University in Madison, New Jersey. “I thought it was a scandal that there was no answer to this paradox,” says George Matsas, a physicist at São Paulo State University in Brazil. “So I decided to waste time on it.”

    Imagine a submarine at rest that's exactly the same density as water, neither floating nor sinking. No paradox there, until the submarine begins to move very fast. Objects moving close to the speed of light get more massive and shrink, so from the perspective of an observer at rest, a relativistic submarine will pack more mass into a smaller package; the sub will become denser than water and sink. On the other hand, Captain Nemo aboard the submarine feels that the sub is at rest while the water rushes by at near light speed. Because the water is moving so fast, the individual molecules gain mass and squeeze into a tighter spot; the density of the water increases, so the sub should float. Obviously, the sub can't sink and float at the same time. Either Captain Nemo or the stationary observer must be mistaken.
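
    A textbook special-relativistic sketch, in the relativistic-mass language used above (Matsas's actual calculation is general-relativistic), shows where both claims come from. An object with rest mass m_0 and rest volume V_0, seen moving at speed v, is length-contracted by the Lorentz factor γ while its effective mass grows by the same factor, so its apparent density is boosted by γ²:

        \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},
        \qquad
        \rho = \frac{\gamma m_0}{V_0/\gamma} = \gamma^2 \rho_0 .

    Each observer applies this boost to the other's matter: the shore-bound observer finds the sub denser than the water, Nemo finds the water denser than the sub, and a naive reading of Archimedes' principle then points in opposite directions.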

    Sink or swim.

    Should a superfast submarine surface or submerge? A South American scientist solved the stumper.


    When a student asked Matsas how to resolve the paradox, Matsas was stumped. “It was embarrassing,” he says. “I had to know.” So he plugged the moving submarine into the equations of general relativity. After juggling forces and equations for several pages, he had the answer: The sub sinks. Captain Nemo is wrong and the stationary observer is correct. But why?

    Buoyancy is a function of gravity, and gravity, like size and time, is affected by rapid motion through space. As the sub zooms through the water, Earth exerts more force on the moving sub. “Because the sub is moving, the gravitational field increases, compensating for any increase in the water's density,” says Matsas. Whereas the stationary observer thinks that the sub sinks because of the sub's enhanced density, Nemo will think that the sub sinks because of Earth's enhanced gravity. But they will both agree that the sub sinks.

    By extending Archimedes' description of buoyancy to high-velocity or high-gravity conditions, Matsas thinks that the equation might shed light on the inflow of fluids around neutron stars or black holes. “It might be important for understanding different layers of gas in relativistic stars,” he says, adding that studying the buoyancy of a box descending into a black hole might yield new insight into the laws of thermodynamics.

    Other scientists are intrigued by the discovery. Yale University's John Wettlaufer, who studies the thermodynamics of crystallizing materials, has derived an equation that describes the buoyancy forces along interfaces between solids, liquids, and gases under different types of fields. According to Wettlaufer, the similarity between his own equation and the one for relativistic buoyancy hints at something more profound. “They bear a lot in common mathematically, so I want to look at what's under the hood,” he says. “There might be some deeper connection.”


    'Terrorism Futures' Could Have a Future, Experts Say

    Charles Seife

    It was tasteless, even for an agency not known for its tact. FutureMAP, a Pentagon project to encourage gambling about the likelihood of events such as terrorist attacks, drew withering criticism from appalled legislators. The Defense Advanced Research Projects Agency (DARPA) scuppered the plan last week just a day after the storm of outrage broke. Yet many scientists and economists see the project as a clever scheme to harness the expertise of hundreds or thousands of people, essentially creating a social-science supercomputer out of flesh rather than silicon. And many fear that the worthwhile ideas behind “information markets” such as the Pentagon project will be lost in the political fallout.

    “Information markets in general have a PR problem” because they are popularly viewed as little more than gambling, says Michael Abramowicz, a law professor at George Mason University in Fairfax, Virginia, who studies information markets. “They sound crazy.” Yet, he adds, the idea is very powerful.

    In 1776, Scottish economist Adam Smith noted that a market made up of individuals, each acting in his own self-interest, tends to behave in a manner that's wiser and more farsighted than the individuals themselves. It is as if an “invisible hand” guides investors to make the right decisions, he suggested.

    In the past half-century or so, economists have built upon this idea with the “efficient market hypothesis.” “In essence, it says that market prices [take] almost all the things you can think of into account,” says Robin Hanson, an economist at George Mason. The cost of an object on the market, whether it's a stock certificate or a chunk of silver or a prediction of future sales of orange juice, tends to reflect all the information available about the stock, or silver, or orange juice. In that case, researchers reasoned, an efficient way to harness all the information available to a group of people is to set them up in an artificial market. And, in fact, this is precisely what economists have been doing for years—very successfully.

    Since 1988, the University of Iowa's Henry B. Tippie College of Business has been predicting the outcomes of elections with a futures market. About 7500 participants around the globe buy and sell shares whose values depend on the percentage of the popular vote each candidate gets. As election night approaches, the prices of those shares become a very accurate predictor of who will win. “Our average error is about 2.5%,” says economist Robert Forsythe of the University of Iowa in Iowa City. “We typically do better than polls.” The Iowa research group has also set up markets that accurately predict the box office returns of movies.
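
    A toy calculation (purely illustrative; the Iowa market's actual contract rules are more detailed) shows why such prices become forecasts. If a share pays $1 times the candidate's final vote share, any trader whose estimate differs from the current price sees an expected profit, and trading on it pushes the price toward the crowd's consensus estimate:

        # Illustrative vote-share futures contract: a share pays $1 times
        # the candidate's final fraction of the popular vote.

        def expected_profit(price, believed_share, payout_per_share=1.0):
            """Expected profit per share from buying at `price` if you
            believe the candidate will finish with `believed_share`."""
            return payout_per_share * believed_share - price

        # At a market price of $0.48, a trader expecting a 52% vote share
        # profits by buying, which bids the price up toward 0.52.
        print(expected_profit(0.48, 0.52))  # 0.04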

    “In the end, these markets aren't going to magically get things correct,” says Abramowicz. Yet, theoretically, information markets tend to be self-correcting; knowledgeable people make money and increase their standing in the market, while clueless people lose their money and cease to have any influence.

    Launched by DARPA in 2001, FutureMAP attempted to apply such information markets to national security matters by funding a number of research programs to test the idea. “Our markets were set up primarily as a research tool, not to predict but to see how well markets like this predict,” says Hanson, who, along with the San Diego, California-based company Net Exchange, received two grants totaling $850,000 to set up a prototype market designed to predict future economic, military, and political conditions in Middle Eastern countries.

    That is where the taste issue arose. To work properly, a market has to quantify what it's trading in, says Charles Polk, president of Net Exchange. “Nobody's going to trade in bushels of corn if you can't define what a bushel of corn is,” he says. And a good measure of political stability would take into account factors such as suicide bombings, political assassinations, and unrest. “To test, say, whether the Mideast road map is going well, you trade in specific event securities that are correlated to the road map's going well,” says Polk. “You can have one on whether Hamas joins the Palestinian Authority's government structure, or how many suicide bombings there are inside the green line.” So speculating on political stability is, implicitly, placing money on whether events—some regrettable—will occur. “It looks like it's in bad taste, but intelligence, by its nature, is in bad taste,” says Hanson. “You pay people to tell you bad things and sometimes to do bad things. Any market dealing with intelligence is going to be in poor taste; that's unavoidable.”

    When DARPA's project hit the news on 28 July, the outrage was so intense that DARPA killed FutureMAP a day later. Hanson and Net Exchange lost their grant, driving Net Exchange out of the information markets business, at least for intelligence. “Unless it comes with a pile of money and an extra-special media shield, I'm not going back,” says Polk.

    Others are a little more optimistic. Neoteric Technologies in Huntsville, Alabama, lost a $750,000 FutureMAP grant, but its vice president, William Adkins, still hopes that the Department of Defense will find use for information markets. His company is involved in a project that will use a market to measure the progress of a Pentagon weapons project. “It'll give evidence whether the project will be on time or whether the program manager is whistling in the dark,” says Adkins, who notes that the market could break bad news to officials without whistleblowers having to risk their careers. Adkins has also been working on a market of epidemiologists that he hopes might give early warning of an epidemic, such as a resurgence of SARS.

    Despite the promise of information markets, the handling of FutureMAP has damaged the field, says Hanson. “It's a setback,” he says. “I think it's a promising enough idea that it will win out eventually, but in economics, it may take 50 years.”


    Ultracold Atoms Spark a Hot Race

    Adrian Cho*
    *Adrian Cho is a freelance writer in Grosse Pointe Woods, Michigan.

    A much-anticipated atomic soup might lay bare the inner workings of high-temperature superconductors, neutron stars, and primordial matter—and perhaps win its creator a Nobel Prize

    In science as in movies, a sequel can be just as compelling as the original tale. Eight years ago, atomic physicists wowed the world by coaxing thousands of ultracold atoms into a single quantum wave. Known as a Bose-Einstein condensate (BEC), that weird state of matter fulfilled a 70-year-old prediction; opened the way to atom lasers and other fanciful technologies; and, in 2001, won its discoverers a Nobel Prize (Science, 19 October 2001, p. 503).

    The mad dash to make a BEC, in which three teams produced the strange stuff at nearly the same time, seemed to set a standard for drama and gravity that atomic physicists could not hope to meet again. Yet, less than a decade later, researchers are closing in on an even more elusive and perhaps more revealing state of matter. And this time, the competition is even fiercer.

    Physicists are struggling to make atoms in a gas pair like the electrons in a superconductor. Achieving such “Cooper pairing” could lead to new and useful quantum effects. The strongly interacting atoms should also mimic the behavior of electrons in high-temperature superconductors, protons and neutrons in atomic nuclei, neutrons in neutron stars, and even the quarks in primordial matter known as a quark-gluon plasma. So the odd atomic soup might make the perfect tool for teasing out the principles that unify seemingly disparate phenomena, says John Thomas of Duke University in Durham, North Carolina. “All of a sudden, you have a desktop experiment that cuts across all fields of physics,” Thomas says.

    Cold warrior.

    Randall Hulet of Rice University (right) leads one of six teams vying to make atoms pair like electrons in a superconductor.


    Six groups—including some that vied to discover the BEC—have already reached a key milestone on the road toward Cooper pairing (see box below). And now, a powerful technique may take them the rest of the way. Researchers are intently tweaking the elaborate assemblages of lasers and magnetic coils that adorn their hydra-like vacuum chambers, in hopes of producing a microscopic puff of Cooper-paired gas. Although no one is willing to make a firm prediction, some suspect that the discovery—if it's possible—could come within months. “It's a horse race,” says Randall Hulet of Rice University in Houston, Texas. “We're all moving along at a good pace, and there's no overall leader at this point.”

    Atoms, odd and even

    Achieving Cooper pairing is even trickier than making a BEC because the atoms that can pair are inherently less cooperative than are those that can form a condensate. Atoms and particles generally act like little gyrating tops, and their sociability depends on how much spin they have. Some, called bosons, have spin equal to a multiple of a fundamental dollop known as Planck's constant. At temperatures close to absolute zero, identical bosons prefer to snuggle into the single lowest-energy quantum state to make a BEC, as physicists observed in 1995 when they chilled a gas of bosonic atoms to less than a millionth of a kelvin.

    All other atoms and particles have spin equal to a multiple of Planck's constant plus an extra half and are known as fermions. Protons, neutrons, and electrons are all fermions with half a unit of spin. Consequently, any atom with an even number of protons, neutrons, and electrons is a boson; any atom with an odd number is a fermion.

    Whereas bosons are gregarious, fermions are loners. A fundamental principle of quantum mechanics forbids two identical fermions to occupy the same quantum state. This “exclusion principle” explains why electrons fit into distinct shells in atoms and why atomic nuclei and neutron stars don't implode into infinitesimal knots.

    In the same way, when cooled below their so-called Fermi temperature, identical fermions stack one by one into their lowest-energy quantum states—like starlings alighting one to a rung on the bottom of a ladder (see figure, right). Such stacked fermions are said to be degenerate.

    Joiners and loners.

    Near absolute zero, identical bosons pile into the least energetic quantum state (left), whereas identical fermions stack into low-energy states one by one.


    Even degenerate fermions exert forces on one another, however, and they can get together if conditions are just right. At a temperature even lower than the Fermi temperature, particles that attract each other can pair up like shy people circling each other in a crowded room (see figure, below). That happens only to the highest-energy fermions, which flit about in the energy levels just above the stack.

    Tango or twist?

    In a magnetic field, atoms in different spin states can form molecules (left). Vary the field, and they might also form loose-knit Cooper pairs.


    Although nebulous, these couplings exert a powerful effect: Nothing can deflect and slow either partner without breaking the pair. At such extremely low temperatures, energy is too scarce for that to happen, so the pairs can glide along without drag. That is what happens to the electrons in a superconductor.

    Physicists would like to see the same sort of pairing in a gas of fermionic atoms, in which case the paired atoms should flow without viscosity, a phenomenon known as superfluidity. But nature has thrown them a nasty curve ball: Fermionic atoms simply don't like to get cold.

    Hit or miss

    The key technique invented to cool bosons doesn't work for identical fermions. To reach the rock-bottom temperatures needed to make a BEC, physicists trapped bosonic atoms in a bowl-like magnetic field and blew off the most energetic ones with radio waves, thereby siphoning away heat in much the same way that blowing on hot soup will cool it. But that technique works only if the atoms jostle one another, constantly transferring energy among themselves so that some will be nudged into higher energy levels and be blown away. Unfortunately, that doesn't happen with identical fermionic atoms. Because of the exclusion principle, the atoms avoid one another instead of colliding.

    To overcome this problem, researchers devised two main schemes to make fermions collide. First to succeed and reach degeneracy was Deborah Jin of JILA, a laboratory run jointly by the National Institute of Standards and Technology and the University of Colorado, Boulder (Science, 10 September 1999, p. 1646). Jin and colleagues trapped fermionic potassium-40 atoms, which contain 19 protons, 21 neutrons, and 19 electrons. By applying the right combination of laser pulses, magnetic fields, and radio waves, the researchers ensured that the nuclei of some of the atoms spun in a different direction from the others. The no-longer-identical atoms could then collide, share energy, and cool one another. In 1999, Jin and her team chilled their atoms to less than a millionth of a kelvin to achieve the stacked state, now with one atom from each spin state in every energy level.

    In contrast, 2 years ago Rice's Hulet mixed bosonic lithium-7 atoms with fermionic lithium-6 atoms, which contain one fewer neutron. The bosons collided with the fermions, and when Hulet and colleagues tuned the radio waves to blow away the bosons, the evaporating atoms drew heat away from the fermions. Since then, four other groups have also reached degeneracy using techniques similar to Jin's or Hulet's.

    The journey from the way station of degeneracy to the destination of Cooper pairing still seemed a long trek. Atoms generally attract one another so feebly that the critical temperature for Cooper pairing is a tiny fraction of the Fermi temperature—typically a few billionths of a kelvin. But physicists may have found a way to bring the critical temperature up to a more manageable level.
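
    For orientation, two standard textbook estimates (ideal-Fermi-gas and weak-coupling BCS results, not figures from the experiments described here) set the scales. For atoms of mass m at density n in a single spin state,

        k_B T_F = \frac{\hbar^2 (3\pi^2 n)^{2/3}}{2m},
        \qquad
        T_c \sim T_F \, e^{-\pi/(2 k_F |a|)},

    where k_F is the Fermi wavenumber and a the atom-atom scattering length. Because |a| is ordinarily tiny compared with the spacing between atoms, the exponential buries unassisted pairing at billionths of a kelvin.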

    The trick is to trap atoms in two different spin states in laser light and then apply a hefty magnetic field. If the field is tuned to just the right strength, when atoms in different states collide, they may almost bind to form a molecule, in a phenomenon known as a Feshbach resonance. That fleeting embrace effectively amplifies the force between the atoms, says Murray Holland, a theorist at JILA. “It leads to a very large amount of time in which the atoms are involved in this scattering process,” he says. “It's like a very strong interaction.” In the right magnetic field, theorists predict, atoms may form Cooper pairs at temperatures as high as a quarter of the Fermi temperature, a readily achievable mark.

    No one knows precisely how to use a Feshbach resonance to achieve Cooper pairing, or which atoms will prove most cooperative. So researchers are striving to figure out just the right plan of attack and to implement it in their intricate and persnickety rigs. “There are a variety of approaches, and one of them may work,” says Christophe Salomon of the École Normale Supérieure in Paris. “I wish I knew which one.”

    Some physicists, however, doubt that Feshbach resonances hold the key to high-temperature Cooper pairs. The hopes rest on calculations that assume that each atom collides with the others one at a time, says Henk Stoof, a theorist at Utrecht University in the Netherlands. But very near a Feshbach resonance, the interactions are so strong that each atom feels a tug from many others all at once, Stoof says, and such many-body effects may keep the pairing temperature unattainably low. But others hope many-body effects may prove more of a help than a hindrance. “I'm always optimistic,” says Wolfgang Ketterle of the Massachusetts Institute of Technology in Cambridge. “If nature is kind to us, these effects may be just what we need to see Cooper pairing.”

    Even if Ketterle's optimism proves justified, physicists will still face a daunting challenge: proving that they've reached their goal. When bosonic atoms pile into a single quantum state, physicists can practically see the matter wave they create. They need merely shine a laser on the cloud and take a picture of the shadow it casts. The distinctive shape of the matter wave stands out like Mount Kilimanjaro rising above the plains. In contrast, Cooper pairing in atoms should produce no such spectacular effect, because only a small fraction of the atoms will join in the process and the pairs will signal their presence in much more subtle ways.


    JILA's Deborah Jin first stacked cold fermionic atoms into the “degenerate” state.


    Some researchers envision shaking their cloud of atoms, in which case the atoms' superfluidity should enable the cloud to jiggle almost indefinitely, like an indefatigable blob of gelatin. Others hope to use radio waves to flip atoms from one spin state into another. If those atoms are tied into Cooper pairs, then the radio waves will have to pack extra energy to break the pairs as well. But even these experiments may be hard to interpret, warns Massimo Inguscio of the University of Florence, Italy. “All of these ideas to test the superfluidity of a fermionic gas are not black and white,” he says. “One must be very careful.”

    The N-word

    Ask physicists whether the observation of Cooper pairing will merit a Nobel Prize, and they tend to get cagey. It depends on how many avenues of exploration the new state of matter opens, says Ketterle, who shared the 2001 Nobel Prize in physics for the discovery of the BEC with Carl Wieman and Eric Cornell of JILA and the University of Colorado.

    Strongly interacting fermions—quarks, protons and neutrons, or electrons—underlie some of the most important unanswered questions in cosmology, astrophysics, and solid state and nuclear physics. So many researchers expect that cold fermionic atoms will provide an extremely useful model of these systems. Moreover, experiments on cold fermionic atoms might span the conceptual gulf between Cooper pairs and BECs. A Feshbach resonance can also be used to generate true molecules, so by tuning the magnetic field through a resonance, researchers might transform a gas of paired fermionic atoms into a BEC of bosonic molecules. “My guess is, yeah, the impact is going to be large enough that [Cooper pairing] will be worth a Nobel,” Jin says cautiously. “Of course, I'm probably of the opinion that more things are worth the Nobel than it's given out for.”

    Nobel or no, Cooper pairing will likely prove more important scientifically than the BEC, says Rice's Hulet, a veteran of the race for the BEC. “I think that when we look back 20 years from now, we'll find that the fermions had a lot bigger impact than the bosons,” he says. With advance billing like that, it's no wonder that so many physicists want to be the first to see this next cool state of matter.


    The Contenders

    Adrian Cho*
    *Adrian Cho is a freelance writer in Grosse Pointe Woods, Michigan.

    Six teams have stacked fermionic atoms into their lowest-energy states and are straining to see the atoms pair like the electrons in a superconductor. Deborah Jin and colleagues at JILA in Boulder, Colorado, use fermionic potassium-40 atoms in two different spin states to cool each other. Randall Hulet and his team at Rice University in Houston, Texas, refrigerate fermionic lithium-6 atoms with bosonic lithium-7 atoms, as do Christophe Salomon and his group at the École Normale Supérieure in Paris.

    John Thomas's group at Duke University in Durham, North Carolina, chills two different spin states of fermionic lithium-6, and Wolfgang Ketterle and colleagues at the Massachusetts Institute of Technology in Cambridge cool lithium-6 with bosonic sodium-23. Massimo Inguscio and researchers at the University of Florence, Italy, cool potassium-40 atoms with bosonic rubidium-87—and they might leave the bosons in the mix to act as a kind of glue between the fermions.


    Poor Job Market Blunts Impact of New Master's Programs

    Yudhijit Bhattacharjee

    The Sloan Foundation believes that industry needs graduates with both scientific and professional skills. But tradition and a poor economy pose difficult obstacles

    Everett Salas thought that enrolling in a new professional master's program in computational biology at the University of Southern California (USC) in Los Angeles would lead to a sure-fire job in the biotech sector. After all, the novel 2-year program was specifically tailored to train science graduates for careers in industry. But this spring, as graduation neared, the 28-year-old found out that few companies were hiring and that none seemed interested in his degree. So Salas, who had earlier earned a master's degree in pathology, stepped back onto the academic treadmill, signing up for the university's Ph.D. program in geobiology.

    He's not alone. In fact, seven of the first 15 students in USC's professional science master's (PSM) program have decided to stay in school. Only three have joined industry.

    Those numbers are not what the Alfred P. Sloan Foundation had in mind when it began backing such programs in 1997. At the time, the biotech industry was booming, and doctoral students were peering down a long road with uncertain job prospects. The solution, to Sloan and many others, seemed to be a degree between that of a technician and a full-fledged researcher that would lead directly to a desirable job.

    Over the next 4 years, Sloan handed out grants ranging up to $450,000 to seed 64 programs at 30 universities. Offering a mix of courses in science, business, and law, many were billed as an alternative to the MBA and Ph.D. The idea was to bridge the gap between the bench and the boardroom by training students proficient in the skills needed by both worlds.

    Two cohorts have now completed their training. But a sluggish economy and the reluctance of employers to hire scientists without Ph.D.s have forced many graduates to continue their education or accept academic jobs. “We didn't have our eyes on the pin that was going to deflate the biotech bubble,” says USC statistician Michael Waterman, who helped start the program. “Our expectation was that it would be easy to place our graduates in industry.”

    Instead, the proportion of PSM graduates entering industry has been small. That's especially true in the bioscience-related programs, which account for more than half of the 631 students so far enrolled or graduated under the Sloan initiative (see graphic). For example, out of 11 graduates in computational biology at New Jersey Institute of Technology in Newark, only three work at drug companies. Five have remained in academia, two pursuing Ph.D.s and three working in labs. Similarly, seven of 10 graduates from Pennsylvania State University's PSM in biotechnology are still in academia, including four who are continuing their studies. Only three have joined industry.

    Seeking mastery.

    Bioscience is the most popular field for students in PSM programs.


    Those graduating from programs in the mathematical and physical sciences are faring somewhat better. Nine of the first 15 graduates in industrial mathematics at Michigan State University are working in industry, with only four in the Ph.D. pipeline. The split is five and two for graduates of the financial and industrial mathematics programs at Worcester Polytechnic Institute in Massachusetts and seven and four for the University of Arizona's applied physics and mathematics programs. In contrast, only two of six graduates from Arizona's biosciences program have landed jobs in industry.

    Paula Stephan, a labor economist at Georgia State University in Atlanta, warns against reading too much into these initial placement figures. “Don't forget that most PSM programs were started in the past 3 years, a time of considerable economic downturn,” she says. With fewer jobs available, it's not surprising that more graduates are staying in ivy-covered settings.

    But a sluggish economy isn't the only factor in the slow start of PSM programs. They are also fighting the accepted wisdom that a Ph.D. is needed to oversee other industrial researchers. “One of the justifications for PSMs was that they would be able to fill industry's need for professionals much more quickly than a Ph.D. program,” says Jung Choi, coordinator of the PSM in bioinformatics program at Georgia Institute of Technology in Atlanta. “But when our students do internships in industry, the advice they hear from their supervisors—who are mostly Ph.D. scientists—is ‘Get a Ph.D.’” Of the 23 bioinformatics professionals graduated by Georgia Tech since 2001, seven have stepped onto the doctoral track and only six have joined company payrolls. Four others have become academic researchers, and two have joined federal labs.

    Some of the smaller PSM programs have come closer to meeting the original goals of the new degree. One reason is the presence of midcareer professionals, who have a clearer idea of what they want from the program. The Physics for Entrepreneurship program at Case Western Reserve University (CWRU) in Cleveland, Ohio, for example, has so far produced five graduates, four of whom have gone into industry. One, Marc Umeno, 35, has started a biomedical imaging company called NeoMed Technologies. Another graduate, Fraser Hewson, 32, who spent 5 years as an engineer before joining the CWRU program, is now a technical product manager for GenVac Aerospace, a Cleveland-based thin-films company.

    “I get to wear many hats, from chief IT guy to managing maintenance of equipment,” says Hewson, who adds that the professional science master's course was better preparation for his current job than a Ph.D.—“I didn't want to be an engineering drone”—or an MBA—“I wanted to stick to my technical roots.”

    But unlike Hewson, most PSM graduates in the biotechnology and pharmaceuticals industry aren't performing management tasks. Instead, they are working at the lab bench under the supervision of Ph.D. scientists or providing technical services. Of the 25 students who earned a master's in bioscience (MBS) last year from the Keck Graduate Institute of Applied Life Sciences (KGI) in Claremont, California, for example, only three wound up in management positions and four are doing planning and business development. The rest are working in research or sales.

    “In the future, we expect most of our graduates to gravitate to small biotech companies, where the barriers between the research and management sides are not so rigid,” says T. Greg Dewey, KGI's dean of faculty. Although KGI was set up independently of the Sloan initiative, its MBS program has received funding from Sloan.

    Spread thin.

    Fraser Hewson enjoys filling many roles as technical product manager at thin-films maker GenVac Aerospace in Cleveland.


    Sloan officials remain optimistic that the concept will eventually catch on. But nobody knows how long it will take. One stumbling block, according to Sheila Tobias, outreach coordinator for the Sloan initiative, is that large companies are often structured in such a way that “the familiar MBA, engineer, and Ph.D. slots” are the only jobs available. “I don't know what it will take to change the recruitment habits of industry,” she says.

    Sloan program officer Jesse Ausubel says industry isn't the only destination for PSM graduates. He's not surprised that a number of bioscience professionals coming out of PSM programs are joining academic or federal labs. “After all,” he says, “those are the places where most of [the National Institutes of Health's] $27-billion-a-year budget is going. A science-rich economy like ours is going to have lots of well-paid jobs outside of industry for science professionals.”

    On the other hand, some suspect that the PSM may never be more than a niche degree. Alicia Loffler, who runs a biotechnology MBA program at Northwestern University's Kellogg School of Management in Evanston, Illinois, predicts that science professionals will find a home in a maturing biotech industry but that top positions will always require “a deeper knowledge of the science than is possible through a professional master's. If you look at the CEOs of biotech companies,” she notes, “they are mostly Ph.D.s or M.D.s.”

    In the end, that's the type of degree Salas decided he wanted, too. “The bad job market made me reflect on why I'd started in science to begin with,” says Salas, who plans to apply the gene-sequencing techniques that he learned in his PSM program to his research in geobiology. “At my lab, we are doing stuff that nobody else has done. And honestly, I've never felt better about my job prospects.”


    Shape Shifters Tread a Daunting Path Toward Reality

    Dana Mackenzie*
    *Dana Mackenzie is a writer in Santa Cruz, California.

    New designs of robots built from cell-like modules are learning to walk, slither, roll, flow, and reinvent themselves on the fly

    From Star Trek to Star Wars to Terminator, science-fiction movies and TV series teem with morphing robots. If that vision is indeed a sneak preview of things to come, then the future of robotics is lying face down on a table in Mark Yim's laboratory at the Palo Alto Research Center (PARC) in California. Or rather, was lying face down, because now it's trying to stand up.

    First, the 30-centimeter-long, vaguely humanoid robot pushes itself up on all fours into a head-down, yogalike pose. Next, because the motors in its waist aren't strong enough to lift its torso from that position, it places its head on the table. This frees the arms, which swing back behind the legs, giving it a more favorable mass distribution. Finally, the torso straightens up, and “Terminator 0.001” is ready for action.

    At the moment, standing up is about all the action this particular robot can handle. Unlike its cinematic kin, PARC's Terminator cannot shoot people, take over the world, or (as one wag has suggested) run for governor of California. But what it does share with movie robots is the potential to change form. It is composed of 15 identical palm-sized modules, called PolyBots, versions of which are learning to reassemble themselves into sundry snake, insect, and wheel configurations and—most important—move around.

    From one cell …

    Mark Yim's “Telecube” robots are made of modules that can double in size by extending the sides outward.


    Yim's group at PARC, which consists of six full-time researchers and an ever-changing cadre of college and even high school interns, is one of several teams in North America, Europe, and Japan trying to build modular, self-reconfigurable robots. Like living organisms, these machines would be constructed out of many “cells” of a few basic types. They would be able to change their body plans and detach or attach extra modules without outside assistance. In theory, they would be more versatile, more robust, and cheaper than special-purpose robots (assuming that the modules could be mass-produced). In practice, modular robots have yet to reach most of these goals, to say nothing of the more far-fetched capabilities of their fictional counterparts. Nevertheless, the field has moved ahead rapidly since 1989, when Toshio Fukuda of Nagoya University in Japan built the first prototypes. Modular robots may soon get their first chance to prove themselves in the real world.

    Dockers and sliders

    At this point, there are more designs for reconfigurable robots than labs working on them, each with its own strengths and weaknesses. Lattice robots, such as Yim's “Telecube” and the “Crystal” robots built by Daniela Rus of Dartmouth College in Hanover, New Hampshire, offer perhaps the ultimate in morphing ability. The robots are stacks of cubical modules. By extending or retracting their faces (see figure), the modules can move through the stack like the squares in a two-dimensional sliding-tile puzzle. Yim's group has proven that, given enough modules, lattice robots can reconfigure themselves from any shape to any other. In computer simulations, the sequence of moves looks baroque yet eerily systematic. Finding the quickest sequence still poses a formidable mathematical challenge (see sidebar, p. 756).
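
    For readers who like the mechanics spelled out, the sliding-tile analogy fits in a few lines of Python. The sketch below is a toy model, not PARC's or Dartmouth's control software: it enforces only the constraint that the robot stay in one connected piece, whereas real hardware adds further restrictions.

        # Toy model of lattice-robot moves (illustration only). A robot is a
        # set of grid cells; a module may slide into an adjacent empty cell
        # as long as the whole robot stays connected.
        from collections import deque

        NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

        def connected(cells):
            """True if every module touches the rest of the robot."""
            start = next(iter(cells))
            seen, queue = {start}, deque([start])
            while queue:
                x, y = queue.popleft()
                for dx, dy in NEIGHBORS:
                    nxt = (x + dx, y + dy)
                    if nxt in cells and nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return seen == set(cells)

        def legal_moves(cells):
            """All single-module slides that leave the robot connected."""
            moves = []
            for cell in cells:
                for dx, dy in NEIGHBORS:
                    target = (cell[0] + dx, cell[1] + dy)
                    if target in cells:
                        continue  # occupied: modules never compete for a cell
                    if connected((cells - {cell}) | {target}):
                        moves.append((cell, target))
            return moves

        # An L-shaped trio of modules has exactly two legal slides.
        print(legal_moves({(0, 0), (1, 0), (0, 1)}))

    Chaining such moves in the right order is what lets a stack of modules reshuffle itself into any other connected shape.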

    … To many.

    Daniela Rus pioneered the expanding-cube design for a lattice robot. Computer simulations by her lab show how stacks of lattice modules can reshuffle themselves into a nearly limitless variety of shapes.


    Lattice robots offer one other important advantage: Whereas most modular robots must monitor all their cells continuously to keep different parts from smacking into one another, in a lattice robot each module has to watch out only for its neighbors. Because the modules can occupy positions only in a grid, they must touch each other and “introduce themselves” before they compete for the same space. Their decentralized architecture lets lattice robots move in a free-form, spontaneous way, which Rus compares to the flowing of water.

    Another popular robot design is the chain or branched chain, in which the modules attach mostly in sequence but can also branch off if desired. The locomotion of these robots inspires animal metaphors, such as snakes, spiders, and centipedes. But they can also attach end to end and form wheels. More structured than the flowing motion of lattice robots, these “gaits” have, so far, proved more suitable for accomplishing tasks that might have practical value, such as climbing fences or tunneling through pipes.


    Through a series of gyrations, PARC's PolyBots can autonomously switch from a basic slithering-snake configuration (top) to a four-legged walking spider (bottom).


    The most advanced chain robots at present are Yim's PolyBots and the CONRO robots built by Wei-Min Shen and Peter Will of the University of Southern California in Los Angeles. PolyBot and CONRO are comparable in many ways. Mechanically, they are basically hinges: PolyBot modules can bend in one direction, like an elbow, whereas CONRO modules can bend in two. Each module carries its own computer chips and can attach to as many as four neighbors. Both robots have physical latches for “docking” modules together. In a PolyBot the connection is also electronic, whereas CONRO modules communicate via infrared light.
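
    As a rough data-model sketch, a chain-robot module might be captured as below. The port names, attribute names, and docking logic are hypothetical, chosen only to mirror the description above; this is not the PolyBot or CONRO firmware.

        # Illustrative model of a chain-robot module; names are hypothetical.
        from dataclasses import dataclass, field

        @dataclass
        class Module:
            hinge_axes: int            # 1 for a PolyBot-style elbow, 2 for CONRO
            comm: str                  # "electronic" (PolyBot) or "infrared" (CONRO)
            angle: float = 0.0         # current bend, in degrees
            neighbors: dict = field(default_factory=dict)  # port name -> Module

            PORTS = ("front", "back", "left", "right")     # up to four neighbors

            def dock(self, port, other, other_port):
                """Latch two modules together through named docking ports."""
                assert port in self.PORTS and other_port in other.PORTS
                self.neighbors[port] = other
                other.neighbors[other_port] = self

        head = Module(hinge_axes=2, comm="infrared")
        body = Module(hinge_axes=2, comm="infrared")
        head.dock("back", body, "front")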

    Chain robots aren't about to win Olympic medals for speed or agility. Simply attaching two modules—something any 3-year-old could do with ease—takes PolyBot 30 seconds. “Getting them to touch and dock blindly is hard,” Yim says. “When you have a chain of several modules together, the positional error for each module builds up as you go down the chain.” For that reason, whenever two PolyBot or CONRO modules dock, they have to shine infrared beams at each other for guidance—“flying down the beam,” Will calls it. Even trickier is dealing with the outside world. Although researchers have experimented with equipping some robots with cameras and other sensors, the machines still cannot decide for themselves when to reconfigure.

    Rapid response.

    Wei-Min Shen's CONRO robots can adapt on the fly to changes in shape.


    What they can do is move. Both PolyBot and CONRO have mastered the three main gaits—slithering, crawling, and rolling—and can switch gears handily in response to a change in shape. CONRO has an especially spectacular ability to adapt on the fly. “As far as we know, we are the only people who do live surgery on robots,” says Will. If you break a snake robot apart, it will start crawling as two snakes. Stick a snake's tail in its mouth, and it will figure out what happened and roll like a tank tread. Finally, if you attach snakes to the sides of a snake, they will realize that they are now legs and change gait accordingly. “The surprise in people's eyes when they see this is amazing,” Will says. “When the thing gets up and walks, all your human feelings about robots come out. Some people cheer for it, and others find it scary.”

    Daredevil ‘bots.

    Yim hopes to send modular robots into a hazardous environment.


    CONRO's adaptability comes from an innovative, decentralized control system, analogous to biological hormones. “In the body, the same chemical signal causes your hand to wave, your mouth to open, and your legs to move,” explains Shen. Similarly, in a CONRO robot, a module's reaction to signals (or “hormones”) from the other modules depends on its current function. Each module constantly monitors its own neighbors to determine what its current role is—say, a head, a spine, or a leg. If its position changes, so will its behavior.
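
    The scheme is easy to caricature in code. In this sketch, which reuses the hypothetical Module model above and is a simplification for illustration rather than the actual CONRO software, every module receives the same “hormone” message and decides what to do from the role it infers locally; the role names and angles are invented.

        # Hormone-style control, caricatured (not the actual CONRO code).
        # Every module gets the same message; its response depends on the
        # role it infers from its own connections, re-checked constantly.
        def infer_role(module):
            n = len(module.neighbors)
            if n == 1:
                return "head" if "back" in module.neighbors else "tail"
            if n == 2:
                return "spine"
            return "hip"                  # a branch point where limbs attach

        def on_hormone(module, hormone):
            role = infer_role(module)     # a spine that becomes part of a leg
            if hormone == "walk":         # changes behavior on its own
                if role == "spine":
                    module.angle = 30.0   # arch to help lift the body
                elif role in ("head", "tail"):
                    module.angle = 0.0    # stay level
                else:
                    module.angle = -45.0  # swing the attached limbs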

    For specific marching orders, CONRO modules all get the same message and use information about their neighbors to determine how to act on it. PolyBots don't have to think as hard: Their modules consult a “gait-control table,” usually with a central control, to get unique messages tailored for each module. To slither forward like a snake, for example, the robot's “brain” might tell one module after another to bend to the left, straighten, and then bend to the right, sending a wave of motion down the snake's body. A different cycle of bending and straightening makes the tank-tread configuration roll (see figure below).
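
    One minimal way to realize such a table (a sketch of the idea, not PARC's implementation) is a cycle of joint angles that each module reads at an offset given by its place in the chain; the offset is what turns a fixed table into a wave traveling down the body. The four-state cycle is modeled on the figure below; the numbers are invented.

        # Minimal gait-control-table sketch (an illustration of the idea,
        # not PARC's implementation). Each module reads the same table at
        # an offset set by its position, so the bend travels like a wave.
        TABLE = [30.0, 0.0, -30.0, 0.0]   # bend left, straighten, bend right, straighten

        def snake_angles(n_modules, t):
            """Joint angles for the whole chain at time step t."""
            return [TABLE[(t + i) % len(TABLE)] for i in range(n_modules)]

        for t in range(4):                # watch the wave move down the body
            print(snake_angles(6, t))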

    Treading lightly.

    In “wheel” configuration, Yim's PolyBots move by consulting a gait-control table that cycles each module through four different states.


    Stepping out

    So far, no modular robot has stepped, rolled, or slithered out of the laboratory to prove its mettle in the real world. But that may change soon. Recently, Yim's group was included in a 5-year grant from NASA to explore an abandoned mine where biologists have found bacteria thriving in ultra-acidic water (Science, 10 March 2000, p. 1731). For humans, the site varies from hazardous to completely inaccessible. “Some parts would require being completely submerged,” Yim says. “It's not clear if swimming, floating, or crawling on the bottom would be best. It is likely that different gaits would be required.” In other words, it's a perfect job for a reconfigurable robot.

    If the PolyBots succeed in bringing back microbe samples, biologists could learn more about one of the most bizarre ecosystems on Earth. In the future, modular robots may also be used to build power stations in space (a project the CONRO team is working on) or to conduct search-and-rescue operations. Eventually, Yim would like to see one in every garage. “Our ultimate vision is what we call a ‘bucket of stuff,’” Yim says. “It will take decades, but hopefully not centuries. You go up to this bucket of stuff and say, ‘Do the dishes. Change the oil in my car.’ It climbs out and puts itself together in the appropriate shape. It's easy to command, it understands the environment, and it does the dishes, too.”


    Topologists and Roboticists Explore an ‘Inchoate World’

    1. Dana Mackenzie*
    1. Dana Mackenzie is a writer in Santa Cruz, California.

    Last month, in a workshop at the Swiss Federal Institute of Technology in Zürich, researchers explored the common ground between the very concrete subject of robotics and the very abstract world of topology; from all reports, they found a lot to talk about. “The conference … was perhaps the most exciting I have ever attended,” says Steven LaValle, a roboticist at the University of Illinois, Urbana-Champaign (UIUC).

    Topology is involved even in an elementary robotic device, such as an arm with two pivots (see figure). At any time, the configuration of the robot arm can be described by two angles. These run from 0° to 360°, with every different pair of angles corresponding to a different configuration. (However, 0° and 360° are considered to be the same angle.) The angles can be thought of as “latitudes” and “longitudes” on a torus—the topologist's favorite surface. Any movement of the arm corresponds to a path from one place to another in the torus. “What you can achieve is determined by topology,” says Daniel Koditschek, an electrical engineer at the University of Michigan, Ann Arbor.
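
    For the record, the construction can be written down in two lines of standard robot kinematics (the link lengths here are generic symbols, not measurements of any particular arm):

        % Identifying 0 degrees with 360 degrees turns each angle into a
        % circle S^1, so the two-pivot arm's configuration space is a torus:
        \[ C \;=\; S^1 \times S^1 \;=\; T^2 \]
        % With link lengths \ell_1, \ell_2, and \theta_2 measured relative
        % to the first link, the hand sits at:
        \[ x = \ell_1\cos\theta_1 + \ell_2\cos(\theta_1 + \theta_2), \qquad
           y = \ell_1\sin\theta_1 + \ell_2\sin(\theta_1 + \theta_2) \]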

    Modular robots, of course, contain many more than two moving parts. Every part adds at least one dimension to the “configuration space.” Thus, for example, each pose of Mark Yim's 15-module humanoid PolyBot, built at the Palo Alto Research Center in California, corresponds to a point in a 15-dimensional space. Although 15 dimensions might be baffling to most people, they pose no difficulty for topologists, who are used to spaces with many dimensions.

    Go configure.

    Topologists translate movements of a robot's arm (left) into paths through a “configuration space” (right).


    In Zürich, topologists Robert Ghrist of UIUC and Aaron Abrams of the University of Georgia in Athens showed how the topology of configuration spaces might simplify the movement of lattice robots, whose movements can be described by discrete translations. (Their work does not apply to robots with continuous motions, such as Yim's PolyBots.)

    If you want a lattice robot to morph from, say, a wheel to a centipede, your best bet now is a sort of blind flailing around through configuration space that takes you from wheel-like shapes to centipede-like shapes. But Ghrist and Abrams have devised a path-shortening algorithm that shrinks the random stagger down to the shortest possible path. Their idea exploits the possibility of moving cubes simultaneously whenever they don't interfere with each other. It also uses a deep theorem by French topologist Mikhael Gromov—proved in a completely different context—to show that you never get hung up on an intermediate path that appears shortest but isn't. In their algorithm, says Ghrist, “there's no deity instructing every module where to go. You just optimize the path locally, and then you pull out this abstract theorem that gives you the global result—bam!” According to Gregory Chirikjian, a roboticist at Johns Hopkins University in Baltimore, “until now, the people who worked on modular self-reconfigurable robots have established their own algorithms, whereas the Ghrist and Abrams approach is more of a blanket approach that has potential to be applied to any of these [lattice] systems.”
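
    The flavor of that local optimization can be seen in a toy scheduler, sketched below. It is not Ghrist and Abrams's algorithm, only an illustration of the one ingredient described above: packing single-module moves into simultaneous steps whenever they don't interfere. Here “interfere” means sharing a grid cell; a real planner must also preserve connectivity.

        # Toy parallelizer for lattice moves (an illustration, not Ghrist
        # and Abrams's algorithm). Each move is a (source, target) pair of
        # grid cells; moves that share a cell must keep their order.
        def interferes(m1, m2):
            return bool(set(m1) & set(m2))

        def parallelize(moves):
            steps = []    # steps[i] = moves executed simultaneously
            placed = []   # (move, step index) of everything scheduled
            for move in moves:
                # go right after the last earlier move we interfere with
                earliest = 1 + max((s for m, s in placed
                                    if interferes(move, m)), default=-1)
                while len(steps) <= earliest:
                    steps.append([])
                steps[earliest].append(move)
                placed.append((move, earliest))
            return steps

        # Four sequential slides on two far-apart towers collapse into two steps.
        seq = [((0, 0), (0, 1)), ((5, 0), (5, 1)),
               ((0, 1), (0, 2)), ((5, 1), (5, 2))]
        print(len(parallelize(seq)))   # -> 2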

    “That's where mathematicians can really contribute something,” Ghrist says. “The mind frame of roboticists is to work only on the system they have in the lab.” But mathematicians have the luxury of looking for a broader theory. So far, the details of such a theory are sketchy. “Reconfigurable systems are still an inchoate world,” says Koditschek. But if the work of Ghrist and Abrams provides any clue, some of the answers for navigating that world may already be lurking in topology books.


    Depression Drugs' Powers May Rest on New Neurons

    1. Gretchen Vogel

    Blocking neurogenesis in adult mice renders antidepressants ineffective, lending support to a theory that a dearth of newborn neurons contributes to depression

    People who recover from depression sometimes speak of feeling reborn. Tantalizing evidence suggests that this renaissance isn't merely metaphoric: In the brain, newborn neurons may help ease depression.

    Several seemingly disconnected observations lie behind the idea that neurogenesis and depression are linked. Stress, which suppresses neurogenesis, can also trigger bouts of depression. Certain brain regions in chronically depressed patients tend to be smaller than those of their nondepressed counterparts (see p. 760). And antidepressant medications seem to promote neurogenesis—at least in rodents. The theory, first proposed several years ago (Science, 13 October 2000, p. 258), has begun to attract more attention. But it is still based largely on circumstantial evidence. No one knows exactly what new neurons do in the brain (Science, 3 January, p. 32), so it is hard to determine how a lack of neurogenesis might lead to depression.

    Work described on page 805 provides some of the best evidence to date that changes in neurogenesis might at least partly explain how the disease progresses. Neuroscientists René Hen, Luca Santarelli, and Michael Saxe of Columbia University in New York City and their colleagues show that blocking neurogenesis in mice also blocks the effects of antidepressants. The experiments, says neuroscientist Fred Gage of the Salk Institute in La Jolla, California, are the first to show a cause-and-effect relationship between the growth of neurons and depression. Although it does not clear up all the questions about the connection, he says, “it definitely supports the idea that neurogenesis is involved” in the disease.

    The researchers used a standard model for testing the efficacy of antidepressants in rodents. They placed hungry mice in an arena in which food was visible in the brightly lit center. The more reluctant mice are to venture into the light to eat, the higher they score on a scale of anxiety and depression. After 4 weeks of treatment with either fluoxetine (better known as Prozac) or drugs from another antidepressant family, the mice retrieved the food about 35% faster than those given water. The increased moxie was accompanied by a 60% increase in the number of dividing cells in a brain region called the hippocampus, which is involved in memory and learning and is a site of neurogenesis in the adult brain.

    The result is consistent with previous studies showing an increase in brain cell growth in response to antidepressant drugs. But such studies don't prove that the new cells are related to behavioral changes linked to antidepressants. “Antidepressants probably do a lot of things that have nothing to do with the [therapeutic] action of the drug,” says neuroscientist Elizabeth Gould of Princeton University in New Jersey.


    Without the growth of new neurons (blue) from hippocampal stem cells (green), antidepressants are ineffective.


    The researchers then tested whether blocking the brain's ability to grow new cells would also block the effectiveness of antidepressants. They exposed mice to x-rays aimed at the hippocampus to kill the stem cells that give rise to new neurons. Irradiated mice were different from untreated counterparts in two important ways: Four weeks of treatment with antidepressants did not prompt the usual spurt of new neurons, and the drugs had no effect on the animals' reluctance to retrieve food pellets from brightly lit areas.

    The authors acknowledge that radiation has more effects than just killing stem cells. To check whether more general brain damage could explain the drugs' impotence, the team looked at the irradiated animals' stress responses and learning abilities. The researchers found no differences between treated and control animals. Cellular studies also showed that neurons from the irradiated hippocampus responded normally.

    In a second set of experiments, the scientists tested genetically engineered mice that lack a serotonin receptor known to play a role in responses to fluoxetine. These knockout mice aren't affected by fluoxetine, but they do respond to so-called tricyclic antidepressants, including imipramine and desipramine, which work through a brain chemical called norepinephrine. When treated with tricyclic antidepressants, the knockout mice grew new neurons and retrieved the brightly lit food faster. But given fluoxetine, which as expected induced no behavioral response, the mice also had no new neuron growth.

    The results do not prove that a lack of neurogenesis causes depression—or that increased neurogenesis cures it. In fact, the authors note that two of their results don't quite fit the theory: First, the knockout mice that lack the serotonin receptor are more anxious than other mice, but they have normal levels of neurogenesis. Second, the irradiated mice, in which neurogenesis is blocked, do not seem to be more anxious or depressed than control mice. The observations don't have easy explanations, says Gage, except that “things are a lot more complicated than just the simple interpretation one might be tempted to take.”

    Another puzzle has been that the hippocampus is traditionally associated with learning and memory rather than emotions, whereas human depression is usually considered a mood disorder. But Hen points to recent studies that suggest a role for the hippocampus in mood as well. A study by David Bannerman and his colleagues at the University of Oxford, U.K., for example, suggests that damage to the dorsal part of the hippocampus leads to learning deficits, whereas damage to the ventral region has no effect on learning but leaves rats less anxious.

    Many questions about the role of neurogenesis in depression will have to wait until scientists develop better animal models of depression or ways to probe cell growth directly in human patients. “There are some things that I would predict are very specific to humans,” Gould says. Co-author Ronald Duman of Yale University says a deeper understanding may come from following the newborn neurons as they mature. “What we would like to see is how these cells fit into the physiology of the hippocampus. We're not quite there yet.”


    Applying Himself to the Business of Space

    1. John Bohannon*
    1. John Bohannon is a freelance writer in Paris.

    Far from being a threat to science, the new business-minded chief of the European Space Agency insists that applied projects will be its savior

    PARIS—Here at the headquarters of the European Space Agency (ESA), Jean-Jacques Dordain seems to move at light speed from office to office. ESA's new director is in high spirits: From his vantage point, things are looking up for the embattled agency. For several months, a cloud of doubt had hung over Galileo, a $1 billion collaboration between ESA and the European Union to set up a civilian version of the U.S. military's Global Positioning System (Science, 25 April, p. 571). Now just a few weeks after moving into the director's office, Dordain is celebrating the signing of the first set of long-awaited Galileo contracts between ESA and its industry partners. “Galileo is right on schedule,” beams Dordain, smoothing a tie printed with brightly colored cartoon rockets blasting off. Galileo's first transmission to paying customers is scheduled for 2008.

    To Dordain, Galileo and other commercial space activities promise to be ESA's salvation. The agency has been struggling to keep its basic science missions afloat because of the shrinking pockets of its 15 national paymasters. “I don't want to see the funding of any ESA program decrease,” says Dordain. To achieve this, he is promoting ESA as a “provider of solutions” for “clients” such as the E.U. Expanding ESA's business side, he says, will free its basic science programs from having to share resources with space applications.

    But to many European space scientists, Dordain's lofty idea is going down like a lead balloon. They fear that ESA is swinging away from basic research in favor of moneymaking ventures such as Galileo. “The mood these days is very gloomy,” says Len Culhane, director of the Mullard Space Science Laboratory at University College London. A green paper on space policy, the result of 4 months of soul-searching by European scientists, politicians, and the general public, was discussed here at a conference on 17 July. Many of the alternative futures for the 28-year-old agency could put basic science on the chopping block, Culhane and others fear.

    Few, though, doubt that ESA must transform to survive. “Science is in a bad state, so something has to be done,” says Risto Pellinen, a Finnish atmospheric physicist and head of ESA's Science Programme Committee.

    Dream job.

    Jean-Jacques Dordain has been obsessed with space since childhood.


    For Dordain, this job was his destiny. “Space is my life,” he says with passion. “Sputnik was launched when I was 10, and then came Apollo. … I had no choice.” He started as a rocket engineer, and his dream was to ride a rocket into orbit. Although the opportunity never materialized, he was among the first group of astronaut candidates selected in France. “But I'm still ready to go,” he adds. Dordain has worked his way through the ESA ranks since 1986.

    Despite Dordain's bullishness, ESA's scientific missions continue to be plagued by uncertainty. Since 2001, the 15 member states have kept ESA's science budget frozen, resulting in an annual 3% erosion in spending power due to inflation. Not only does this leave no room for planning missions beyond those already on the books up to 2012, but unforeseen costs can doom ongoing projects. Currently on tenterhooks is Rosetta, a probe designed to rendezvous with a comet and land on its surface. Rosetta was due to set off last January before a launcher failure in December 2002 grounded the mission. The cost of storing it, carrying out a new launch, and lining up a new cometary target has drained $100 million from ESA's science budget. Despite this setback and rumors that the project might be axed, Dordain says Rosetta is “definitely” scheduled for launch in February 2004.
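
    The arithmetic of that freeze is unforgiving. Assuming inflation holds steady at 3%, a budget frozen at a value B is worth, in real terms, after n years,

        \[ B_{\mathrm{real}}(n) \;=\; \frac{B}{1.03^{\,n}} \]

    so roughly 6% of the program's purchasing power has already evaporated in the 2 years since 2001, and a 5-year freeze would cost nearly 14%.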

    Other highlights on ESA's roster of scientific missions include Mars Express, which is now on the way to the Red Planet, and Smart-1, a crewless mission to the moon set to launch this month that will test a new ion drive. For the longer term, ESA's Aurora program plans a wide-ranging search across the solar system for signs of life, with the aim of putting a human on Mars around 2030.

    In the green paper consultation that ended last month, scientists proposed doubling ESA's $400 million science budget to safeguard these projects. “This is logical,” Dordain says, “because the United States has about the same-sized population and economy as Europe but spends twice as much on basic research in space.” This imbalance, he says, “makes no sense, neither to the scientists nor the citizens.”

    Dordain says that he can increase funding for ESA science missions, but not directly. The problem, he says, is that many member states channel funds for ESA through their science ministries, and “they will never increase how much they spend” because it would mean cutting funding to researchers in their own countries. Because much of ESA's overall $2.9 billion budget is spent on application projects, Dordain is looking to tap other pots of money in the member states, “money dedicated to transport policy, to defense policy, and to the enlargement of the European Union.” Dordain argues that money from the science ministries could then be earmarked for science missions.

    The “idea makes great sense,” says Pellinen. Others are dubious. “I do not think this can work,” says Johan Bleeker, director of the Space Research Organization Netherlands in Utrecht. “It is greatly oversimplified to think that one might create alternative funding sources for space [research] without [losing] existing resources which reside—rightly or wrongly—at research ministries.”

    Another strategy to ease ESA's financial travails that is under discussion is to merge more closely with the E.U., ultimately becoming something more like its U.S. counterpart, NASA. “But this is a very sensitive issue,” says Pellinen, because “ESA wants to keep its independence.” Indeed, European space scientists will be watching closely to see whether Dordain can maintain ESA's scientific excellence, says Pellinen, as the agency is prodded into a tighter embrace with industry.
