News this Week

Science  24 May 2002:
Vol. 296, Issue 5572, pp. 1376



    Pioneering Physics Papers Under Suspicion for Data Manipulation

    Robert F. Service

    Recent discoveries at Bell Laboratories—the research arm of Lucent Technologies in Murray Hill, New Jersey—said to be of Nobel quality suddenly became mired in questions last week. Outside researchers presented evidence to Bell Labs management on 10 May suggesting possible manipulation of data involving five papers published in Science, Nature, and Applied Physics Letters over 2 years. In response, Bell Labs officials said that they are forming a committee of independent researchers to investigate. Their conclusions may not be known for months, but scientists who have seen the data are already saying that the potential fallout from the investigation could be devastating.

    The Bell Labs papers describe a series of different experiments with organic conductors, but portions of the figures seem almost identical, according to the physicists who raised suspicions. Particularly puzzling, they say, is the fact that two graphs show a pattern of “noise” that looks identical, although it should vary randomly.

    Bell Labs physicist Jan Hendrik Schön is lead author on the papers in question and the only author whose name appears on all five. Among his most frequent co-authors are two colleagues from Murray Hill, Bertram Batlogg—a former Bell Labs physicist who has since moved to the Swiss Federal Institute of Technology in Zurich—and Bell Labs chemist Christian Kloc. Schön told Science he stands behind his data and says it's not surprising that experiments with similar devices produce similar-looking data. “We are trying as hard as we can to repeat those measurements,” Schön says. “I am convinced they will show I haven't done anything wrong.” Co-authors on the five papers either declined public comment or could not be reached.

    Many scientists have reacted with disbelief. “I'm shocked,” says James Heath, a chemist at the University of California, Los Angeles, and director of the California NanoSystems Institute: “It's hard to understand. I know these people. Most of them are good, careful scientists.” “It's a little overwhelming,” adds Lydia Sohn, a Princeton University physicist who helped bring some of the discrepancies to light. “It's just disturbing, and disappointing, and sad.” The noise pattern is particularly disturbing, says Charles Lieber, a chemist and nanoscience expert at Harvard University in Cambridge, Massachusetts: “It's virtually impossible for me to believe that some of this wasn't made up.”

    Schön himself acknowledges that the similar noise pattern is “difficult to explain.” But others affiliated with Bell Labs suggest privately that a systematic artifact in the measurement equipment might account for the similar noise trace, and that in the other cases, computer files containing similar data could have been mixed up.

    Still, Lieber and others say the concerns are so serious that the authors should immediately withdraw the papers in question. “They should be retracted until they can be duplicated,” Lieber says. But Cherry Murray, who heads physical sciences research at Bell Labs, says the company won't take any action until the external review committee reaches its conclusion. “We are not rushing to judgment,” Murray adds. Science's editor-in-chief, Donald Kennedy, says that's the right course of action. “Until one completes an investigation, it's premature to make any decisions about the papers,” he says.

    Until last week, most physicists viewed Schön and his collaborators with something between envy and awe. Schön joined Bell Labs as a postdoc in 1998 to work with Batlogg and Kloc, setting out to study the way electrical charge moves through organic crystals. They soon propelled Bell Labs beyond all competition in the nascent field of organic transistor research.

    In a series of groundbreaking papers—most of which are not directly implicated in the current inquiry—the researchers showed that they could use devices called field effect transistors (FETs) to inject large numbers of electrical charges into organic materials. By changing the concentration of charges, they could tune the electronic properties of the materials to behave in any number of ways—as an insulator or semiconductor, a metal or superconductor—exhibiting a malleability that had never been seen before.

    The group also reported that organic FETs displayed superconductivity at a temperature higher than had ever been seen in an organic material, revealed quantum signatures never before seen in organics, and could be made to act as lasers and novel superconducting switches. Physicist Art Ramirez of Los Alamos National Laboratory in New Mexico, praising the work in an interview prior to the recent revelations, says “the string of papers is really outrageous” in its success. “I don't know of anything like it.” Heath says he was equally impressed: “I saw Batlogg talk about [the team's results] a year ago at a meeting in Venice. I was blown out of my chair. I thought, ‘These guys are going to Stockholm.’”

    The astounding results prompted groups around the world to attempt to replicate the work. But to date, although other researchers have made some progress, no one has reported duplicating any of the high-profile results. That troubled many in the community, says Cornell University physicist Paul McEuen, the first to notice the apparent duplication of data.

    Some physicists grew more concerned last fall when Schön published a pair of papers on a different topic in Nature and Science with Bell Labs colleagues Hong Meng and Zhenan Bao. In the first, published in the 18 October 2001 issue of Nature, the researchers reported making a novel type of transistor in which the key charge-conducting layer was composed of a single layer of an organic conductor. In the Science paper, published in the 7 December issue (p. 2138), they reported diluting that charge-conducting layer with nonconducting insulating molecules, allowing them to track the electrical conductivity in a transistor through a single molecule. Together, the results received international press attention as a triumph of molecular-scale electronics. But McEuen says the papers puzzled researchers because, despite the novel architecture of the devices, they seemed to conduct current in a manner similar to traditional FETs.

    Last month, a more troubling aspect came to light: Researchers noted that figures describing the conductivity in the two papers appeared identical, even though the measurements were supposedly done at temperatures different enough to affect the results. According to Princeton's Sohn, several Bell Labs researchers pointed out the identical figures to her, McEuen, and others. “Collectively, people at Bell [Labs] were nervous,” says Sohn, although she declines to identify who tipped her off. Word of the duplicate figures began to spread. And late last month, Lieber and Harvard physicist Charles Marcus contacted manuscript editors at Science and Nature informing them of the apparent problem.

    A few days later, even before he had heard from Science, Schön e-mailed Science associate editor Ian Osborne to say there had been a mix-up and that the wrong figure had mistakenly been incorporated in the Science paper. Schön also sent along a new figure, which appears as a correction on page 1400 of this issue.

    But Sohn says the mix-up explanation just didn't sit well with her or McEuen. “Paul said, ‘Lydia, I'm just going to look at the data, the figures,’” says Sohn. And on Thursday, 9 May, McEuen stayed up much of the night looking through Schön's Science and Nature papers and found what he calls two “disturbing” coincidences.

    Striking resemblance.

    Published data from studies of different devices revealed a similarity in recorded “noise.” Schön says the bottom figure was sent to Science by mistake (see correction, p. 1400).

    The first involves the same duplicate figures that prompted the heads-up from Lieber and Marcus. McEuen noticed a close resemblance with yet another figure, this one in the 11 February 2000 issue of Science (see figures above). The figures show how changes in an electrical voltage applied to a control electrode called a gate alter the ability of charges to conduct through a simple circuit of two FETs. The devices in the 11 February 2000 Science paper reportedly contain different materials in the key charge-conducting channel in each FET and a different physical geometry, both of which should cause these devices to conduct current differently from devices described in the other papers, says McEuen. But when McEuen resized the figures and overlaid the data, he found that the seemingly uninteresting background data on the right portion of the figures looked similar. “The noise looks almost identical, bumps and all,” McEuen says. “This is very confusing and disturbing. They should be vaguely similar, maybe roughly similar. But certainly the noise shouldn't be the same,” McEuen says. “This knocked me for a loop.”
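    The article describes McEuen's check as a visual one: resizing the figures and overlaying the traces. As a loose illustration of why identical noise is so damning (this is not McEuen's actual procedure), one can correlate two noise records numerically. Independent noise should correlate near zero, whereas a duplicated trace correlates perfectly:

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

random.seed(0)
trace_a = [random.gauss(0, 1) for _ in range(500)]   # one simulated noise record
trace_b = [random.gauss(0, 1) for _ in range(500)]   # an independent record
duplicate = list(trace_a)                            # a copied record

# Independent records correlate near zero; a duplicate correlates at exactly 1.
r_independent = pearson(trace_a, trace_b)
r_duplicate = pearson(trace_a, duplicate)
```

    Two measurements of genuinely different devices should behave like `trace_a` and `trace_b`; traces whose "bumps and all" line up after rescaling behave like the duplicate.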

    He quickly got another shock. McEuen noticed that the same 11 February 2000 Science and 18 October 2001 Nature papers contained another similar figure, which also closely resembled a figure in a third paper, from the 28 April 2000 issue of Science. All three papers describe different organic conductors. Yet, if one ignores the labels, several of the data traces appear very similar. “There is no physical reason why they should be so similar,” McEuen says.

    The next day, 10 May, McEuen says he and Sohn were concerned enough that they contacted Schön, Batlogg, managers at Bell Labs, and manuscript editors at Science and Nature. He says that all involved expressed deep concern.

    A couple of days later, Sohn found another uncomfortable coincidence. In this case, six data traces in a figure in the 3 November 2000 issue of Science appear virtually identical to ones in the 4 December 2000 issue of Applied Physics Letters (APL). But, whereas the Science paper tracked the conductivity of a light-emitting organic material known as α-6T in a FET, the APL paper followed the conductivity in a non-light-emitting FET made with an organic compound called perylene. Moreover, whereas the FETs in the α-6T figure are “n-channel” devices, which conduct negatively charged electrons, those in the perylene figure are “p-channel” devices, which conduct positive charges. According to McEuen, most physicists believe that should cause the devices to conduct current in a slightly different manner. “They overlap, noise and all,” says McEuen. “They are identical,” except that the labels on the axes referring to the voltages applied to the devices have an opposite sign, he adds.

    Taken together, the three examples are deeply troubling, says Leo Kouwenhoven, a physicist at Delft University of Technology in the Netherlands. “I think that it is very worrisome,” Kouwenhoven says. “I can imagine you switch one figure by mistake. It's hard to imagine how you switch 10 figures.”

    Schön says that because his papers report the conductivity in FETs, “I would expect them to be very similar.” He declines to comment on other specific issues. Bell Labs' Murray declines to comment on specifics as well, but adds: “I am very concerned. … This deserves a full and complete investigation.”

    A five-member committee headed by Stanford University physicist Malcolm Beasley began an investigation last Friday. Beasley says he cannot estimate how long it will take or whether it will be broadened to look at data presented in other papers. Schön has been the first author on 17 papers in Science and Nature alone in the last 2.5 years and a co-author on dozens elsewhere. Beasley says: “We are hoping for something by the end of summer.”

    McEuen, for one, believes Bell Labs is taking the proper first step. “Beasley has great stature in the community. … Everybody wants to get to the truth.”


    Temple, Tourism May Sink Chinese Museum

    Xiong Lei and Ma Guihua*
    *Xiong Lei and Ma Guihua write for China Features in Beijing.

    NANJING, CHINA—City officials have asked one of the country's most decorated research institutions to relocate a science museum already under construction. The surprising request has pitted scientists at the Nanjing Institute of Geology and Palaeontology (NIGP) against community leaders and triggered a heated debate over urban redevelopment.

    NIGP, created in 1951 at the dawn of the modern Chinese state, has never had a public outlet to showcase its fossils and other discoveries. So, scientists were elated last June when city authorities, after a 4-year review, approved plans for a $3.6 million, three-story museum on land near NIGP's offices, research labs, library, and collections. Ground was broken in December 2001, and NIGP officials say the project is about 40% complete.

    But their joy turned to sorrow on 22 February when a vice mayor of Nanjing told them to suspend construction. The official said the museum would obstruct the view of the Cock Crowing Temple, a 1400-year-old Buddhist nunnery and tourist attraction that sits on a nearby hill. Last month, city officials unveiled three designs intended to boost tourism by enhancing the temple vista. One would convert NIGP's two oldest buildings into a park, and another would tear down the library and specimen buildings to create more open space. All would uproot the museum and splinter the institute's lush campus.

    “We were very much surprised to hear this,” says NIGP Professor Jin Yugan. “It's not right for the government to make the decision [to halt construction] without first consulting the Chinese Academy of Sciences [CAS, which operates NIGP] or the institute. After all, the project has gone through all legal formalities.” He and five other academicians fired off a letter asking city officials to reconsider the renovation. Moving the museum, they claimed, “will affect not only scientific research and popular science, but also international exchange.” The letter has gone unanswered. But another missive—this one from a grade-school student to the local newspaper—has become a rallying cry for those who feel the museum should be moved.

    Craning their necks.

    Buddhist leaders at the Cock Crowing Temple in Nanjing worry that construction of a paleontological museum (top left) could detract from the historic site.


    Indeed, things are not looking good for NIGP. The minutes of a town meeting on 18 April, called to discuss the issue, noted that the museum “will certainly affect the visual effect of the landscape and make the space look more crowded. People from all walks of life have responded strongly to this. It is thereby proposed that the museum should move to a new site and that the construction be stopped.” NIGP officials have raised objections to the minutes, which stand as the only public record of the controversy.

    Nunnery officials say that they are watching the debate from the sidelines because the land belongs to the state, although they support the city's proposal. “It's not our idea. It's the government who wants to do it,” says Abbot Lian Hua. At the same time, a nun in the abbot's office notes that moving the museum would provide the temple with “more green land, a pool to set free captive fish, and more room for citizens to worship and rest.”

    That's not how the local scientific leadership sees it. Yan Shouning, head of the Nanjing Branch of CAS, says that the nunnery shares blame with the media. “The municipal government supports the museum,” he insists. “But [it must accommodate] the different opinions from the local religious circle and the press.” Zhou Xuebai, vice mayor in charge of city construction, declined to comment on Yan's analysis of the dispute; he says that he is merely “coordinating the matter” by acting as a mediator.

    Prominent researchers hope to break the deadlock by appealing directly to the central government. The day after the 18 April meeting, 33 academicians of CAS and the Chinese Academy of Engineering in Nanjing signed a letter to top Chinese leaders urging the government “to care more about scientists and the future of the country's research institutes than tourism or local development.” That letter has so far gone unanswered, however, leaving institute officials wondering about the museum's fate.


    Can Chimps Ape Ancient Hominid Toolmakers?

    Gretchen Vogel

    As anyone with a weakness for pistachios knows, eating nuts can be a lot of work, but the rewards are worth the effort. The high-fat, high-protein foods are also a favorite of humans' closest living relatives, chimpanzees. In the tropical forests of West Africa, chimpanzees are especially avid nutcrackers, spending hours patiently using stone or wooden hammers to break open the tough shells of Coula, Panda, and other nuts. That behavior, studied for decades by primatologists (Science, 25 June 1999, p. 2070), now may also shed light on how early hominids began to make and use tools.

    On page 1452, primatologist Melissa Panger and archaeologist Julio Mercader, both of George Washington University in Washington, D.C., with primatologist Christophe Boesch of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, present one of the first research reports on chimpanzee archaeology—a description of stone pieces they dug up at a chimp nutcracking site in the Taï forest in Côte d'Ivoire.

    Scientists have watched enough chimps to know that these fragments were created by accident, whereas many early hominid artifacts were clearly intentionally shaped. But the researchers argue that the chimps' leavings bear some resemblance to some of the simplest artifacts left by hominids millions of years ago—although other anthropologists disagree. In any case, says Mercader, the chimp assemblage raises the possibility that scientists could identify sites where ancient hominids, like the chimps, used unmodified stones as tools—something that so far hasn't been spotted in the archaeological record.

    The work “opens new ways of looking at some of the oldest human sites,” Mercader says. It may also deepen understanding of ape behavior. “We now have a way to detect and trace ape culture back in time,” he says.

    Fruits of their labor.

    Chimpanzees use stone tools to break nutshells, leaving shattered stone pieces (top) behind.


    Written observations of chimpanzee nutcracking date back to Portuguese explorers in the early 1600s. Mercader and Panger teamed up with Boesch to see whether they could uncover evidence of even earlier nutcracking. To have a baseline with which to compare perhaps earlier finds, the team excavated a site around the remains of a recently deceased Panda tree. Boesch, with his wife Hedwige, had observed chimpanzees cracking nuts there for 2 decades.

    The team members identified six wooden anvils around the tree where chimpanzees had cracked nuts. As they dug, they found a wealth of stone pieces, evidently broken off as the chimpanzees pounded their hammers on the nuts. Some pieces, the team claims, resemble some of those found at certain early human sites, with sharp edges and signs that they had been broken more than once.

    The authors—and other anthropologists—emphasize that the chimpanzee site does not resemble classic early human toolmaking sites, where there is clear evidence that the inhabitants used sophisticated flaking techniques to detach stone slivers, used as cutting tools, from larger “cores.” But the chimpanzee data, coupled with the wealth of behavioral observations, might help researchers interpret some of the more ambiguous sites containing fewer cores, Mercader argues.

    The work shows that chimpanzees can leave a definite record of nutcracking, says archaeologist Jeanne Sept of Indiana University, Bloomington. The description “should encourage archaeologists to examine Paleolithic assemblages more closely” for signs of ancient nut feasts, she says.

    Stanley Ambrose of the University of Illinois, Urbana-Champaign, points out that because chimpanzee and hominid hands are different, early hominids probably had different tool-using skills. However, sophisticated tools appear suddenly in the archaeological record about 2.5 million years ago, so additional study of chimp sites might help researchers detect ancient assemblages that represent earlier steps in toolmaking. “It is a short step from accidentally producing sharp-edged flakes and cores to discovering their utility for cutting and chopping,” Ambrose says.

    But several paleoanthropologists, including Ambrose, were not impressed by some of the similarities the researchers found between the chimp stone fragments and those of early hominids. For example, the team notes that at both the chimp site and at three early hominid sites, the stone pieces were chiefly small and large pieces were rare. But that's not surprising, says Ambrose, because there were no naturally occurring large stones available in at least one of the ancient sites.

    Paleoanthropologist Tim White of the University of California, Berkeley, finds that “what they have excavated is utterly unsurprising. … Even the ‘simplest’ Oldowan sites are fundamentally different” from those of the chimpanzees. He notes that the chimpanzees show no evidence of selecting stone for its material properties aside from weight.

    The original goal—finding evidence of ancient chimpanzee nutcracking—will take much more digging, says Mercader. Anthropologist Frédéric Joulian of the École des Hautes Études en Sciences Sociales in Paris, who has also analyzed chimp and human nutcracking sites in and near the Taï forest, agrees. Separating chimp from human or prehuman activity, he warns, will not be an easy nut to crack.


    Theorists Doubt Claims for Perfect Lens

    Konstantin Kakaes*
    *Konstantin Kakaes is a writer in Paris.

    A spat has broken out in the normally calm world of optics over whether it is possible to make a perfect lens. Two years ago, physicist John Pendry of Imperial College in London predicted that a strange class of optical materials, known as negative index media, could make a lens that focuses all the parts of a light wave, even those that normally decay. But now, two different groups of researchers are attacking Pendry's conclusions.

    When light crosses a boundary between two materials, it changes speed; because it changes speed, it bends. The “index of refraction” of a material is a measure of how much it bends a beam of light. In 1968, Russian physicist Victor Veselago used Maxwell's equations—the basic laws governing electricity and magnetism—to predict that in certain specialized materials the refractive index can be negative, with the result that light bends in the opposite direction. Veselago's speculation appeared to have been confirmed last year, when a group at the University of California, San Diego (UCSD), made a “metamaterial”—a microscopic lattice of circuit boards imprinted with copper “split ring resonators” and wire strips (Science, 6 April 2001, p. 77)—said to display a negative index of refraction for microwaves.
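    Veselago's point can be made concrete with Snell's law, n₁ sin θ₁ = n₂ sin θ₂: if n₂ is negative, the refracted angle changes sign, so the beam emerges on the opposite side of the normal. A minimal numerical sketch (the material values here are arbitrary, chosen only for illustration):

```python
import math

def refraction_angle(theta1_deg, n1, n2):
    """Refracted angle from Snell's law: n1*sin(theta1) = n2*sin(theta2)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

# Light entering glass (n = 1.5) from air at 30 degrees bends toward the normal.
ordinary = refraction_angle(30.0, 1.0, 1.5)

# In a hypothetical negative-index medium (n = -1.5), the angle flips sign:
# the ray bends to the opposite side of the normal.
negative = refraction_angle(30.0, 1.0, -1.5)
```

    The mirrored refraction angle is what lets a flat slab of negative-index material act as a lens at all, which is the starting point of Pendry's "perfect lens" argument.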

    Their work was spurred by Pendry's calculation that these negative index media have an added bonus: They amplify so-called “evanescent waves.” In most materials, parts of a light wave decay—the evanescent wave—and this ultimately limits the clarity of a lens. Pendry's insight was that these waves do not decay as usual when light is refracted negatively.

    Weird stuff.

    Researchers say this “metamaterial” can refract microwaves the wrong way.


    But some researchers think such a phenomenon is too good to be true. A team led by Prashant Valanju at the University of Texas, Austin, says in the 6 May issue of Physical Review Letters (PRL) that Veselago himself made a mistake in the direction of a light ray in a fundamental diagram. That purported error casts doubt over all subsequent research with these materials. Negative refraction “would violate two basic laws of physics: that no signal can travel faster than light, and that causality must be obeyed,” says Valanju.

    In the 20 May issue of PRL, Nicolas Garcia and Manuel Nieto-Vesperinas of Spain's Higher Council for Scientific Research in Madrid claim that it's Pendry, not Veselago, who's in error. For the evanescent waves to be sufficiently amplified, they say, the energy density in the material would have to be infinite—a physical impossibility. Valanju thinks the UCSD group did not see negative refraction in its metamaterial but rather “diffraction effects.”

    “Whatever our experiment was,” says David Smith of the UCSD team, “the [critics] wouldn't be happy,” because it conflicts with Valanju's theoretical predictions. Pendry stands by the UCSD data. He believes that Valanju errs in calculating the velocity of light in these negative index media and that the objections to the “perfect lens” are largely emotional. Pendry and Smith are submitting another paper to PRL that they believe answers Valanju's theoretical criticisms. But new experiments—at UCSD and elsewhere—may be the only way to bring this debate into sharper focus.


    Scientists Wary of New Academy Reforms

    Vladimir Pokrovsky and Andrei Allakhverdov*
    *Vladimir Pokrovsky and Andrei Allakhverdov are writers in Moscow.

    MOSCOW—A revolution appears to be under way at the Russian Academy of Sciences (RAS)—but it's unclear whether this is a genuine transformation of Soviet-style management at the country's research behemoth or a cynical attempt to thwart real reform.

    At the RAS general meeting last week, academy members approved a sweeping overhaul that would merge several of the disciplinary fiefdoms, stripping power from top officials on RAS's governing board, the presidium. The academy's leadership portrays the reorganization—creating nine divisions out of the existing 18—as a way to steer more funding to the cream of its roughly 400 institutes. However, others view it as shuffling chairs on the deck of the Titanic.

    In either case, observers agree that the academy has indeed hit an iceberg in the form of President Vladimir Putin. At a meeting of his top advisers last March, Putin declared that the state would no longer distribute research funding as a kind of welfare but instead focus it on several unnamed priority directions. That would be a radical change for RAS, which since the Soviet collapse has fiercely defended its system of doling out crumbs to each scientist, rather than conducting merit-based competitions. In the meantime, the unknown fraction of scientists who actually perform research has had to subsist on tiny Russian grants or team up with foreign labs.

    The new system, which incorporates Putin's thinking, could strengthen areas such as mathematics that once commanded respect worldwide but have since lost scores of top minds to emigration. Merging RAS's two mathematics divisions, says Guriy Marchuk, who until 1991 served as president of RAS's Soviet predecessor, could resurrect the discipline. A single division will now be responsible for funding much of Russia's mathematics, with explicit instructions to funnel more money to the elite and eliminate redundant projects, says Gennady Mesyats, deputy to RAS president Yuri Osipov.

    If the presidium had confined itself to these mergers, its reforms might have won broad acclaim. But Mesyats also used the meeting to announce that the new divisions will be divided into sectors, each of which would elect members to the academy. These new barriers between RAS scientists could lead to new research-stifling fiefdoms, says Alexander Krasovsky, an academician at the Military Aviation Technical University in Moscow. “Instead of freeing the academy of the swollen administrative machinery, the reform has forged new links in the managerial chain,” he says. And that, some fear, is akin to adding subdivisions in a scientific Potemkin village.


    World Health Body Fires Starting Gun

    Richard Stone

    CAMBRIDGE, U.K.—The privileged few who study one of the world's most notorious viruses now have an unfamiliar luxury: boundless time. On 18 May, the World Health Organization's (WHO's) top decision-making body approved a recommendation to delay destruction of the world's two known stocks of smallpox, held under tight guard in Russia and the United States. And, to the surprise of many at last week's meeting of the World Health Assembly (WHA) in Geneva, anticipated calls for a new destruction date failed to materialize.

    A year ago, WHO was poised to approve incineration of the stocks—the last known samples of live virus after the disease was eradicated from the wild—by the end of 2002. But the 11 September attacks, followed by the anthrax-tainted letter campaign, heightened fears that smallpox could be resurrected from clandestine stocks or, less plausibly, diverted from sanctioned stocks. Those disturbing scenarios prompted WHO's governing board last January to recommend extending the virus's stay on death row. The reprieve could permit Russia, the United States, and collaborating countries to develop modern diagnostics, safer vaccines, and drugs against the disease (Science, 15 March, p. 2001).

    Not dead yet.

    Last week's WHA decision paves the way for research on live smallpox virus for the next several years.


    WHA's imprimatur allows this loosely coordinated program to shift into high gear. “For scientists, it's really good news,” says Antonio Alcami of the University of Cambridge, U.K., a mousepox expert and WHO adviser. He notes that potential smallpox studies—part of a batch of biodefense projects that a U.S. National Institute of Allergy and Infectious Diseases panel will review for funding next month—could now proceed with confidence that any promising vaccines or drugs they turn up could be pitted against live virus.

    Indeed, smallpox researchers may have more breathing room than expected. Last January, China's Permanent Representative to the United Nations in Geneva, Sha Zukang, implored the agency to set a new date for destruction (Science, 25 January, p. 598). China backed off this demand at the WHA meeting. According to Lev Sandakhchiev, director of the Russian smallpox repository in Koltsovo, this “may mean that we have another 5 to 7 years [of research] ahead of us.”


    Accounting Error Leads to Funding Drought

    Julia Day*
    *Julia Day is an intern in the Cambridge, U.K., office of Science.

    CAMBRIDGE, U.K.—A major British research funding agency has canceled an entire round of grants, worth $19 million, in an attempt to fend off a cash crisis. Last week's decision by the Natural Environment Research Council (NERC) has infuriated scientists in fields ranging from atmospheric and polar sciences to freshwater biology. “The long-term damage will be to the career structure of young scientists” who find themselves without a project this year, says Ekhard Salje, head of earth sciences at Cambridge University.

    NERC is one of seven agencies that channel government money into academic research. Its current woes stem from a failure in its new accounting system and overspending on staff salaries last year. In a statement last week, NERC announced that the cruel double whammy, its own doing, has forced it to save $28 million this year, although NERC has asked the government to contribute $8.5 million to lessen the blow.

    Loss of talent?

    Cancellation of NERC funding may force Britain's young scientists abroad.


    The agency will find most of the savings by canceling its first of two rounds of 3-year research grants planned for 2002, forcing researchers on as many as 50 projects to seek funding elsewhere. Aspiring grantees must now wait until December for the next round. “It was a regrettable decision that was not taken lightly,” says David Brown, NERC's director of science programs. “If there was any other course of action we would have taken it.” He says that other programs, such as NERC's small grants, studentships, and prestigious fellowships, are unaffected.

    Researchers are dismayed by the lost opportunity and the major blow that it will deal to departments that rely heavily on NERC money, says Salje. Many of the students in his own department at Cambridge, he notes, are funded through the standard grants program. “We won't be able to educate the next generation of young scientists,” he says. In some cases, labs in other countries will benefit from NERC's accounting error. Ph.D. student Markus Geisen of the Natural History Museum in London was to lead a research project on a micropaleontology grant this summer but says he now plans to skip over to Germany for a short-term contract researching coccolith biology.

    Brown says that NERC will seek more money for the December round of grants if it receives a flood of strong proposals. However, paleontologist Jeremy Young of the Natural History Museum, who was hoping to employ Geisen, doesn't know what to expect come December. “The competition … will be very high,” he predicts. “It is going to cause absolute chaos.”


    Reversals Reveal Pitfalls in Spotting Ancient and E.T. Life

    1. Richard A. Kerr

    Analyses of a martian meteorite sparked a search for geological markers that record the existence of life from long ago—and perhaps from far away—but mounting failures point up the difficulties

    Today, life seems easy enough to recognize. Much of it is green and grows, some of it walks or slithers, and even that mold on the bathroom wall all too obviously reproduces. But a few billion years hence, what could be said about today's life? What lingering traces—a smudged imprint in rock, an oddly composed bit of organic matter, or distinctively imbalanced isotopes—might show that life existed eons before?

    For almost 2 centuries, paleontologists wrestling with that sort of question have pushed the earliest known life back in time, first using bone and shell, then wormy squiggles in the mud and vanishingly small fossils. And in the past few years, egged on by a claim for traces of life in a 4.5-billion-year-old rock from Mars, researchers have explored new kinds of biomarkers—molecules and isotopes—in very ancient rocks. But interpreting both new and old kinds of markers has proven more complicated than many had hoped, and the results have sparked several heated debates.

    In this issue of Science, for example, two geologists challenge a startling claim for the first signs of life on Earth: that the skewed isotopic composition of bits of graphite in rock from an island off Greenland shows that life existed 3.85 billion years or more ago, when huge, globe-sterilizing impacts were still battering the planet. The debate highlights the growing realization that as analyses become ever more high-tech, relying on tinier samples and subtler traces, it becomes more important to understand the environment in which a presumed biomarker formed. “Know the rock” is the new catchphrase.

    As a result, many of the arguments over early-life claims center on geology. Researchers in paleontology and the burgeoning field of astrobiology are learning, or relearning, the lessons of geological context. Those lessons are essential not only in analyzing carbon and other isotopes but also in searching for microfossils and worm tracks on Earth and in seeking subtle biosignatures in the martian meteorite. “We had a very optimistic view of how easy it was going to be to recognize the signs of life,” says meteoriticist Harry McSween of the University of Tennessee, Knoxville. “We have a lot of work to do.”

    A big claim from a small beginning

    The latest controversy concerns a claim for the oldest signs of life on Earth. In 1996, geochemist Stephen Mojzsis, now at the University of Colorado, Boulder, and his colleagues analyzed bits of graphitic carbon from a patch of rock from the small island of Akilia, off the coast of southwest Greenland. Previous studies had suggested that the rock is a sedimentary banded iron formation (BIF) and at least 3.85 billion years old. Using an ion microprobe, Mojzsis found that 20-micrometer spots of graphite encased in micrograins of this formation were strongly depleted in carbon-13, the heavier stable isotope of carbon.

    Bands of contention.

    The lighter stripes in this rock are either sediments with signs of earliest life or inscrutable volcanics.


    At the time, that looked like a promising biosignature. Life, in particular relatively sophisticated photosynthesizing organisms, preferentially incorporates the lighter isotopes of carbon. And BIFs are composed of particles that settled to the bottom of the sea, where organic matter might collect; that carbon might have survived in the form of the graphite bits. Mojzsis and his colleagues concluded that they had “strong evidence for life” by 3.85 billion years ago—400 million years earlier than previously thought.
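    Geochemists conventionally express this kind of depletion in delta notation, comparing the carbon-13-to-carbon-12 ratio of a sample to that of an agreed standard (VPDB for carbon) and reporting the difference in parts per thousand:

    ```latex
    % Standard delta notation for carbon isotope ratios.
    % R = {}^{13}\mathrm{C}/{}^{12}\mathrm{C}; the reference standard is VPDB.
    \delta^{13}\mathrm{C} =
      \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1 \right)
      \times 1000\ \text{\textperthousand}
    ```

    Because photosynthesizing organisms favor the lighter isotope, organic matter they produce typically shows strongly negative values, roughly −20‰ to −30‰, whereas most inorganic carbon sits near 0‰. It was depletion of this biological magnitude in the Akilia graphite that Mojzsis and colleagues read as a biosignature.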

    Such a provocative claim prompted renewed interest in the backyard-size chunk of Akilia rock. Geologist Christopher Fedo of George Washington University in Washington, D.C., and geochronologist Martin Whitehouse of the Swedish Museum of Natural History in Stockholm remapped the geology of the 2-kilometer-long island and analyzed the elemental composition of the rock in question. On page 1448, they argue that the Akilia BIF is no BIF at all. “The green bands look identical to green rocks that surround it,” says Fedo. “The trace-element composition of these things looks nothing like a BIF.” Instead, they see a magnesium-rich volcanic rock repeatedly kneaded by tectonic forces and injected by quartz-rich fluids to form the banding. “The layering is clearly not sedimentary,” Fedo says.

    Mojzsis disagrees. The Akilia outcrop isn't a classic BIF, he says, but it is a quartz-rich sedimentary rock injected by magnesium-rich magma to form the banding. Most of Fedo and Whitehouse's elemental analyses are of the intruded rock and therefore irrelevant, he says, and the rest are consistent with the quartz-rich rock—which harbors the isotopically light carbon—being sedimentary.

    Despite that defense, researchers familiar with Greenland geology now tend to reject a sedimentary origin for the Akilia rock. “Fedo and Whitehouse provide strong evidence that … none of the layering can be considered sedimentary,” says geochemist Balz Kamber of the University of Queensland in Brisbane. That “suggests that the isotopically light carbon cannot be proven to be biogenic.” Field geologist Minik Rosing of the Geological Museum of the University of Copenhagen says he finds Fedo and Whitehouse's arguments “very convincing.”

    More contested life signs

    A more tangible milestone in the record of life on Earth is the earliest known microfossil, ascribed to 3.5-billion-year-old blue-green algae in rock from Australia; their discovery was announced by paleontologist William Schopf of the University of California, Los Angeles, in 1993. But in March, micropaleontologist Martin Brasier of the University of Oxford and colleagues challenged that finding, too.

    Two views.

    A microfossil (right) becomes a blob with more depth of focus.

    CREDIT: M. BRASIER ET AL., NATURE 416, 73 (2002)

    Instead of being a piece of a sunny, shallow sea floor, the siliceous rock bearing the fossils formed in the dark roots of a sea-floor hot spring, said Brasier. The microscopic squiggles therefore are not the remains of photosynthesizing blue-green algae but lifeless jumbles of organic matter, Brasier argues. Schopf admits that he got the geological context wrong, but he insists that the rock preserves some kind of hot-spring microbe, just not blue-green algae. Brasier doesn't have “the experience looking at Precambrian microfossils,” says Schopf.

    Yet another landmark, the oldest sign of animals, came under attack in February (Science, 15 February, p. 1209). Squiggly grooves supposedly cut in mud by burrowing worms 1.1 billion years ago—half a billion years before the previous record—were redated to 1.6 billion years old. That strained credulity—especially because some researchers had begun to think that the squiggles resembled mud cracks more than worm tracks.

    While these milestones are being questioned, another promising technique for spotting ancient life turns out to be “more complicated” than previously thought, as the technique's originator says. By a Herculean analytical effort, geochemists Brian Beard and Clark Johnson of the University of Wisconsin, Madison, had managed to measure the tiny enrichment of the light isotope of iron caused by bacteria (Science, 4 December 1998, p. 1807); inorganic processes didn't seem able to do it. But it turns out that this fractionation isn't a definitive sign of life. “There are a lot of chemical processes that can fractionate iron isotopes,” says geochemist Ariel Anbar of the University of Rochester in New York state. “The trick will be using some more context to pull out a biosignature.”

    Mars, again

    Perhaps the biggest disappointment in the search for biomarkers has come from astrobiology's most famous exhibit: martian meteorite ALH84001. At its premiere in 1996, this chunk of rock was said to contain four kinds of apparent biosignatures: organic matter, carbonate minerals, magnetite grains, and actual bacterial microfossils. After 5 years of study, only nanometer-scale grains of magnetite—indistinguishable from those made by some bacteria— remained (Science, 22 December 2000, p. 2242). Now even the magnetite is under heavy fire.

    At the March Lunar and Planetary Science Conference in Houston, soil mineralogist D. C. Golden of Hernandez Engineering and NASA's Johnson Space Center (JSC) in Houston and his colleagues reported that they had used heat to break down iron-rich carbonates and create magnetite grains that bear a striking resemblance to those of ALH84001. According to Golden, they even bear the distinctive faceting previously known only in biogenic magnetites. But geologist David McKay of JSC, leader of the group that originally proposed the ALH84001 biosignatures, didn't “see anything that would change our minds. Clearly, more work needs to be done in this area.”

    Soon enough, McKay got his wish. In the 14 May issue of the Proceedings of the National Academy of Sciences, meteoriticist Edward Scott and microscopist David Barber of the University of Greenwich, U.K., reported that a dissection of ALH84001 at the nanometer scale, using transmission electron microscopy, shows magnetite growing and filling voids with the same crystallographic orientation as that of the surrounding carbonate. This suggests to them that the shock of a meteorite impact vaporized pockets of ALH84001's iron-rich carbonates. Then, the iron was redeposited as magnetite, which tracked the structural orientation of the remaining carbonate. “Biogenic sources should not be invoked for any magnetites,” they write.

    Creative crash?

    An impact on Mars formed the gray bands—and, possibly, lifelike magnetite—in this slice of martian meteorite.


    Such setbacks are reminding researchers that life usually doesn't leave a unique trace; in many cases, what organisms do, inorganic chemistry can too. “It's not enough to say, ‘Here's a biomarker that organisms produce,’” says meteoriticist Ralph Harvey of Case Western Reserve University in Cleveland. “You have to say, ‘Here's why it can't be produced other ways.’ That's a much bigger burden.”

    And it requires a major commitment to detailed, interdisciplinary geological study, adds geologist Roger Buick of the University of Washington, Seattle. It's all too easy to “fly into a place knowing very little about the geological context, grab one sample, perform [a] particular scientific trick on it, and write a presto paper,” he says. But the recent rash of contested claims shows that over time, the scientific method is doing its job. “People who do have the geologic skills to reinvestigate some of these claims are doing what should have been done at the beginning,” says Buick. “I'm pleased it's happening.”

    The burden of understanding geological context will weigh most heavily on astrobiologists. “If the specialists cannot agree on the quality of evidence from terrestrial [Akilia] rocks,” asks Queensland's Kamber, “what hope is there to agree on evidence from tiny meteoritic fragments or returned samples?”

    But there is some hope, researchers agree, as evidenced in recent earthly successes. Copenhagen's Rosing has found Greenland rock 3.7 billion to 3.8 billion years old—from just after the bombardment—with isotopically light carbon that everyone seems to agree really did start out as a sediment. And individual complex molecules unique to terrestrial eukaryotic microorganisms have been found preserved in 2.5-billion-year-old rock, 300 million years before the first suspected eukaryotic fossils (Science, 25 June 1999, p. 2112). With the impetus from ALH84001 and a commitment to the interdisciplinary investigation of geologic context, biosignatures could work, researchers say. “I don't believe any of the evidence from the martian meteorite,” says geochemist George Cody of the Carnegie Institution of Washington's Geophysical Laboratory in Washington, D.C., “but it's been the biggest boon for space science. It got us thinking.”


    Marine Researchers Hope to Sail Off Into the Unknown

    1. David Malakoff

    Researchers and officials from 20 nations gathered in Paris last week to discuss how best to spark a new, cooperative age of global marine discovery

    PARIS—A volcano 2500 meters high and 20 kilometers across would be a major terrestrial landmark. But nobody knew this lava behemoth, its flanks teeming with new and unusual sea life, even existed until last month, when marine researchers surveyed a slice of the Pacific sea floor 1000 kilometers north of New Zealand. It was first spotted during a 22-day mapping voyage by the research vessel Tangaroa that turned up more than 50 new subsea peaks in a 24,000-km2 area—a “stunning” haul, says expedition member Ian Wright, a marine geologist at New Zealand's National Institute of Water and Atmospheric Research.

    If marine researchers meeting here last week* get their way, such voyages of pure discovery will soon be an international priority. A panel assembled by the U.S. National Academy of Sciences, on orders from Congress, invited nearly 100 researchers and legal experts from 20 nations to discuss a proposed global ocean exploration initiative that would send marine scientists where they have never been before. The goal: to spark cooperative projects “that may not test a specific hypothesis but would search for new knowledge,” says marine law expert John Norton Moore, a panel member and director of the University of Virginia's Center for Oceans Law and Policy in Charlottesville.

    That open-ended approach would be something of a break from current practice. The high cost of oceanographic research and a desire for long-term data have pushed marine scientists to pursue tightly focused studies that return repeatedly to well-researched areas. Rare are the expeditions that hark back to an earlier era, when marine explorers set out without knowing exactly where they were going or what they would find. As a result, exploration advocates say, marine scientists remain centuries behind their terrestrial colleagues in efforts to map and understand their realm. But a new generation of sensors, submersibles, and data-crunching systems could close the gap, they argue. “Now is the time to revive the exploration mode,” says Marcia McNutt, head of the Monterey Bay Aquarium Research Institute in Moss Landing, California.

    Such ideas received a mostly warm reception from the multinational group at the meeting. “Exploration is important because it yields surprises, and surprises are the spice of life,” said Victor Smetacek, a plankton ecologist at Germany's Foundation for Polar and Marine Research in Bremerhaven.

    But the enthusiasm was tempered by a range of concerns, and participants didn't agree on such key details as how much a global exploration effort might cost, who would participate, and who would set priorities. Some participants worried about how funding agencies might rate research that isn't based on a testable hypothesis, and how developing nations could justify supporting projects that promise few immediate benefits. Others warned that explorers could face potentially hazardous legal waters, as coastal nations become increasingly reluctant to let foreigners operate off their shores. The challenges “make international genome research look simple compared to ocean exploration,” says Su Jilan, a physical oceanographer at China's Second Institute of Oceanography in Hangzhou and chair of the United Nations' Intergovernmental Oceanographic Commission.

    Rising opportunities.

    Scientists aboard New Zealand's Tangaroa (top) discovered a submerged, 2500-meter-high volcano this month.


    The complexity, however, didn't faze biologist Shirley Pomponi, vice chair of the academy panel and director of research at the Harbor Branch Oceanographic Institution in Fort Pierce, Florida. At the end of the 3-day meeting, Pomponi pronounced herself pleased—and a bit surprised—by the support shown for the concept. She's also confident that researchers could work out acceptable arrangements. “There's a consensus that an international effort is a good idea, and that's a strong start,” she said.

    U.S. scientists buoyed hopes by reporting progress in overcoming concerns that exploration is “dilettante science.” Two years ago, despite worries that funding for exploration might take funds away from mainstream marine science, a presidential panel led by McNutt recommended that the government start spending $75 million a year on exploration (Science, 6 October 2000, p. 25). The report did spark change, although funds haven't yet started flowing at that level. It prompted the National Oceanic and Atmospheric Administration (NOAA) to set up a new exploration office, which will have spent nearly $30 million by the end of this year on select expeditions to everything from shipwrecks to deep seamounts, and on efforts to draw accurate seabed maps and create databases of sea life. “We have shown that we can draw new money in for exploration,” says Craig McLean, the program's director. McNutt's report also caught the eye of Representative James Greenwood (R-PA), who wrote the legislation requesting the academy study.

    Exploration advocates have also swayed views at the U.S. National Science Foundation (NSF), which once expressed qualms about unfocused marine research. It now says that exploration has its place. “Exploration advances the breadth of knowledge, while basic research advances the depth of knowledge,” says James Yoder, who heads NSF's ocean research programs.

    But getting governments on board isn't enough. Workshop participants said that academia and journal editors must also be persuaded to reward explorer-scientists with tenure and publications. “We need to address how we compare [traditional scientists] with people who explore—what are the criteria for successful exploration?” asked Rene Drucker-Colin, vice chancellor of Mexico's Autonomous National University and head of that nation's Academy of Sciences. Whatever the criteria, funders should be prepared for failure, says Michael Meredith of the British Antarctic Survey in Cambridge: “There are going to be times when you don't find anything interesting.”

    Would-be ocean explorers will also need to convince some developing nations that exploration is in their national interests, says Muthukamatchi Ravindran, head of India's National Institute of Ocean Technology in Chennai. Some, he notes, may be suspicious that richer nations will use exploration as a cover to prospect for exploitable mineral or biological resources, or develop sea-floor maps for military uses. And past failures to share data fully may poison current efforts to gain access to some waters. “Exploration cannot be seen as an invasion [of national waters] with science as the spearhead,” warns NOAA's McLean.

    One way to defuse tensions, McNutt and others argue, is to make exploration data freely available, ideally in real time. Richer nations could also provide poorer partners with useful products—such as sea-floor maps of national waters—and technologies that would allow homegrown studies. “Technology transfer will be key,” says Temel Oguz of the Middle East Technical University in Erdemli, Turkey. “In order to explore, I need new technology.”

    How to set such sharing rules, however, remains an open question. One idea floated at the meeting was to establish a formal committee under the auspices of the United Nations to set priorities and perhaps funnel funding. Such a high-level body, said Marta Estrada of Spain's Ocean Sciences Institute in Barcelona, would “help researchers leverage extra funds from their own governments.” There was also support for a less bureaucratic arrangement, with nations and institutions cooperating as their interests overlap. Others argued against any structure that would complicate existing efforts to negotiate the sharing of ship, satellite, and submarine time.

    Researchers had plenty of suggestions for exploration priorities. Many agreed that mapping of unknown areas—currently 90% or more of the ocean—will be essential. “You can't separate mapping from exploration—it's where you start,” says Larry Mayer of the University of New Hampshire, Durham, who showed off cybermapping tools that allow users to “fly” through three-dimensional seascapes. There was also strong support for targeting the 20-million-km2 Southern Ocean. “It has a pretty good claim to being the least explored place on Earth,” says Meredith.

    Some biologists, meanwhile, argued for going deep and getting small. “In general, the deeper we go the less we know—and the bigger the animal, the more we know,” said Annelies Pierrot-Bults of the University of Amsterdam. Others emphasized the need to augment the “snapshots” taken by traditional exploration missions with long-term monitoring data collected by satellite and buoy systems.

    The job of sifting through and organizing these ideas now falls to the 15-member academy panel, led by marine seismologist John Orcutt of the Scripps Institution of Oceanography in La Jolla, California. It hopes to weigh in on these and other issues—such as whether the U.S. should build a dedicated exploration vessel—by the end of the year, with a public report due in February 2003. That would allow the panel to feed into a major U.S. government report on ocean issues due next year, as well as ongoing European efforts to craft a new ocean research agenda.

    Meanwhile, New Zealand scientists are settling down to analyze the reams of data from their recent expedition. One of the first tasks, says Wright, “will be to give appropriate names” to the newly discovered mountains. Ocean exploration advocates hope that such mass-naming exercises soon become commonplace.

    • *International Global Ocean Exploration Workshop, sponsored by the Ocean Studies Board, U.S. National Academy of Sciences, 13 to 15 May.


    Can Space Station Science Be Fixed?

    1. Andrew Lawler

    A blue-ribbon panel reporting next month has the task of setting U.S. station research priorities. But will anyone listen?

    If U.S. space station research were a child, it would be in foster care. For the past decade, it has been a victim of unrealistic expectations, abuse, and neglect. Politicians touted fanciful space-derived cures to justify spending billions of dollars to build the station, outside researchers disparaged the effort as worthless, and NASA managers blatantly “borrowed” its allowance.

    But these three groups—lawmakers, agency officials, and the scientific community—now say it's time for a realistic and credible research plan to give the much-maligned program a shot at a stable adolescence. Next month, a star-studded, 20-member scientific panel appointed by NASA Administrator Sean O'Keefe will propose a firm list of priorities for research aboard the orbiting lab now under construction. To be effective, the panel must make a case convincing enough to win the backing of a cash-strapped NASA, a parochial Congress, and a fed-up research community—a tall order.

    The timing may be right, however. A new team of NASA managers is on board, lawmakers are growing anxious, and the broader science community is becoming involved through the panel, chaired by Columbia University endocrinologist Rae Silver. This makes it an auspicious moment to fix a host of internal and external troubles plaguing the biological and physical research program at NASA, which is centered on the space station. Those troubles include chronic shortages—of money, flight opportunities, research equipment, scientific rigor, and respect—and an excess of disciplinary rivalry. “This is the time to change the system,” says Joan Vernikos, former head of NASA life sciences.

    One of the panel's toughest tasks may be to overcome entrenched skepticism among researchers, who have seen many previous recommendations come to naught. “We've revisited the same problem for the past 10 years,” says Martin Fettman, a veterinarian at Colorado State University in Fort Collins and longtime space researcher who declined to serve on Silver's panel. “I'm tired of participating on committees whose reports collect dust.”

    Weighty advice

    Silver acknowledges that her panel—which bears the unwieldy name of the Research Maximization and Prioritization Task Force—is a Johnny-come-lately. Before plunging in, she gathered 24 kilograms of studies and reports by NASA and advisory panels of the National Research Council (NRC), weighing them on her bathroom scale. “Some areas of research have been reviewed repeatedly for 15 years,” she notes.

    But no panel before hers has had the charter to pick priorities across a broad array of areas, from how cells grow in microgravity to the creation of new drugs. It is looking at eight different fields, including fundamental biology and physics, biomedical research, biotechnology, combustion science, fluid physics, materials science, and space product development. O'Keefe says he is looking for the highest possible scientific payoff. “We want to use [the station] for a purpose, not for a photo op,” he told reporters last month. And he insists that he doesn't want the panel to feel bound by any constraints.

    In limbo.

    The space station is flying, but its science mission remains largely earthbound.


    The panel does, however, face one major hurdle: time. Due to report in mid-June, it has had less than 6 weeks and only three meetings lasting 2 days each to do its work. Task force members have done a “meta-analysis” of earlier studies and now are laying out overall priorities, Silver said last week at the final meeting.

    Given the time constraints and the breadth of the panel's task, some researchers are skeptical about the outcome. Critics note that many members are distinguished but are largely unfamiliar with station research. “This is not their area of expertise,” grouses one station researcher. Others worry that the preponderance of biologists on the panel will mean that discipline takes precedence over less glamorous areas. “Biology should play an important role but not to the exclusion of physics,” says David Weitz, a Harvard University physicist who recently conducted experiments on the physics of colloids in space.

    Silver notes that her panel's job—which she says is being accomplished largely by e-mail to accommodate the busy schedules of members—is simply to come up with the best research plan. Given the intense concern on Capitol Hill that the station investment might yield little science, O'Keefe will be under pressure to abide by the panel's priorities. But scientists such as Fettman are only too aware that earlier studies have been ignored.

    Broken promises

    The research community's skepticism stems from a long history of broken promises. Although the space station has largely been justified by NASA managers and politicians for its science potential, the actual research program has consistently taken a back seat. During the late 1990s, for example, NASA managers diverted nearly $1 billion that was intended to build scientific facilities to plug gaps in funding for basic construction of the station.

    Matters came to a head last year, when mounting cost overruns forced the agency to halt work on station elements that would enable the number of crew members to double from the current three to the planned six astronauts. Because NASA estimates that 2.5 people are needed to maintain the facility, crew members would have little time to conduct research. Scientists such as Fettman cried foul, Congress complained, and European, Japanese, and Canadian partners formally told the U.S. government that such cuts were unacceptable.

    Heavy lifting.

    Columbia's Rae Silver weighs previous station reports.


    O'Keefe, a station critic in his previous job as deputy at the White House Office of Management and Budget, was unmoved. He insisted that the number-one priority is to bring station costs under control. The Silver panel is intended to assure the scientific community, international partners, and Congress that research still matters—even in this time of belt-tightening.

    Scientists hoping to conduct experiments in the microgravity of low-Earth orbit welcome the assurance but say they are tired of waiting for the promise of the space station to become reality. Flying an experiment on the space shuttle or the Russian space station Mir during the 1990s was typically an ordeal requiring years of patience, reams of paperwork—and the realization that they might not be able to do it twice. “No one in their right mind expects results from one experiment,” says Vernikos. The station was expected to solve problems of limited slots, long waits, and short flights.

    Those hopes keep receding, however. Take the case of the station's centrifuge. Innumerable NASA advisers have urged the agency to make a large centrifuge a central and early part of the research effort. By providing a range of gravitational levels, a centrifuge could supply vast amounts of new data on the effect of gravity on plant and animal development. Work would range from physiological studies on mice to biotechnology applications. But it has repeatedly been put on hold. In the mid-1990s, NASA turned over responsibility for the centrifuge to Japan in exchange for launching Japan's research module on the space shuttle. Now the centrifuge—which a decade ago was slated to go up this year—is not scheduled for launch until 2007.

    George Sarver, who heads the station's biological research project at NASA Ames Research Center in Mountain View, California, admits that the delay is “frustrating for everyone.” But, he says, at this stage no amount of money could prepare major facilities like the centrifuge much before 2007. Others add that with a crew of three, research would still get short shrift.

    NASA's new biology and physical sciences chief, Mary Kicza, puts the best face on that harsh reality. She notes that a July shuttle mission will be devoted to research and that a host of experiments—from biomedical studies of astronauts to fundamental physics—are well under way. But crew time is limited, the shuttle flight is the last of its kind planned, and other options—such as Mir—don't exist.

    Most ominously, researchers are giving up. “On a percentage basis, the number of [space] researchers is going down at a fairly impressive clip,” notes Ursula Goodenough, a biologist at Washington University in St. Louis and a longtime skeptic of space-based research. Task force member Mary Jane Osborn, a biologist at the University of Connecticut, Farmington, adds that “if there are long delays, obviously the community will evaporate.” Even former astronauts who have successfully flown experiments say they have had enough. “We're balancing on the edge of a razor all the time,” says Millie Hughes-Fulford, a biochemist at San Francisco's Veterans Administration Medical Center, who has focused on gene expression in microgravity—and is one of the few scientists to have done a whole series of experiments. “People like me are saying, ‘Maybe I should do NIH [National Institutes of Health] [grants] instead.’”

    Silver lining?

    Such fatalism will be difficult for the Silver panel to counter. But researchers say the task force has the power to define the coming debate over station science. The hardest part will be to lay out priorities. Kicza's office has traditionally tried to slice its disciplinary pie in an equitable manner. But that makes little sense now, given budget and flight constraints. Says Vernikos: “We're not NIH or NSF [National Science Foundation]—and we never will be. We should focus on very specific problems.”

    Big payoff.

    NASA's Sean O'Keefe wants a revamped research effort.


    But there may be ways to do more with less. Automation may ease the problem of crew time. The European Space Agency already has taken a lead in developing equipment that requires minimal attention in orbit. Hughes-Fulford's latest experiment, for example, will make use of the European Biopack, a largely automated centrifuge, slated to fly in July on the shuttle. Ames's Sarver says NASA is pushing automation to reduce the need for scarce crew time. For example, a $35 million cell-culture unit slated to fly by 2006 will take samples and preserve and refrigerate them without astronaut help, and a built-in microscope will enable researchers on the ground to have more control over the experiment. But automation can create more costs and problems—for instance, by greatly increasing the need for space-consuming freezers to store samples, notes Tim Hammond, a biologist at Tulane University in New Orleans.

    If an independent research organization, perhaps modeled on the Space Telescope Science Institute in Baltimore, Maryland, were established, it might also provide better direction with less bureaucracy for station science. An NRC panel already has suggested that course, but the proposal has bogged down at NASA. Some congressional members who support Houston's Johnson Space Center fear it could usurp that organization's control over station operations, and others worry that an independent institute would have less political pull in winning an annual budget. Kicza has a team studying whether the agency should back the idea.

    Increasing the amount of crew and research time in orbit would obviously be the best solution for station users. One proposal is to attach the space shuttle to the station for long periods, providing room for a much larger station crew. The shuttle could also be flown on an annual mission dedicated wholly to science, like the one slated for July. Both ideas would require much more money than O'Keefe has on the books. And Congress, despite its concern over the health of station research, may prove extremely reluctant to fork out more money for an effort already mired in red ink.

    That reality is what makes researchers such as Fettman skeptical that NASA will accept Silver's recommendations any more readily than it has her predecessors'. The problem, he says, is not with the research program. “What needs changing,” he says, “is NASA's response. If they listened, wouldn't that be too cool?”

    Still, many researchers say the panel can get the ball rolling by carefully spelling out what needs doing immediately and what can wait. Recommendations for more automation, greater use of the shuttle for crew and experiments, and a limited research repertoire—along with more money—will help, they add. That will involve pain for those researchers left behind at the launch pad, but it might give the program with an unhappy childhood a shot at maturity.


    NSF Moves With VIGRE to Force Changes in Academia

    1. Dana Mackenzie

    A new NSF program aims to make mathematics more user friendly for students—but it's not for every university department

    Three years ago, the National Science Foundation (NSF) set out to change how the United States trains its mathematicians. This spring, it sent a handful of universities their first report cards. The verdict: The culture of math departments is changing, but even the most prestigious departments will find themselves out in the cold if they don't do it NSF's way.

    The vehicle for change is an ambitious system of grants designed to fix a badly leaking educational pipeline. From 1992 to 1999, the number of U.S. mathematics majors dropped by 23%, and the number of math graduate students fell by 20%. When those students emerged with diplomas, many could not find jobs. In 1995 and 1996, the unemployment rate among new math Ph.D.s topped 10%, rivaling that of Michigan steelworkers. Many graduate students postponed entering the job market, causing the median “time-to-degree” to rise to nearly 7 years—two more than most math departments consider optimal.

    In 1998, a blue-ribbon panel chaired by retired General William Odom concluded that the talent drain was threatening not only the nation's leading position in mathematics worldwide but also its ability to innovate in related disciplines. In response, NSF launched its Grants for Vertical Integration of Research and Education (VIGRE) program. The program has become a showpiece in NSF's most rapidly growing division, which hopes to boost VIGRE's budget next year by 62%, to $26 million.

    VIGRE is based on the assumption that declining enrollments were caused by a lack of mentoring, according to Don Lewis, who as head of the math division created the VIGRE program. “Students viewed mathematics courses only as training for mathematics teaching,” he says. Fields such as genomics, cryptology, and image processing were awash in jobs ideal for trained mathematicians, he adds, but students and postdocs didn't know about them. The answer, to Lewis, was obvious: Help universities do a better job of training and informing students about nontraditional careers. Or, as Bill Rundell of Texas A&M University in College Station puts it: “VIGRE makes us do what we should have been doing all along.”

    To make that happen, Lewis proposed a complete reform in the culture of mathematics, a shift away from the subject's traditional sink-or-swim individualism. VIGRE would encourage departments to transform themselves into mathematical villages, blurring the traditional boundaries separating research from instruction, pure from applied math, and advanced from beginning students. The goal was to help students at every level prepare for the next step in their careers.

    SUMS up.

    This undergraduate math symposium at Brown University is one of the innovations fostered by the VIGRE program.


    “I wanted to have the mathematical research environment be more like the environments in other sciences,” Lewis says, in which undergrads, graduate students, and postdocs all rub shoulders in the laboratory. “Biology does this much more effectively than mathematics” does, adds Philippe Tondeur, who succeeded Lewis as division director. “Students know [their subject] is exciting. They know where it leads.”

    A dozen universities received the first round of VIGRE grants in 1999, and 19 more have been added in two subsequent rounds. By mathematical standards, the new grants are lavish, ranging up to $1 million a year. The great majority of the money goes to graduate traineeships and postdoctoral fellowships, for which only U.S. citizens qualify. In return, departments must demonstrate progress in a variety of ways: increased enrollments, research papers, or posters by undergraduates; decreased “time-to-degree” for graduate students; improved outreach to other sciences and to nonacademic employers; and effective mentoring.

    The grants have been a godsend for middle-tier universities, and not merely because of the money. For math professor Loyce Adams of the University of Washington, Seattle, the biggest difference is that the department can now afford to have postdocs. “They give seminars on topics not covered in the normal curriculum,” she says.

    Skip Garibaldi, a VIGRE-supported postdoc at the University of California, Los Angeles, has both a teaching and a research mentor. The latter is helping him write a book with Jean-Pierre Serre, a winner of the Fields Medal (the mathematical equivalent of a Nobel Prize). “You can't get much above Serre, and you can't get much below me,” says Garibaldi. “So that's an example of vertical integration.”


    VIGRE has fueled a big jump in the overall number of postdoctoral positions in mathematics. Those positions provide a welcome safety valve for new Ph.D.s seeking jobs in a recession-weakened job market. The VIGRE grants have also created grassroots opportunities. Last month, Brown University hosted a first-ever math symposium organized by and for undergraduates. The event, called Brown SUMS (Symposium for Undergraduates in the Mathematical Sciences), drew 75 students from across New England—a number that organizer Miguel Daal says was depressed by unseasonably warm weather. Even so, the conference gave the students an opportunity to network and to learn about the growing variety of mathematical research that goes on outside of math departments.

    As NSF's Lewis intended, VIGRE has spurred many departments to reorganize their programs. The applied mathematics department at the University of Colorado, Boulder, whose grant was renewed this year, has created “tetrahedral research groups” that include faculty, postdocs, graduate students, and undergrads. “I think the idea was one reason we got the award in the first cycle,” confesses James Meiss, the director of the program. Gareth Roberts, a VIGRE postdoc at Colorado for 2 years before getting a tenure-track job at the College of the Holy Cross in Worcester, Massachusetts, says that “the tetrahedra gave me a nice research feeling. In graduate school, I felt it was just me and my adviser, and mostly me. The VIGRE program put oomph into my research.”

    If anyone seems disgruntled with VIGRE, it is faculty members at top-tier schools. The Massachusetts Institute of Technology, for example, has failed in repeated attempts to get a grant. Lewis's own school, the University of Michigan, Ann Arbor, failed the first time around, perhaps because it had already implemented some of the changes envisioned. “No good deed goes unpunished,” wryly notes Al Taylor, a principal investigator on the (successful) second application. Some Ivy League universities chafed at having to recruit more U.S.-born graduate students, with weaker mathematical backgrounds, because foreign students were ineligible for traineeships. “At some schools, it was a rarity to have an American student,” Lewis says.

    But no department had quite as traumatic an experience as the three (out of 12) that lost their grants this spring after a scheduled third-year review: Carnegie Mellon University, Rutgers University, and the University of California, Berkeley. The rebuff to Berkeley, America's largest producer of mathematics Ph.D.s, came as a shock to many mathematicians.

    At a VIGRE workshop earlier this month in Reston, Virginia, Berkeley's chair, Calvin Moore, struck a defiant note, saying that mentoring isn't everything. “One of our goals is to cultivate self-reliance,” he said. “Berkeley is a tough place. Berkeley is not a warm and fuzzy place. Students react to this atmosphere: Some thrive, and others don't.” At the same time, he acknowledged, VIGRE has made a difference: The number of undergraduate math majors soared from 170 to 475 in 3 years.

    Henry Warchall, a VIGRE management officer at NSF, says there's no mystery about why some schools lost their grants. The criteria are spelled out in the eight questions the grantees have to answer during the site review, he says, and those who don't have good answers get the hook. “There's no hidden agenda,” Warchall said.

    That level of micromanagement may be exactly the problem for some departments. “I've talked with people in other departments who resent the social engineering aspect of VIGRE,” says David Targan, the principal investigator on the Brown University VIGRE grant. “But I think it's perfectly reasonable. If we want to advance as a field and get support from the NSF, we need to do it in certain ways.”


    New Mapping Project Splits the Community

    1. Jennifer Couzin

    A new type of genomic map, known as the haplotype map, promises to speed the search for elusive genes involved in complex diseases. But some geneticists question whether it will work

    It will upend the practice of medicine and save lives the world over, asserts Francis Collins, the director of the National Human Genome Research Institute (NHGRI). “This is the single most important genomic resource for understanding human disease, after the sequence,” he says. “We needed it yesterday, as far as I am concerned.” Indeed, his institute is leading the National Institutes of Health's (NIH's) $40 million down payment on the project.

    But to many biologists, it's an untested concept hardly worthy of the $110 million it will consume. “The whole thing is a big waste of taxpayer money,” says Joseph Terwilliger, a statistical geneticist at Columbia University in New York City.

    Welcome to the haplotype map, a new type of genome map that, depending on where you look, is eliciting exuberance or exasperation.

    Proponents of the map, who include Collins, Eric Lander of the Whitehead Institute Center for Genome Research in Cambridge, Massachusetts, Panos Deloukas of the Sanger Institute in Hinxton, U.K.—and just about every big name from the Human Genome Project—say it's the best hope for tracking down the genes involved in common diseases, such as heart disease and cancer, that claim so many lives and have eluded most gene-hunting strategies. As an added perk, they say, it provides a tantalizing glimpse at human evolution and migrations.

    On the other side, many researchers—mostly population geneticists—say the map's promise is inflated and it may fail to deliver, adding that its proponents are forging ahead with too little data on how best to proceed. “I think there was lots of good, pure, scientific motivation for wanting to do this haplotype map,” says Nancy Cox, a human geneticist at the University of Chicago who's not involved in the project. Still, she adds, “people fell into a trap that we should have been smart enough to avoid: that was saying, ‘If we have this, we'll get the genes for complex common disorders.’ I think it's premature to know how much that will help.”

    The most virulent critics even contend that the HapMap, as it is known, is nothing more than a full-employment enterprise for the big sequencing labs that might find themselves out of a job once the human genome is completed. “Is the HapMap being designed to satisfy the need [to get] another big project going?” asks Kenneth Weiss, an anthropologist and geneticist at Pennsylvania State University, University Park. Terwilliger minces no words. “The haplotype map is just an excuse for Lander and others to keep funding coming for large factories set up for the sequencing of the genome”—an idea Lander dismisses as sour grapes.

    Despite these misgivings, the HapMap is well under way. In addition to NIH's $40 million, Canada recently kicked in CAN$9.5 million to fund one of its own researchers. Still some $60 million short, NHGRI will award the first grants this fall.

    Unexpected architecture

    The idea for the HapMap emerged from the gradual realization that the genome has a surprisingly structured architecture. Rather than being thrown together randomly, thousands of DNA bases—as well as patterns of single-base variations found among them—line up in roughly the same order in many different people. Much as an interior decorator must choose among a handful of kitchen designs, a person's genome carries just one of a few potential blocks of DNA in any defined space on a chromosome. Each DNA block—or kitchen design—is a haplotype.

    Mark Daly, a computational biologist at Whitehead, stumbled upon these blocks a couple of years ago as he was scouring chromosome 5 for susceptibility genes for Crohn's disease. Stretches of DNA from 129 families affected by the disease kept falling out in one of about four patterns, Daly wrote in the October 2001 Nature Genetics.

    Not everyone's smiling.

    A plan to study haplotypes in these populations is prompting angry words.


    His Whitehead colleagues David Altshuler, Lander, and others wasted no time investigating. In a paper published last spring in Nature, they described common haplotypes in a group of northern Europeans and a Nigerian population called Yorubans, arguing that haplotypes varied somewhat between the two—a vestige of evolutionary history. A paper in Science (23 November 2001, p. 1719) by a group led by David Cox of Perlegen Sciences in Mountain View, California, found just a few haplotypes on chromosome 21.

    Presented at a Cold Spring Harbor Laboratory meeting last spring, these findings initially generated skepticism. They began to win converts, however, and by summer, Collins made the HapMap—a map identifying haplotypes across the genome—a priority and was rustling up money and collaborators. In March, NIH began soliciting grants for the first of the map's two stages. The first will be to create haplotype maps of the genomes of three populations: those of northern and western European ancestry, Japanese and Chinese, and Yorubans. In the second stage, scientists will test whether the haplotypes they find in those very large populations also appear in about 10 others.

    From SNPs to haplotypes

    Haplotypes gained popularity as scientists realized that mining the genome was much tougher than expected. A few years ago, geneticists heralded single-nucleotide polymorphisms (SNPs) as the long-sought answer to finding genes involved in complex diseases. Unlike cystic fibrosis or Huntington's disease, say, which are caused by a single genetic defect, complex diseases seem to arise from an uncertain mix of multiple gene mutations and environmental factors. Because each gene likely has a subtle effect, its “signal” is exceedingly tricky to detect.

    SNPs are simply places where the genomes of different people vary by one base; they occur roughly once every 300 bases. A genome blanketed with millions of these markers, the reasoning went, would enable researchers to compare which SNPs predominate in people with a certain disease. Those that do may point to a disease mutation on nearby DNA that is inherited along with the SNP.

    Over the past few years, a public-private consortium has deposited some 2.4 million unique SNPs in a public database. Geneticists everywhere are using them. But parsing millions of SNPs to find a few implicated in disease can be challenging—and can cost a fortune. Genotyping, or determining which base an individual carries at a given SNP, costs about 30 cents per SNP, making large disease gene studies unaffordable.

    Instead of searching for individual SNPs, the HapMap focuses on patterns of a few SNPs that define each haplotype. If a specific haplotype is more common among those with a certain disease, the mutation linked to that disease should be on that same block of DNA. The HapMap is slated to have about 1/20 the number of SNPs in any given region of the genome that a full SNP map would, says Altshuler, vastly reducing the number that need to be analyzed and saving a truckload of money in the process.
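    The block-and-tag logic behind that saving can be sketched in a few lines of code. This is a toy illustration only: the five genotype strings, the SNP positions, and the choice of "tag" positions below are all invented for the example, not drawn from any real haplotype data.

    ```python
    # Each string is one (invented) person's alleles at eight SNP positions
    # in a single chromosomal region. Because whole blocks of DNA tend to be
    # inherited together, identical strings represent the same haplotype.
    people = {
        "A": "ACGTACGT",
        "B": "ACCTATGT",
        "C": "ACGTACGT",
        "D": "ACCTATGT",
        "E": "TCGTACGA",
    }

    # Group identical SNP patterns: each distinct pattern is one haplotype.
    haplotypes = {}
    for person, snps in people.items():
        haplotypes.setdefault(snps, []).append(person)

    for hap, carriers in haplotypes.items():
        print(hap, "carried by", carriers)

    # Since the blocks travel together, a small subset of positions that
    # still distinguishes the haplotypes ("tag" SNPs) carries the same
    # information. Here, positions 2 and 7 (0-indexed) already separate all
    # three patterns, so genotyping 2 SNPs instead of 8 suffices -- the same
    # idea behind the HapMap's roughly 20-fold reduction in SNPs typed.
    tags = [2, 7]
    tagged = {"".join(snps[i] for i in tags): carriers
              for snps, carriers in haplotypes.items()}
    print(tagged)
    ```

    Note that in this toy data, persons B and D fall into the same haplotype while E stands apart, mirroring the kind of grouping the article describes; real tag-SNP selection works over population samples rather than a handful of strings.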

    If all goes as planned, researchers working with a finished HapMap could do what Whitehead's Daly is now doing. After finding that one haplotype in the region he was studying is about twice as common among those afflicted with Crohn's, he's now scouring that 200,000-base-long stretch of chromosome for a gene or genes that boost susceptibility to the disease.

    Building blocks.

    Persons B and D share a haplotype unlike the other three, characterized by three different SNPs.


    Since a workshop last summer in Washington, D.C., roughly 40 scientists and ethicists from institutions across the United States, Europe, and Asia have been solidifying plans for the map. They have settled on the populations to study in the large-scale map: the two that the Whitehead group identified in last year's Nature paper, and Asians. The game plan, says Lisa Brooks, program director for the genetic variation program at NHGRI, is to pool the DNA of roughly 50 unrelated people in each population and then search for haplotypes across the entire genome.

    Before NHGRI had worked out this plan, the Whitehead scientists launched a minihaplotype map study to test the idea, analyzing haplotypes in European Caucasians, Japanese and Chinese, Yorubans, and African Americans. In a paper published online by Science this week, they report that their HapMaps, which focused on 0.4% of the genome, showed that between six and eight SNPs can identify a haplotype and that a given stretch of DNA includes one of three to five haplotypes.

    To save money and time, HapMap researchers will start with SNPs in the public database, selecting those that provide good coverage across the chromosomes. Then they'll whittle that set down to those found in at least 10% of their sample populations. If there are large gaps on the chromosomes where no SNPs appear, they'll seek additional ones for the HapMap—something Lander predicts will be necessary.

    But many questions remain unresolved. A key issue is how many SNPs need to be mapped—in other words, how dense does the map need to be? “A less dense map saves some money [but] takes a little bit of deliberation” to construct, says Pui-Yan Kwok, a geneticist at the University of California, San Francisco. But “some people can't wait” for this deliberation process to unfold. Kwok, who supports the map, is concerned that those assembling it will “use a brute-force genomic approach to just throw a bunch of markers together” when the benefits of that strategy are poorly understood.

    But will it work?

    Regardless of its final design, will the HapMap actually turn up susceptibility genes? Many say the answer just isn't known. “There's virtually no empirical evidence” that haplotypes will help in the search for genes behind common ailments, says Jonathan Pritchard, a population geneticist at the University of Chicago.

    The success of the HapMap hinges on one critical, hotly debated assumption: that common mutations are behind most common diseases. In other words, many people susceptible to, say, colon cancer share specific mutations that increase their likelihood of developing the disease. Because the haplotypes being mapped will include only common SNPs, the disease mutations associated with these SNPs will, by definition, be equally common.

    But another possibility, particularly espoused by population geneticists, is that common diseases arise from combinations of rarer mutations. And the HapMap is unlikely to reveal these. “There are innumerable diseases where nothing has been found yet, which in itself is an argument that that [common disease-common mutation] hypothesis has got to be rejected,” says Kenneth Kidd, a population geneticist at Yale University in New Haven, Connecticut.

    Lander concedes that the number of mutations that are common is unknown, but he believes “it's reasonable to guess there are many hundreds.” And “even if the map is only useful for the common mutations, that'll be just fine,” he says. Despite his confidence, however, some supporters “have this nagging fear” that there may be problems with the map, says Kwok.

    Proponents also add that there's no way to know whether the $110 million gamble is worth it without, well, taking the gamble. David Valle, a pediatrician and human geneticist at Johns Hopkins University, argues that even an apparent flop confers critical information. The HapMap “will be the proof or disproof” of the common-versus-rare-variants theory, he says.

    HapMappers will need about $60 million or so more to vindicate their theory. Collins is optimistic that the money will flow in. China, Britain, and Japan have all expressed interest, he says.

    HapMap proponents are also passing the hat among the drug companies that supported the SNP initiative. “It's under consideration,” says a GlaxoSmithKline spokesperson. But these companies may be reluctant to gamble: Many are now scrambling to preserve revenue as they lose patent protection on popular drugs. Furthermore, says Zenta Tsuchihashi, group leader in pharmacogenomics at Bristol-Myers Squibb in Princeton, New Jersey, “until someone really does it, it's a little bit hard to judge the value” of the HapMap.


    Texas Surgeon Vows to Take Next Step in Beating Cancer

    1. Jocelyn Kaiser

    The new director of the National Cancer Institute brings his campaign to translate basic research into saving lives to Washington

    Andrew C. von Eschenbach says he made “a conscious decision” to spend his first 100 days as the new head of the $4.2 billion National Cancer Institute (NCI) “listening and learning.” But don't confuse that behavior with passivity or a lack of confidence. “I'm a surgical personality,” says the 60-year-old urologic surgeon, speaking during a 13 May interview with Science from his office on the Bethesda, Maryland, campus. “You consult, then you operate.”

    As President George W. Bush's pick to run the world's largest cancer research enterprise, von Eschenbach comes to Washington with a bulging budget and a self-imposed mandate to turn those basic research dollars into treatments and cures. The buzzword is translational research, and von Eschenbach, who spent 26 years at the University of Texas M. D. Anderson Cancer Center in Houston, says that he plans to apply that philosophy to everything from proteomics to drug development.

    Relatively unknown among bench biologists, von Eschenbach has made a deliberate effort to meet leaders across many fields since taking the NCI helm in late January. He's already won over some of them to his cause. “My contact with him has been very reassuring,” says Massachusetts Institute of Technology biologist Phil Sharp, one of several prominent basic scientists with whom von Eschenbach has met. But he adds, “[von Eschenbach's] problem, in my opinion, is going to be satisfying this [basic research] part of the NCI community.”

    Many scientists say that they are waiting to see how von Eschenbach operates now that his designated probationary period has ended. “I think we are all concerned that NCI without strong and vibrant leadership can drift into a bureaucratic mode,” says David Nathan, president emeritus of the Dana-Farber Cancer Institute in Boston, although he emphasizes that his comments are not based on anything that von Eschenbach has or hasn't done.

    Von Eschenbach inherits an organization that saw six banner years under Richard Klausner, who overhauled the intramural program and boosted molecular research on cancer before leaving in October 2001. And he will likely get a generous budget in 2003—the president has requested a 12% increase to $4.7 billion, the largest percentage boost of any entity of the National Institutes of Health (NIH) outside of the 57% rise to combat bioterrorism proposed for the National Institute of Allergy and Infectious Diseases.

    Scribbling diagrams on a notepad as he talks, von Eschenbach comes across as energetic and full of ideas. Despite the challenge of running NCI, he says that he must look beyond NIH to achieve his agenda: “NCI can't do everything. We have to partner, collaborate, and cooperate.” Toward that goal, he made it a condition of his appointment that he continue as a leader of the National Dialogue on Cancer, a forum started by the American Cancer Society (see sidebar). A two-time cancer survivor, he has not yet found time to act upon another employment criterion: that he see patients for half a day a week.

    Although few in Washington knew his name until his appointment, von Eschenbach “has been a major force in prostate cancer research,” says gene therapy researcher Richard Mulligan of Harvard Medical School in Boston. After attending Saint Joseph's College in Philadelphia and medical school at Georgetown University, he began a fellowship in 1976 at M. D. Anderson, one of the two largest cancer centers in the country. There, he says, he “quickly appreciated that, without basic science, you're not going to solve the problem of cancer.” Johns Hopkins University cancer biologist Don Coffey recalls how von Eschenbach “came flying up to Baltimore” one day after reading papers written around 1990 by Bert Vogelstein of Hopkins that linked mutations in the p53 tumor suppressor gene to human colon cancer. The two proposed looking for p53 mutations in human bladder cancers, an application of a basic research finding that yielded papers in Science and The New England Journal of Medicine.

    Von Eschenbach's broadest contributions have built on a key clinical observation: Prostate cancer cells tend to metastasize to bone. To investigate this in the lab, von Eschenbach recruited to M. D. Anderson cancer biologist Leland Chung, who showed in the early 1980s that bone cells produce growth factors specific to prostate cancer cells. Von Eschenbach “was very instrumental in bringing forward” the role of such cell interactions in cancer, Coffey says. Subsequent studies have led to phase II and III clinical trials of a prostate cancer therapy that targets bone growth factors.

    In control.

    Andrew von Eschenbach brings corporate management principles to running NCI.


    To support such research, von Eschenbach launched a Prostate Cancer Research Program in 1996 and corralled former President Bush for its board. The program's fundraising was “unusually effective,” says M. D. Anderson president John Mendelsohn, netting over $10 million. The money enabled the university to attract two leading angiogenesis scientists and put the program on the map nationally, he says. The Bushes have a long-standing interest in cancer research, having lost a daughter to leukemia. These interactions, combined with meetings involving then-Governor George W. Bush, “put me in a position to be thought of for other things,” notes von Eschenbach.

    Von Eschenbach says that he hopes to speed progress at NCI by following the same “circular” approach—moving from clinical observation to the lab and back to the bedside—that he used to understand bone-tumor interactions. Anything less is not acceptable, he says. “If all we do is unravel basic mechanisms, they all pile up. … People are dying, and we've got to make a difference in that.”

    Although he remains cautious about his plans, he provided a few details:

    • Translational research. Von Eschenbach defines translational research as going “all the way” from genomics to clinical trials. One of his ideas is to find ways to get translational researchers to draw on NCI's mouse models consortium as a platform for validating drugs. Von Eschenbach also wants to use genomics tools such as gene arrays to look at how a person's “host factors”—such as immune, endocrine, and emotional status—help send a cell on the path to cancer. He has set up a clinical center task force to examine other ideas, including having centers collaborate with state health departments in helping doctors enroll more patients in clinical trials.

    • Intramural program. Von Eschenbach doesn't anticipate a major change in the balance between intramural and extramural research, but he's interested in “integrating” the two by bringing in “small groups” of extramural researchers. He may also follow Klausner's practice of inviting outside scientists for “minisabbaticals.” The division of gene-environmental interactions, led by Joseph Fraumeni, is “an extremely important part of the equation,” he adds.

    • Management. Von Eschenbach, who often quotes management guru Andrew Grove of Intel Corp., compares his job to that of a corporate CEO. “When I look at my portfolio of management and leadership responsibility, there's no one single person created that has the skill sets, the energy, the time.” He hopes to free himself to explore “NCI's role in the larger cancer community” by hiring two senior managers, one as chief operating officer and the other as chief of staff. Al Rabson, currently deputy director, will function as “chief academic officer,” dealing with staff, faculty, and professional societies.

    The “content expertise” will come from NCI's six division heads, von Eschenbach says. High on his recruiting list is filling the vacancy left by the recent departure of Robert Wittes as director of the Division of Cancer Treatment and Diagnosis. Von Eschenbach pledges to be more specific about his plans “in the next 60 to 90 days.” In the meantime, he hopes that basic researchers can “get excited about the fact that I hope to provide some of the leadership that will help them to do what they do best.”


    Tongues Wag as von Eschenbach Keeps Ties to National Dialogue on Cancer

    1. Jocelyn Kaiser

    Three years before Andrew C. von Eschenbach became director of the National Cancer Institute (NCI), he helped create the National Dialogue on Cancer. This amorphous private entity, funded by the American Cancer Society (ACS), brings together VIPs and cancer organizations to talk about how to conquer cancer.

    It also attracts controversy. The Dialogue has been criticized as unfocused and as a closed shop dominated by one sector of the advocacy community. Some detractors have also suggested that von Eschenbach, who stepped down as ACS president-elect when he was nominated to the NCI post, is too cozy with the group.

    As vice chair of its steering committee, von Eschenbach says there's nothing mysterious about the group, despite its meetings behind closed doors. “It's nothing but a forum that allows groups, individuals, organizations, interested parties to … deal with how they might effectively address cancer as a societal problem,” he says. Former President George Bush and his wife Barbara are co-chairs of the Dialogue and have hosted gatherings at their home in Kennebunkport, Maine. The 150-some participants range from celebrities such as CNN talk show host Larry King to politicians, federal officials, biotech executives, and prominent cancer research clinicians.

    Proponents admit that the Dialogue has had a slow start, but they believe it's now poised to make a difference in the war on cancer. “Any organization of leaders has to take time to find its niche,” says participant Charles Balch, executive director of the American Society of Clinical Oncology, who says the group is now “moving from dialogue to action.” Its current plans include finding ways to remove barriers to genomics-based drug development—for example, by developing national standards for tissue banks.

    Texas tandem.

    Von Eschenbach and former President George Bush promote a Dialogue on Cancer.


    The Dialogue's most visible accomplishment is launching a spin-off committee that drafted legislation to rewrite the 1971 National Cancer Act. Senator Dianne Feinstein (D-CA), co-chair of the Dialogue, introduced a version of the bill in February that would increase spending on certain cancer research and prevention programs and create new incentives for companies to develop cancer drugs.

    At the same time, the group has been snubbed by some of those it purports to represent. Some prominent advocacy groups have been reluctant to participate in the Dialogue, partly because they say it too closely tracks ACS's views. In particular, some groups chafe at the society's effort to shift the emphasis from research to public health—such as education campaigns encouraging people to adopt healthier lifestyles and be screened for cancer.

    The Cancer Letter, a Washington, D.C., newsletter, published a series of articles in the past 2 years questioning some of the Dialogue's activities. These included receiving funds from an ACS government contract with the Centers for Disease Control and Prevention (CDC) that the Dialogue used to fund participants' travel and other expenses. The newsletter argued that the Dialogue was essentially lobbying for CDC, which ACS has asserted should have a greater role in the national cancer agenda. ACS officials dispute that and say there is nothing improper about their use of the funds.

    The Bushes are now leading a drive to raise $15 million to bankroll the Dialogue's projects, and von Eschenbach says that NCI will staff some of these activities. But not everyone is pleased by his decision to commit federal resources. “It's something Rick [Klausner, previous NCI director] would never do,” says the leader of one advocacy group. The problem, says the advocate, is that the Dialogue “is really not a shared agenda.”

  15. The Intelligent Noncosmologist's Guide to Spacetime

    1. Charles Seife

    Since Einstein unleashed it on a bemused world, physicists have known that the stuff that shapes our universe is real, earnest, and increasingly useful. This pocket history explains how it came to be an indispensable part of their intellectual toolbox, and an asset to yours

    The reality of spacetime first came crashing home on a little island near the coast of West Africa. On 29 May 1919, during a brief lull in a rainstorm on the island of Principe, Sir Arthur Eddington snapped 16 photographs of a solar eclipse. Those photographs changed our view of space and time. On them, Eddington saw a handful of stars that were in the wrong place.

    “One thing is certain and the rest debate/Light rays, when near the Sun, do not go straight,” Eddington exulted upon his return to England. Just as Albert Einstein had predicted, the sun's gravitational pull had subtly warped the fabric of space and time. Light that passes near the sun is bent, making stars near the sun's edge appear in the wrong positions. Eddington's photos proved that Einstein's bizarre theory about the fabric of space and time was more than mere fantasy. It was a hard reality.

    More than 80 years later, the strange fabric that makes up our universe is more important than ever. Scientists have been devising test after test in attempts to trip up Einstein's description of the nature of space and time, and so far, all have confirmed the strange picture. The fabric of spacetime is real, and scientists can see it ripple and twist. Those undulations contain secrets of the birth and nature of the universe.

    Rubber sheets

    The story of spacetime began in earnest in 1915, when Einstein formulated his general theory of relativity. The equations of general relativity liken space and time to a flexible fabric, something like a rubber sheet. One set of tools that mathematicians use to describe curving and stretchy objects constitutes a field of study called “differential geometry.” Differential geometry allows mathematicians to probe curves and surfaces in space, and to define quantities such as “curvature” and “torsion” that describe their properties. And although a rubber-sheet spacetime seems like an artificial construct, it is a very natural and powerful idea when you have the tools to deal with it.

    Why space and time, rather than just space? Einstein realized that motion in the everyday three dimensions of space (up-down, left-right, and back-front) also affects motion through the fourth dimension, time. For instance, if you move very, very fast in space, your wristwatch will tick very, very slowly in relation to your clock back on Earth. Although space and time have slightly different mathematical properties (our four-dimensional universe has three “spacelike” dimensions and one “timelike” one), they are inseparable. Change your motion through space and you automatically affect your motion through time, and vice versa. So in a mathematical sense, time and space are woven together into a single four-dimensional object.
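    The wristwatch effect described above is quantified by the Lorentz factor of special relativity. As a minimal sketch (constant and function names here are illustrative):

    ```python
    import math

    C = 299_792_458.0  # speed of light in m/s (exact, by definition)

    def time_dilation_factor(v):
        """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2).

        A clock moving at speed v ticks slower than one at rest
        by this factor.
        """
        return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

    # At 60% of light speed the factor is exactly 1.25: for every
    # 5 seconds on Earth, the moving wristwatch ticks off only 4.
    print(time_dilation_factor(0.6 * C))
    ```

    The effect is negligible at everyday speeds (at airliner speed the factor differs from 1 by about one part in a trillion), which is why it took Einstein to notice it.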

    The key equation of general relativity defines the relation between the curvature of spacetime and the energy and matter that are sitting on the sheet. And although it might seem odd to discuss the shape of the fabric of the universe—and a four-dimensional fabric, at that—it makes perfect sense to mathematicians and physicists. It also cleared up some long-standing mysteries.

    For one, it explained where gravity comes from. A heavy object, such as our sun, distorts that spacetime fabric, bending it slightly, like a bowling ball on a mattress. If you place a marble on the mattress, it will fall toward the bowling ball, because of the curvature of the mattress. Likewise, if you place an asteroid near the sun, it will fall toward the sun, because the curvature of spacetime forces it to move in that direction. Shortly after Einstein unveiled it, scientists realized that this gravity-as-curvature-of-spacetime theory explained a mysterious anomaly in the orbit of Mercury. Newton's laws were unable to explain why the small planet's orbit shifted so quickly, and scientists had long been stumped for an explanation. Einstein's spacetime equations, however, diverge from Newton's in regions of strongly curved spacetime, such as those near a heavy body like the sun. That slight discrepancy perfectly explained Mercury's mercurial behavior.

    A handful of scientists, including Eddington, quickly took notice of the new theory and tried to figure out a way to test its consequences. The problem was that, in our solar system, Einstein obviously differed from Newton only very close to our sun—and the sun is so bright that it is hard to observe anything close by. Hence the need to wait for a solar eclipse.

    In 1919, Eddington's celebrated expedition to Principe was the first test of the concept of spacetime. Because our sun bends spacetime and light follows the contours of that fabric, the theory of relativity predicts that the sun must bend light that passes near it, something like a lens. A star whose light passes close to the sun, deep into the dimple that the sun makes in the rubber sheet, should appear in the wrong place. Its apparent position in the sky should be somewhat altered by the gravitational pull of the sun.

    This is, of course, precisely what Eddington saw. As the solar eclipse blotted out the sun, the newly visible stars were not in their proper places. Eddington and his team had seen the curvature of spacetime: They had spotted a “gravitational lens.” Einstein was right. Spacetime was real.
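    The size of the shift Eddington measured follows from general relativity's deflection formula, 4GM/(c²b), for light passing a mass M at closest distance b. A sketch of the arithmetic (using standard values for the constants):

    ```python
    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    C = 299_792_458.0    # speed of light, m/s
    M_SUN = 1.989e30     # solar mass, kg
    R_SUN = 6.957e8      # solar radius, m

    def deflection_arcsec(mass, impact_parameter):
        """Light-bending angle 4GM/(c^2 b), converted to arcseconds.

        This is twice the value a naive Newtonian argument gives;
        the factor of 2 is what the 1919 eclipse expedition confirmed.
        """
        radians = 4.0 * G * mass / (C ** 2 * impact_parameter)
        return math.degrees(radians) * 3600.0

    # Starlight grazing the sun's edge is bent by about 1.75 arcseconds.
    print(deflection_arcsec(M_SUN, R_SUN))
    ```

    An arcsecond is 1/3600 of a degree, so the displacement Eddington had to measure on his photographic plates was tiny, which is why the result was debated before being confirmed by later expeditions.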

    Cosmic mirage.

    By warping the spacetime around it, a massive object can cause light from a more distant star or galaxy to take different routes to Earth. Such “gravitational lensing” can split a single object into two or more images.


    Wrinkles in time and Einstein in drag

    Six decades passed before astronomers spotted another of gravitational lensing's peculiar incarnations. If you have an enormous amount of matter, such as a galaxy cluster, and a distant, bright object is in the right place behind it, astronomers see double. Thanks to the bending effect of the matter in the lens, light from the background object takes several curved paths to Earth. As a result, astronomers see multiple images for a single object (see figure above). In 1979, astronomer Dennis Walsh and his colleagues spotted the first of these cosmic clones: two images of the same bright quasar in the heavens.

    Meanwhile, scientists were studying another consequence of a spacetime fabric: gravitational waves. Gravitational waves are a natural offshoot of the rubber-sheet construction of general relativity. Just as a massive object sitting on the fabric of spacetime creates a dimple, so moving or changing objects, under certain conditions, create wrinkles in the fabric. Those wrinkles, tiny distortions in spacetime, zoom away at the speed of light. Because these gravitational waves carry energy, anything emitting them will lose a tiny bit of its energy.

    The first sign of gravitational waves came from just such an energy loss. Jocelyn Bell, a graduate student at Cambridge University, set the stage for the discovery in 1967, when she found an object that blinked on and off in the sky, emitting incredibly regular pulses of radio waves. Soon thereafter, scientists spotted several similar objects in different regions of the sky and ruled out an artificial origin. Bell had seen the first pulsar, a spinning, burned-out husk of a star that emits a powerful beam of radiation. In 1974, Bell's adviser, Anthony Hewish, won the Nobel Prize for the discovery.

    Barely a month before Hewish got the call from Stockholm, Joseph Taylor, an astronomer at Princeton University, and his graduate student, Russell Hulse, discovered a pulsar that was different from all the others that had been found. Its bursts seemed to be less regular, speeding up and slowing down rather than ticking away with unchanging tempo. Hulse and Taylor realized that they were seeing a binary pulsar—a pulsar orbiting an unseen companion. As the pulsar sweeps out its orbit in space, it zooms toward and away from Earth, making the pulses seem to speed up and slow down, even though the star itself spins with clocklike precision. The pulsar, ticking away like an enormous metronome as it orbited its companion, gave scientists a way to test the prediction of gravitational waves for the first time.

    As the pulsar and its unseen companion dance around each other, they must wrinkle the fabric of spacetime—they must emit gravitational waves. Those gravitational waves carry away some of the stars' energy, slowing them down and causing them to fall inward. Their orbits get shorter and shorter as they get closer and closer together. In 1978, Taylor and his colleagues showed that this was precisely what the binary pulsar was doing. Every year, the pair's orbit took 75 milliseconds less than it did the previous year. The tiny decrease marked the first evidence for gravitational waves. In 1993, Taylor and Hulse won the Nobel Prize for the discovery.

    Another hard-to-measure consequence of spacetime is “frame dragging.” In 1918, physicists Joseph Lense and Hans Thirring realized that spinning massive objects twist spacetime just as colliding massive objects wrinkle it. Astronomers working in a twisting spacetime will have their perceptions, their measurements of the universe, slightly skewed. As a result, a celestial “compass” that provided a perfect measurement of an observer's orientation in relation to distant galaxies would fail if the observer were in a region of twisting spacetime. The compass would seem to slip. If it started out pointing at the center of the Milky Way, it would soon wind up pointing in a different direction.

    Matter matters.

    “Wheel rim” distortions of some galaxies in this Hubble Space Telescope image of cluster Abell 2218 show the influence of unseen mass closer to Earth.


    Indeed, scientists hope to see evidence of frame dragging, also known as the Lense-Thirring effect, by detecting the influence spinning bodies have on gyroscopes—the best approximation we have of a celestial compass. Gyroscopes tend to maintain their orientation, unless twisted spacetime makes them change direction.

    In 1997, two teams of scientists announced that they had seen the Lense-Thirring effect in action for the first time. Both teams used an orbiting x-ray observatory to chart the behavior of disks of hot gas around heavy spinning stars. One team, led by Wei Cui, then at the Massachusetts Institute of Technology, looked at spinning black holes, and the other, led by Luigi Stella of the Astronomical Observatory of Rome, observed spinning neutron stars. Both saw the orientations of the spinning accretion disks wobble. The enormous gyroscopes were rotating relative to the rest of the universe, just as the equations of spacetime predict.

    Although the evidence is not yet conclusive, scientists hope to see the frame-dragging effect directly, thanks to a half-billion-dollar satellite, Gravity Probe B, which is slated to be launched early in 2003. Essentially a fancy gyroscope, the probe will try to sense the subtle frame dragging caused by Earth's spin.

    Scientists also hope to see gravitational waves directly, using a nearly $400 million experiment known as the Laser Interferometer Gravitational Wave Observatory (LIGO). LIGO's two facilities in Washington and Louisiana house sensitive devices meant to detect the subtle squish and stretch of spacetime caused by a passing gravitational wave. The facilities are almost fully operational; scientists are busy shaking down the instruments, trying to isolate the sources of noise. Any day now, they will be taking their first scientific data.

    Nobody knows quite what LIGO will see—possibly nothing at all. The waves it is most likely to spot come from spiraling and colliding neutron stars and black holes, and scientists don't know precisely how much gravitational radiation from these sorts of events is rattling around the universe. But if LIGO does see the signature of a gravitational wave, it will be a tremendous accomplishment; it will give scientists their first direct view of a moving distortion in spacetime. More important, gravitational radiation will become a tool for understanding the black holes and neutron stars in our universe. Charting the heavens with gravity waves as well as light waves may reveal secrets of the universe that traditional astronomy has failed to uncover.

    Spacetime as a tool

    Even without gravitational waves, the subtle curvature of spacetime near enormous objects is helping scientists tackle one of their most pressing astrophysical questions: the nature of dark matter. For the past decade, one incarnation of the warping of spacetime, known as “microlensing,” has been revealing hunks of matter called MACHOs: massive, compact halo objects. Nobody is quite sure what these might be; they could be burned-out stars or brown dwarfs, stars too light to ignite their fusion engines. But whatever they are, they have mass, and therefore they warp the fabric of spacetime. As a MACHO passes in front of a background star, the gravitational pull of the MACHO warps the fabric of spacetime and bends the light so that more of it is focused on Earth. As Earth receives more and more light from the star, the star appears to brighten, and as the MACHO moves away over the course of a few weeks, the star dims, once again, to its original luminosity.
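    The flicker described above has a characteristic shape. For a pointlike lens, the brightening depends only on how close the lens passes to the star's line of sight, measured in units of the lens's "Einstein radius." A minimal sketch of the standard point-lens formula (the function names and sample numbers are illustrative):

    ```python
    import math

    def magnification(u):
        """Point-lens magnification for a source at angular separation u
        from the lens, with u in units of the Einstein radius.

        Standard point-source, point-lens result:
        A = (u^2 + 2) / (u * sqrt(u^2 + 4)).
        """
        return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

    def light_curve(u0, t0, t_einstein, t):
        """Brightening at time t for a lens that passes at minimum
        separation u0 at time t0, crossing an Einstein radius in
        t_einstein -- the smooth, symmetric flicker surveys look for."""
        u = math.sqrt(u0 ** 2 + ((t - t0) / t_einstein) ** 2)
        return magnification(u)

    # A MACHO passing one Einstein radius from a star's line of sight
    # brightens the star by roughly a third.
    print(magnification(1.0))
    ```

    Because the magnification depends only on geometry, not on the star's color, an achromatic, symmetric brightening is the telltale signature that separates microlensing from ordinary variable stars.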

    An international team of astronomers and astrophysicists, known as the MACHO project, has been using telescopes in Australia and the United States to look for these signature flickers in background stars. (There are also several competing groups, such as the aptly named OGLE collaboration.) Since the MACHO project began in 1993, it has spotted hundreds of these microlensing events; nowadays, astronomers spot about one a week. The flood of data is allowing astronomers to chart where the dark matter resides in our galaxy—and may allow them to figure out what it's made of. Because black holes distort spacetime in a slightly different manner from brown dwarfs, astronomers might be able to tell what sort of massive bodies are floating about in the halo.

    Curvature of spacetime can also reveal the presence of invisible matter in galaxies and galaxy clusters by measuring how dramatically their mass reroutes the light from objects behind them. The more dramatic the distortion, the more matter there is and the more tightly packed it is. In mid-2001, scientists at Bell Labs used such large-scale lensing to discover a previously unknown “dark” cluster of galaxies 3.5 billion light-years away (Science, 17 August 2001, p. 1234). But the shape of spacetime holds an even more exciting story than the birth of black holes or the secret of dark matter: It tells us about the nature of the cosmos.

    The fabric of spacetime can be curved locally, like the distortions caused by the sun or by passing gravitational waves, or globally, with a single overall curvature for the entire universe. It's somewhat like the situation on our own planet: Locally, Earth's surface has peaks and valleys, rolling hills and crevasses, and little lumps that affect a small area. But zoom out far enough, and you see that Earth is a sphere, even though the curvature is all but imperceptible across small distances.

    It's the same with the universe as a whole. Locally, spacetime can be flat or rippled; it can even gape with immense, seemingly bottomless pits. The universe as a whole, however, also has a shape. It might be flat, or it might have “positive curvature” like a ball, or “negative curvature” like a saddle. All of these shapes are in four dimensions, of course, so they're very difficult to visualize, even with mathematical training. Nonetheless, the three-dimensional versions—a plane, a sphere, or an enormous saddle—are reasonable approximations of our 4D universe.

    Which universe?

    Depending on how spacetime curves on a large scale, light rays crossing the universe might remain parallel, converge, or drift apart.


    In the past 2 years, scientists have measured the shape of the universe by looking at features in the cosmic background radiation, the ubiquitous hiss of microwave energy left over from a time shortly after the big bang. According to theory, there are spots in that radiation, and the spots have to be a certain size. By measuring the apparent size of those spots in the sky, cosmologists could figure out the shape of spacetime.

    If spacetime were flat, two parallel light rays would stay the same distance apart as they approach us, making distant features look precisely as large as they should. But on a surface with “positive” curvature, approaching rays would move farther apart, making distant objects look bigger; on a surface with “negative” curvature, the beams would get closer together, making distant objects look smaller (see figure above). In 2000, a balloon observatory known as BOOMERANG measured the size of those spots for the first time—and they were precisely as large as expected (Science, 28 April 2000, p. 595). The data were powerful evidence that the universe has no curvature. A year later, another set of experiments, DASI and Maxima, confirmed BOOMERANG's discovery (Science, 4 May 2001, p. 823). Physicists are virtually unanimous: The world is round, but the universe seems to be flat.

    The shape of the universe tells scientists about the mass and energy that sit upon the fabric of spacetime. When astronomers and astrophysicists totaled up the amount of matter in the universe, they discovered that it is only about 35% of what is needed to flatten out spacetime; judging by appearances, the universe ought to be saddle shaped. So cosmologists have concluded that there must be some other energy out there, some other “stuff” that flattens out the fabric of space and time. This is the mysterious “dark energy” that makes the universe expand ever faster and that has become one of the most profound mysteries in modern astrophysics.

    If there's a mystery more profound than the nature of dark energy, it has to be the physics that governed the beginning of the universe. Shortly after the big bang, the fabric of space was rippling with gravitational radiation. Scientists hope that by the end of the decade they will be able to see the remnants of those wrinkles.

    The peculiar squash-and-stretch action of a gravitational wave affects the polarization of light waves—including those that make up the cosmic microwave background. The signature of ancient gravitational waves in the cosmic microwave background is a spiral quality that mathematicians call “curl.” If one drew a map of the polarization of the cosmic background radiation, this curl-type component would look something like little hurricanes. The swirls are an artifact of the forces that brought our universe into being. They would provide the first bonanza of information about an era that has been totally inaccessible.

    Several groups of scientists—including the BOOMERANG and DASI teams—are racing to detect polarization in the cosmic background radiation, but detecting the echoes of the ripples in ancient spacetime is probably beyond the reach of these experiments. We will have to wait until the Planck satellite, due to be launched in 2007, or another future experiment before scientists first see the subtle signature of the warps of ancient spacetime.

    But when they see that signature, scientists will begin to see the marks of creation woven within the weft of the four-dimensional fabric of space and time.

  16. A Foamy Road to Ultimate Reality

    1. Charles Seife

    What is the fabric of spacetime made of? We know spacetime can ripple and curve and twist, but what does its fabric look like on very small scales? Nobody knows for sure, but when physicists find out, it may drastically alter our view of the universe—and of space and time.

    “One thing we know absolutely for sure is that spacetime does not have a precise meaning at very short distance scales,” says Harvard University physicist Andrew Strominger. The problem is that on small scales the fabric of spacetime must be subject to one of the key properties of quantum objects: the Heisenberg Uncertainty Principle, which states that a particle's momentum and position can't both be precisely defined at the same time. A direct consequence of this principle is that the universe is seething with particles and energy—a ubiquitous “quantum foam” that suffuses the cosmos, even in the deepest vacuum.

    On smaller and smaller scales, the fabric of spacetime fluctuates with energy more and more dramatically. When you look at the fabric below a certain scale known as the “Planck length,” the equations of general relativity can no longer describe the fabric. “Spacetime goes haywire,” Strominger says, noting that scientists need to go beyond the theory of general relativity to describe the structure of the fabric itself. “The only real theory we have to discuss things is string theory.”

    String theory—and its generalization, M-theory—replaces pointlike particles such as electrons with higher-dimensional objects, such as strings and membranes. That theoretical shift changes the way objects look on the very smallest scales and gets rid of some troubling problems that make general relativity's equations break down there. It also makes the string theorist's portraits of spacetime look very different from the traditional smooth fabric. “In some examples, if you go to tiny distances, it has a grainy structure of a rather subtle sort,” says physicist Nima Arkani-Hamed, also at Harvard. “It's not like a picture where there's a little lattice, but there's some sense in which there's discreteness on tiny scales.”

    In a sense, spacetime doesn't have any meaning at sizes below the Planck length. If scientists managed somehow to build a powerful enough particle accelerator to see the structure of spacetime below the Planck length, they would hit a brick wall. “When you start to probe spacetime on shorter and shorter distances, you are stymied,” says Arkani-Hamed. “You'd start creating little black holes.” Those black holes would decay, releasing a shower of particles, but once you start making black holes, you get no further information about the structure of spacetime.

    Despite this depressing scenario, theorists are trying to figure out what the nature of spacetime might be. “We haven't put it all together into a unified picture of what happens to spacetime in general,” says Strominger. “But there are very interesting statements one can make about the nature of spacetime at short distances.”

    String theorists don't know what rules apply to our universe, but they are able to analyze how the quantum foam and the fabric of the universe behave under certain situations. In one of these scenarios, says Arkani-Hamed, the concept of space disappears on small-length scales. “When you get to distances shorter than the Planck length, really, fundamentally, there's no space at all,” he says. It's a mind-boggling idea, but in a sense, the idea of space can become redundant; it emerges from more fundamental properties of objects. “Space itself is created out of the interactions of particles.”

    Scientists have cooked up many such possibilities, each of which might or might not apply to our universe. Still, string theorists hope that they will eventually understand which equations truly describe the fabric of spacetime. “Everything is pointing to the fact that spacetime is only an approximate concept,” says Strominger. “It's not the absolutely ultimate concept, and we're desperately struggling to find out what the right language to describe the universe is.”