News this Week

Science  06 May 2005:
Vol. 308, Issue 5723, pp. 770



    A Heavyweight Battle Over CDC's Obesity Forecasts

    1. Jennifer Couzin

    How many people does obesity kill?

    That question has turned into a headache for the Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia: In the past year, its scientists have published dueling papers with conflicting estimates on obesity-associated deaths—the first three times greater than the second. The disagreement, some fear, is undermining the agency's health warnings.

    The bidding on obesity's annual death toll started at a staggering 400,000—the number cited in a CDC paper co-authored by CDC chief Julie Gerberding in 2004. But dissent prompted an internal inquiry, and CDC decided this year to lower the number to 365,000. That was still too high for some CDC analysts, who together with colleagues at the National Cancer Institute (NCI) in Bethesda, Maryland, published a new figure on 20 April—112,000 deaths. The low estimate is spawning other problems, though. A food-industry interest group is touting it as evidence that obesity is not so risky. Even researchers who favor the low number worry that it will lead to complacency.

    After trumpeting the highest estimate a year ago and warning that obesity deaths were poised to overtake those caused by tobacco, CDC officials now say that numbers are unimportant. The real message should be that “obesity can be deadly,” says George Mensah, acting director of CDC's National Center for Chronic Disease Prevention and Health Promotion. “We really add to the confusion by sticking to one number.”

    But some of CDC's own scientists disagree. “It's hard to argue that death is not an important public health statistic,” says David Williamson, an epidemiologist in CDC's diabetes division and an author on the paper with the 112,000 deaths estimate.

    Calculating whether obesity leads directly to an individual's demise is a messy proposition. To do so, researchers normally determine by how much obesity increases the death rate and what proportion of the population is obese. Then they apply that to the number of deaths in a given time, revealing excess deaths due to obesity. Both studies use that approach, but methodological differences produced big disparities between the two papers—one by epidemiologist Ali Mokdad, Gerberding, and their CDC colleagues, published in the Journal of the American Medical Association (JAMA) on 10 March 2004, and the new estimate by CDC epidemiologist Katherine Flegal and colleagues at CDC and NCI, published in JAMA on 20 April.
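The arithmetic both groups describe is, at its core, a population-attributable-fraction calculation. Below is a minimal sketch with invented inputs; the prevalence, relative risk, and death count are illustrative placeholders, not figures from either JAMA paper.

```python
# Sketch of the excess-deaths calculation described above.
# All numeric inputs are illustrative, not from either study.

def attributable_fraction(prevalence, relative_risk):
    """Share of all deaths that would not occur if the obese
    group had the baseline (normal-weight) death rate."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (excess + 1.0)

def excess_deaths(total_deaths, prevalence, relative_risk):
    """Apply the attributable fraction to deaths in a given period."""
    return total_deaths * attributable_fraction(prevalence, relative_risk)

# Illustrative inputs: 30% obesity prevalence, a relative risk of
# death of 1.3 for obese individuals, ~2.4 million U.S. deaths.
print(round(excess_deaths(2_400_000, 0.30, 1.3)))
```

With these made-up inputs the sketch yields roughly 200,000 excess deaths, and small shifts in either the prevalence or the relative risk move the answer substantially, which is exactly why the two papers' methodological choices mattered so much.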

    Feeding on confusion.

    An ad campaign by a food industry-supported group seeks to exploit discrepancies in estimated obesity deaths.


    Both relied on data about individuals' weight and other measures from the National Health and Nutrition Examination Survey (NHANES), which has monitored the U.S. population since the 1970s. The Mokdad group used the oldest, NHANES I. Flegal's group also used two more recent NHANES data sets from the 1980s and 1990s. Her method found fewer obesity-associated deaths—suggesting that although obesity is rising, some factor, such as improved health care, is reducing deaths.

    Other variations in methodology proved crucial. For example, the two groups differed in their choice of what constitutes normal weight, which forms the baseline for comparisons. Flegal's team adopted the definition favored by the National Institutes of Health and the World Health Organization, a body mass index (BMI) of 18.5 to less than 25. The Mokdad team chose a BMI of 23 to less than 25; this changed the baseline risk of death, and with it, deaths linked to obesity.

    In their paper, the Mokdad authors said they selected that narrower, heavier range because they were trying to update a landmark 1999 JAMA paper on obesity led by biostatistician David Allison of the University of Alabama, Birmingham, and chose to follow Allison's methodology. (CDC spokesperson John Mader said that Mokdad and his co-authors were not available to be interviewed.) “There's no right answer” to which BMI range should be the “normal” category, says Allison. He felt his choice was more “realistic,” and that expecting Americans to strive for even lower BMIs might be asking too much. But that relatively small difference in BMI had a big effect on the estimates: Had Flegal's team gone with the 23-to-25 range, she reported, the 112,000 deaths estimate would have jumped to 165,000.

    The scientists also diverged sharply in how they tackled age. It's known that older individuals are less at risk and may even benefit from being heavier: A cushion of fat can keep weight from falling too low during illness. And young obese people tend to develop more severe health problems, says David Ludwig, director of the obesity program at Children's Hospital in Boston.

    Flegal's group took all this into account by assigning risks from obesity to different age groups. Stratifying by age meant that when Flegal turned to actual death data—all deaths from the year 2000—she was less likely to count deaths in older age groups as obesity-related.
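A hedged sketch of why stratifying matters: applying age-specific relative risks to each age group's own deaths gives a very different total than applying one pooled risk to all deaths. Every number below is invented for illustration; none comes from either JAMA paper.

```python
# Illustrative-only sketch of age stratification in the
# excess-deaths calculation. Risks, prevalences, and death
# counts are invented, not taken from either study.

def paf(prevalence, relative_risk):
    """Population-attributable fraction for one stratum."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (excess + 1.0)

# (age group, deaths in stratum, obesity prevalence, relative risk)
strata = [
    ("25-59", 300_000, 0.32, 1.8),   # younger: obesity raises risk sharply
    ("60-69", 500_000, 0.30, 1.3),
    ("70+", 1_600_000, 0.22, 1.0),   # oldest: little or no excess risk
]

# Stratified: each age group contributes only its own excess deaths.
stratified = sum(d * paf(p, rr) for _, d, p, rr in strata)

# Unstratified: the youngest group's risk is applied to every death.
total_deaths = sum(d for _, d, _, _ in strata)
pooled = total_deaths * paf(0.30, 1.8)

print(f"stratified: {stratified:,.0f}  pooled: {pooled:,.0f}")
```

With these made-up inputs the stratified total comes out at only about a quarter of the pooled one, mirroring the direction, though not the exact size, of the gap between the Flegal and Mokdad estimates.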

    Allison concedes that in retrospect, his decision not to stratify by age was a mistake. And it had a big impact on the estimates. “Very minor differences in assumption lead to huge differences in the number of obesity-induced deaths,” says S. Jay Olshansky, a biodemographer at the University of Illinois, Chicago.

    Olshansky, Allison, and Ludwig published their own provocative obesity paper in The New England Journal of Medicine in March. It argued that U.S. life expectancy could begin decreasing as today's obese children grow up and develop obesity-induced diseases, such as diabetes and heart disease (Science, 18 March, p. 1716).

    But Olshansky now says that in light of Flegal's recent paper on obesity deaths and a companion paper that she, Williamson, and other CDC scientists authored in the same issue of JAMA, his life expectancy forecasts might be inaccurate.

    Heavy duty.

    Being obese in childhood increases the likelihood of health problems such as diabetes later on.


    The companion paper, led by CDC's Edward Gregg, examined how much cardiovascular disease was being driven by obesity. The findings were drawn from five surveys, most of them NHANES, beginning in 1960 and ending in 2000, and they dovetailed with the conclusions in Flegal's 112,000 deaths paper. All heart disease risk factors except diabetes were less likely to show up in heavy individuals in recent surveys than in older ones. That suggests, says Allison, that “we've developed all these great ways to treat heart disease” such as by controlling cholesterol. This could also explain, he and others say, why NHANES I led to much higher estimates of obesity-associated deaths than did NHANES I, II, and III combined. Although obesity rates are rising, obesity-associated deaths are dropping.

    Ludwig disagrees that this trend will necessarily continue or that Gregg's paper disproves the one he co-authored with Olshansky. Type 2 diabetes, which is becoming more common in youngsters, “starts the clock ticking towards life-threatening complications,” he notes.

    Olshansky is uncomfortable with the kind of attention Flegal's 112,000 estimate is getting. “It's being portrayed,” he says, as if “it's OK to be obese because we can treat it better.” In fact, one of Flegal's conclusions that sparked much interest—that being overweight, with a BMI of 25 to 30, slightly reduced mortality risk—had been suggested in the past.

    Certainly, food-industry groups are thrilled by Flegal's work. “The singular focus on weight has been misguided,” says Dan Mindus, a senior analyst with the Center for Consumer Freedom, a Washington, D.C.-based nonprofit supported by food companies and restaurants. Since Flegal's paper appeared, the center has spent $600,000 on newspaper and other ads declaring obesity to be “hype”; it plans to blanket the Washington, D.C., subway system with its ad campaign.

    Some say that CDC needs to choose one number of deaths and stand behind it. “You don't just put random numbers into the literature,” says antitobacco activist and heart disease expert Stanton Glantz of the University of California, San Francisco, who disputed the Mokdad findings.

    Scientists agree that Flegal's study is superior, but it may also be distracting, suggests Beverly Rockhill, an epidemiologist at the University of North Carolina, Chapel Hill. Even if obese individuals' risk of death has been overplayed in the past, she says, we ought to ask: “Are they living a sicker life?”


    Picture-Perfect Planet on Course for the History Books

    1. Govert Schilling*
    1. Govert Schilling is a writer in Amersfoort, the Netherlands.

    Look closely at the faint red speck of light in this false-color photo. It's the first image ever of an exoplanet—a planet outside our own solar system. The 8-million-year-old world, about the size of Jupiter but five times as massive, has water vapor in its atmosphere and circles its parent brown dwarf every 2500 years or so at a distance of 8 billion kilometers. The whole system is 230 light-years away in the constellation Hydra. The planet's name? 2M1207b, but that may change.

    A European-American team of astronomers led by Gaël Chauvin of the European Southern Observatory took the infrared photo in April 2004 using ESO's 8.2-meter Very Large Telescope (VLT) in Chile, outfitted with a revolutionary system to compensate for atmospheric turbulence. Until now the team couldn't rule out the possibility that the red dot was a background object, unrelated to the brown dwarf. But new VLT measurements confirm that the two objects are moving through space together, and independent Hubble Space Telescope data released on 2 May at an exoplanet workshop in Baltimore, Maryland, all but clinch the case. “At the 99.9% level, I agree this is probably the first image of an extrasolar planet,” says Eric Becklin of the University of California, Los Angeles (UCLA), who was not involved in either study.

    First light.

    Infrared image shows portrait of an extrasolar planet (left).


    But is it really a planet and not, say, another brown dwarf? According to theoretical models for inferring the mass of young, low-mass objects from their infrared spectra, 2M1207b is only five times as massive as Jupiter. That's well below the 13.6-Jupiter-mass cutoff the International Astronomical Union uses to distinguish planets from brown dwarfs. “The possibility that this object is a brown dwarf is out of the box,” says Glenn Schneider of the University of Arizona in Tucson, who presented the Hubble results. If anything, “the models may well overestimate the masses at very low mass,” says Gibor Basri of the University of California, Berkeley. Together with his student Subu Mohanty and others, Basri developed a new way of determining masses of substellar objects by deducing their surface gravity from detailed spectroscopic measurements. Their results indicate that bodies like 2M1207b are probably even less hefty than current theoretical models suggest.

    With its claim to fame assured, says co-discoverer Benjamin Zuckerman of UCLA, the team hopes to give the planet a name better suited to its historic status. “Anyone with a bright idea is welcome to suggest it,” he says.


    IBM Offers Free Number Crunching for Humanitarian Research Projects

    1. Daniel Clery

    CAMBRIDGE, U.K.—When researchers have a project that involves a lot of number crunching, they usually have to think small. They compress data and algorithms to make the best use of expensive computer time. Now the computer giant IBM is offering researchers who meet certain criteria a chance to do the opposite: to think big—supercomputer big—and it will provide access to the computing power for free.

    The company's philanthropic arm has launched an effort known as World Community Grid (WCG) to support research projects with humanitarian goals. “We aim to take the most cutting-edge technologies and use them in the public interest,” says Stanley Litow, president of the IBM International Foundation. The computing power comes courtesy of many thousands of ordinary computer users around the world who freely donate their computers to a project at times when they would otherwise sit idle. Linked by the Internet, the grid gains power as it accumulates machines. Last month WCG signed up its 100,000th computer.

    WCG uses the same technique as projects such as SETI@home and climateprediction.net, which install a screen saver on computers to sift radio signals for extraterrestrial messages or model climate change (see p. 810). The difference is that WCG has a permanent infrastructure and can run five or six projects at once. IBM created the open grid because “we found that a lot of projects were dying on the vine in the absence of computing power,” says Litow.

    Group effort.

    Small computers are being linked in huge networks to analyze protein folding and other puzzles.


    WCG is not the first grid freely available to researchers. The company United Devices in Austin, Texas, which creates similar links for the pharmaceutical, oil, and financial industries, set up the public grid in 2001 and has since signed up more than 3 million machines.'s first project was to scan 3.5 billion molecules for potential as drugs against cancer. Chemist Graham Richards of Oxford University in the U.K., who led the effort, says participants “employed more computing power than the whole world pharmaceutical industry” can bring to bear on such problems. Richards says the project found lots of promising molecules and is now embarking on the more painstaking process of synthesizing the molecules and testing them in vitro.

    The Oxford team also used to search for drugs against anthrax and, in collaboration with IBM, smallpox—a project that screened 35 million potential drug molecules to find 44 strong candidates in a matter of weeks. “The smallpox experiment was such a success,” says Viktors Berstis, IBM's technical head of WCG, that IBM decided to set up its own grid. WCG was launched in November 2004, with help from United Devices, and its first task was the Human Proteome Folding Project. Devised by researchers at the Institute for Systems Biology in Seattle, Washington, the folding project predicts structures for the thousands of protein sequences uncovered by the Human Genome Project. At a symposium in Seattle last week, the institute announced that the project had already calculated 50,000 structures. Its goal—100,000 to 150,000 structures—would take 100,000 years to complete if the institute relied on its own computing power.

    Interested researchers can propose projects through the WCG website, and IBM has assembled a high-powered advisory board, including David Baltimore, president of the California Institute of Technology in Pasadena, and Ligia Elizondo, deputy director of the United Nations Development Programme, to sift through the proposals. The board is meeting this week and hopes to have a first slate of new projects in a few months. Berstis says he hopes eventually to sign up as many as 10 million computers. “Most researchers haven't even thought of this kind of massive computing power,” he says. It's time to think big.


    Chemists Want NIH to Curtail Database

    1. Jocelyn Kaiser

    The American Chemical Society (ACS) wants the U.S. government to shut down a free database that it says duplicates the society's fee-based Chemical Abstracts Service (CAS). Government officials defend the site, called PubChem, saying the two serve different purposes and will complement, rather than compete with, each other. But ACS officials are hoping to convince Congress to stop PubChem unless the government scales it back.

    PubChem was launched last fall by the National Institutes of Health (NIH) in Bethesda, Maryland, as a free storehouse of data on small organic molecules. It is a component of the Molecular Libraries Initiative, which is a part of NIH Director Elias Zerhouni's road map for translating biomedical research. So far, PubChem includes information on 650,000 compounds, such as structures and biological assays, as well as links to PubMed, NIH's free biomedical abstracts database. It will grow to include data from the Molecular Libraries centers, which aim to screen thousands of molecules for biological activity. NIH expects basic researchers to use PubChem to identify chemicals they can use to explore how genes and cells work.

    Boiling point.

    ACS's Madeleine Jacobs says NIH's PubChem goes too far.


    But ACS claims PubChem goes far beyond a chemical probes database. It is, ACS says, a smaller version of CAS, which employs more than 1200 people in Columbus, Ohio, and makes a significant contribution to the society's $317 million in annual revenue from publications. Institutional subscribers receive data on 25 million chemicals, including summaries written by CAS experts and links to chemistry journal abstracts. Like CAS, PubChem assigns each chemical a unique identifying number, and until a few weeks ago, the sites even looked quite similar, says ACS Chief Executive Officer Madeleine Jacobs. Claiming that PubChem could wipe out CAS, Jacobs argues that NIH should abide by its stated mission of storing only data from the Molecular Libraries Initiative and other NIH-funded research.

    NIH officials counter that PubChem indexes a set of biomedical journals that overlaps only slightly with those CAS indexes and, unlike CAS, does not provide curated information on patents or reactions. “They have a vast amount of information that PubChem would never dream of including,” says Francis Collins, director of the National Human Genome Research Institute. PubChem's focus on biological information such as protein structures and toxicology is complementary, he says. NIH has offered to link entries in PubChem to CAS, but ACS says that wouldn't help.

    ACS has enlisted Ohio's governor, Republican Bob Taft, as well as the state's congressional delegation to push its case. The legislators sent a letter on 8 March to Health and Human Services Secretary Michael Leavitt arguing that PubChem could pose “direct and unfair competition” with CAS. The lawmakers compare it to PubScience, a Department of Energy abstracts database that was shut down in 2002 after House appropriators decided it violated rules prohibiting the government from duplicating private services. ACS was part of that lobbying campaign.

    NIH officials are worried that PubChem could suffer the same fate and hope to make their case this month to Senator Mike DeWine (R-OH). Jacobs, for her part, wants NIH to “stick to its mission” and cut back the scope of PubChem. If not, she promises “to bring to bear all of our influence and resources.”


    Panel Gives Thumbs-Down to European Institute of Technology

    1. Gretchen Vogel

    BERLIN—Efforts to create a European Institute of Technology (EIT) to compete with the Massachusetts Institute of Technology (MIT) could do more harm than good to science in Europe, an advisory panel told the European Commission last week. The idea for an EIT was proposed in February as part of the relaunch of the so-called Lisbon strategy, designed to boost Europe's flagging economy. The strategy highlights research as a catalyst for economic growth, and commission president José Manuel Barroso proposed that the European Union establish an Institute of Technology with MIT as its model.

    Barroso has stumped for the idea in several major speeches, once suggesting that it might be located in Poland, one of the E.U.'s newest members. Although researchers have been largely skeptical, the EIT has gained momentum in some political circles. A group of European Parliament members even suggested a possible campus: their Parliament building in Strasbourg, France—one of two sites where the Parliament sits every month. Many parliamentarians would be happy to give up the building and the trouble of maintaining two home sites.

    But on 27 April, the European Research Advisory Board (EURAB), a group of scientists that counsels the commission on policy matters, recommended that it shelve the idea. “As much as we would like to see an EIT come into existence in Europe, we are wary that it cannot be created top-down,” the panel says in its statement. “An EIT must grow bottom-up from existing research communities.”

    Instead, it says, the planned European Research Council (ERC), a body to fund basic research, should be given full support to prompt the kind of competition that helps shape top institutions such as MIT. The ERC—originally proposed by a grass-roots movement of European scientists—was part of the commission's proposal for the €70 billion ($90 billion) 7th Framework program (Science, 15 April, p. 342), but its exact funding and structure are still unclear.

    E.U. research spokesperson Antonia Mochan says the commission is exploring the EIT proposal. Although it has not ruled out starting a new institution, she says, both research commissioner Janez Potočnik and education commissioner Ján Figeľ have said that perhaps a network of “centers of excellence” across Europe “would be the most relevant way to deal with this issue.”

    But even such a network worries the advisory panel members. “Our point is that [the institute] would distract from the ERC,” says EURAB chair Helga Nowotny of the Vienna Science Center. The panel decided to issue the statement after hearing of increased support for the idea among politicians, she says: “Every science minister from Poland to Portugal wants to host an EIT.”


    Celera to End Subscriptions and Give Data to Public GenBank

    1. Jocelyn Kaiser

    A once-deafening debate over access to human genome sequence data ended quietly last week. Celera Genomics Corp., the company that launched a commercial effort to sequence the human genome and then set about making money from the data, is closing its subscription-based database service and will release its genomic data on humans, rats, and mice to the public.

    The move marks the epilogue in the saga of J. Craig Venter, who founded Celera (now owned by Applera Corp.), and Francis Collins, director of the National Human Genome Research Institute in Bethesda, Maryland, and leader of the Human Genome Project, which made its genome sequence data public immediately. The former rivals both praised Celera's move to deposit its data in GenBank. “I think it's a wonderful development. [Applera] deserves a lot of credit for putting this data in the public domain,” says Collins. Venter, no longer with Celera, sent an e-mail from his ship, Sorcerer II, on a scientific cruise off the coast of Australia, stating that he has been “strongly in favor” of the move, which “sets a good precedent for companies who are sitting on gene and genome data sets that have little or no commercial value but would be of great benefit to the scientific community.”

    Most scientists would probably say that the outcome was inevitable. “I think the whole model ran its course and was superseded by the public effort,” says genome sequencer Richard Gibbs of Baylor College of Medicine in Houston, Texas.

    Four years ago, the race between Collins and Venter to finish a rough draft of the human genome sequence ended in a dead heat. The public effort published its data in Nature and deposited them in GenBank, run by the U.S. National Center for Biotechnology Information (NCBI). Celera, whose paper was published in Science, shared its data for free only with scientists who agreed not to redistribute or commercialize the data—a restriction that drew loud complaints from many researchers (Science, 16 February 2001, p. 1189). The company then created a subscription-based genomic database that later included proprietary data on rats and mice. In early 2002, however, Applera moved the company into drug discovery and Venter left; he now heads his own nonprofit institute.

    Come together.

    Former genome rivals J. Craig Venter (left) and Francis Collins (right) now see eye to eye on public database.


    In its heyday, the Celera Discovery System signed up more than 200 institutions and many drug companies. But subscriptions have fallen off, leading the company to end the service on 1 July and to give 30 billion base pairs of human, mouse, and rat sequence data to GenBank. Making the data public should generate customers for Celera's sister company Applied Biosystems, which supplies researchers with products such as gene expression assays, says Dennis Gilbert, the company's chief scientific officer: “It's a natural evolution of both the business and the science.”

    Experts say the human data (which include DNA from Venter and four other people) won't add much new information to the available human sequence. But Celera's mouse and rat data will help publicly funded researchers fill gaps and complete the assembly and validation of the mouse and rat genome sequences. And because Celera and the public efforts sequenced different strains, the data will also help researchers map genetic variation in these model animals.

    Two of Celera's remaining subscribers had mixed reactions. Alzheimer's disease researcher Steven Younkin of the Mayo Clinic in Jacksonville, Florida, once viewed Celera's human genome assembly as “a godsend” because its data on gene variants were more reliable than the public assembly's. But Younkin says NCBI's is now just as good.

    However, obesity researcher Craig Warden of the University of California, Davis, says his group still uses Celera's mouse genome assembly to check results from the public mouse databases because of its greater accuracy for his genes of interest. “It will be a loss” if GenBank can't catch up, he says.


    U.S. Lawmakers Call for New Earth Science Strategy

    1. Andrew Lawler

    Bolstered by a new report from the National Academies, members of the House Science Committee last week attacked the Bush Administration's plans to cancel or delay several missions in NASA's $1.5 billion earth science program. Legislators complained about the lack of a detailed and comprehensive global observation strategy and took issue with NASA's vague plans to transfer some activities to the National Oceanic and Atmospheric Administration (NOAA). Scientists hope the vocal, bipartisan criticism will force NASA to rethink its plans.

    “We need a vision and priorities for earth science just as much as we do for exploration and aeronautics,” said the committee chair, Representative Sherwood Boehlert (R-NY). Added ranking minority member Representative Bart Gordon (D-TN), “NASA's earth science program faces the prospect of being marginalized.”

    The National Research Council study (Science, 29 April, p. 614) warned that NASA's plans to halt operations of existing satellites, defer or cancel future missions, and reduce funding for analyzing data could undermine an ongoing effort to understand Earth's processes. A proposed $120 million cut for next year would leave the agency's earth science budget $645 million below what the Administration planned just 2 years ago to spend in 2006. NASA is expected to decide next month which of 10 operating satellites should be turned off this year.

    Boehlert said NASA's science chief, Al Diaz, told him 1 day before the hearing that the agency planned to transfer some of its responsibilities to NOAA. NASA traditionally has developed advanced instruments and new satellites, whereas NOAA has been in charge of operational systems such as weather satellites. Boehlert and several other lawmakers say they wouldn't object to NOAA's taking on climate observations, but Boehlert is “troubled” by the lack of detail on how and when that would happen and how much it would cost.

    All wet?

    Legislators object to NASA's planned cuts to Earth-observing missions like TRMM's monitoring of weekly global rainfall.


    The furor already has prompted NASA to continue work on Glory, a spacecraft designed to study atmospheric aerosols that was axed in the 2006 budget request. Diaz announced the reprieve at the 28 April hearing, adding that he believes the restructuring of the earth science effort would leave the field “much better positioned.” The agency has “no intention of abandoning earth science,” Diaz says.

    Representative Ken Calvert (R-CA) was one of the few legislators to side with Diaz. “I don't think the Administration is trying to hurt earth science,” he said. And Representative Dana Rohrabacher (R-CA), a longtime critic of global warming studies, derided the need for “yet another global warming satellite.” He added: “When you restructure, … you get rid of things that aren't worthy of the investment.”

    But those views were not widely shared among the committee. Representative Vernon Ehlers (R-MI) warned Diaz that NOAA would need additional funding to handle any new responsibilities and that Congress needed to be kept in the loop. “This can't be something that is done just because you want to get out from under the financial burden,” he said. Any shift would “take a good deal of hard work and coordination—and the concurrence and involvement of both the research community and Congress,” he added.

    The science committee doesn't control NASA's purse strings, however, and the appropriations panel that does has yet to weigh in on the issue. Still, a pitched battle over the future of NASA's earth science effort seems likely.


    Two-Thirds of Senate Backs More Research

    1. Eli Kintisch

    Advocates for the Department of Energy's (DOE's) Office of Science are hoping that a vote of confidence from the U.S. Senate will translate into more money for basic energy research. But a gloomy budget picture may foil their plans.

    Last week, 68 senators signed a letter calling for a 3.2% increase for the $3.5 billion DOE office. They want to add $250 million to the Bush Administration's budget request for the 2006 budget year, which begins on 1 October. The letter was circulated by Senators Jeff Bingaman (D-NM) and Lamar Alexander (R-TN), both of whom have large DOE laboratories in their states. This year's effort attracted 13 more signers than a letter circulated last year that opposed a similar Administration cut.

    Standing in the way of any boost, however, is a 2006 budget resolution passed last week by both the House and Senate that puts a tight cap on nondefense discretionary spending, the source of all federally funded civilian research. “The appropriators always come back and ask, ‘Why didn't you give us more headroom?’” says an aide to Bingaman. The House panel is expected to begin action next week on DOE's 2006 budget.

    The senators' letter paints a stark picture of life if the White House's proposed 3.8% cut in DOE science is adopted, including “25% reductions in existing scientific personnel and operations at scientific facilities.” It concludes with a warning that “our entire U.S. scientific enterprise is in danger of eroding.”

    DOE defends its proposed budget as generous given scarce funds, pointing to new monies for nanoscale science and the experimental fusion reactor ITER. Funds added by members for specific projects, says a department spokesperson, disguise the fact that the White House has actually requested a 10% increase over proposed 2005 funding levels.

    A concurrent letter-writing campaign in the House has garnered more than 100 signatures, up from 82 last year. Among the new Senate supporters are Democratic budget hawks Russell Feingold (WI) and Kent Conrad (ND). The Senate letter was sent to energy appropriations chair Senator Peter Domenici (R-NM) and ranking member Senator Harry Reid (D-NV).


    The Dark Side of Glia

    1. Greg Miller

    Long ignored, the nervous system's glial cells may turn out to be key players in disease and prime targets for therapy

    When Linda Watkins gave an invited lecture a few years ago, she ruffled the feathers of at least one senior researcher in the audience. Drawing on her studies at the University of Colorado, Boulder, Watkins had argued that nervous system cells called glia contribute to the chronic pain resulting from nerve injury. This was at odds with the predominant thinking in the field, which held that such pain was purely a matter of miscommunication between neurons.

    The disapproving researcher, “a big-name person in the pain field whom I respect,” Watkins says, wasn't ready to accept that glia were involved. “[He] stood up after my talk and announced in front of the whole audience that he was greatly bothered by my being so glia-centric,” she recalls.

    These days such grumblings are becoming more rare. Recent research has shifted the once-heretical view that glia are key players in neuropathic pain into the mainstream. Indeed, on 2 April, the American Pain Society honored Watkins for her contributions to understanding the mechanisms of pain. Other researchers who have recently demonstrated new roles for glia say their work has also begun to garner more attention from colleagues who used to view the cells as mere support staff for the all-important neurons.

    The emerging realization of the importance of glia has given new life to an idea that has long lurked at the margins of neuroscience: that glia may have key roles in central nervous system disorders from neuropathic pain and epilepsy to neurodegenerative diseases such as Alzheimer's—and may even contribute to schizophrenia, depression, and other psychiatric disorders. There are also hints that glia may be promising therapeutic targets—a possibility that researchers have scarcely begun to explore.

    “We have been very neuron-chauvinistic,” concedes Christopher Power, a neurovirologist at the University of Calgary in Canada. “But it's clear [now] that you cannot ignore the roles of glia as important effectors of health and disease.”

    Workers' revolt

    Even the name “glia” reflects the low opinion early neuroanatomists held of these brain cells. It derives from a Greek word meaning “glue,” or possibly “slime.” Until recently, neuroscientists thought the cells' purpose in life was simply to provide physical support and housekeeping for the neurons, whose electrical impulses underlie all sensation, movement, and thought.

    In the last decade, however, researchers have discovered that glia, which outnumber neurons by as much as 10 to 1 in some regions of the human brain, have big-time responsibilities. During brain development, they guide migrating neurons to their destinations and instruct them to form the synapses that enable neurons to talk to one another (Science, 26 January 2001, pp. 569 and 657). In the adult brain, glia talk back to neurons, releasing neurotransmitters and other signals that regulate the strength of synapses (a possible mechanism of learning). They promote the survival of existing neurons—and perhaps even trigger the birth of new ones.

    The discovery of all these roles for glia in the healthy brain has prompted researchers to reconsider their connections to diseases. The most clear-cut case of glial involvement in a central nervous system disorder is in multiple sclerosis (MS), one of the most common neurological diseases. Dogma holds that MS is an autoimmune disorder, in which T cells and other immune system cells attack oligodendrocytes, the glia that form a fatty myelin sheath around the axons of neurons in the brain and spinal cord. Axons are neurons' transmission lines, and without insulating myelin, axonal communication breaks down. People with MS suffer movement and balance disruptions as well as impaired vision and other problems.

    Rising stars.

    Astrocytes such as these may play key roles in a variety of brain disorders.


    MS researchers have traditionally considered glia the victims, but there have been hints recently that the story is more complex. A study of tissue from the brainstems and spinal cords of 12 MS patients who died shortly after a bout of symptoms, reported by Australian researchers last year in the Annals of Neurology, found little evidence of T-cell infiltration into areas of the brain and spinal cord damaged by the disease. Instead, they saw widespread signs that the oligodendrocytes had been self-destructing. To the authors and other MS specialists, the study suggested that the immune reaction long thought to be the root cause of the disease might be a secondary response to something going awry in the oligodendrocytes.

    Last October, Power's team presented another twist on the glia-MS story in Nature Neuroscience. Studying brain tissue collected from autopsies of MS patients, the researchers identified a gene that is overactive in astrocytes, another type of glia. The gene, HERV-W, jumped from a retrovirus into a primate ancestor of humans about 50 million years ago; its product, a protein called syncytin, now plays an important role in the developing placenta. When the researchers caused the syncytin gene to be overactive in cultured human astrocytes, the cells became toxic to cultured oligodendrocytes. They then inserted the gene into a virus that infects mainly astrocytes and injected the modified virus into the brains of healthy mice; the animals developed MS-like symptoms within 2 weeks and had unusually high numbers of misshapen and dead oligodendrocytes on autopsy.

    Astrocytes had not been suspected to play a role in MS, Power says: “We were amazed and really intrigued by this idea that they were involved in the pathogenesis.” His team has since developed a compound that blocks expression of the gene in studies with human blood cells and is now testing it in animal models of MS.

    Painful truths

    The third class of glia, microglia, also appear to have some dirty secrets. They now stand accused of causing neuropathic pain.

    Millions of people in the United States suffer from this form of chronic pain as a result of nerve damage caused by physical injury, surgery, viral infection, or chemotherapy (Science, 16 July 2004, p. 326). For many afflicted people, even the gentle brush of clothing against the skin can be excruciating.

    Morphine and other pain drugs don't help most people with neuropathic pain. Watkins suspects that current drugs are ineffective because they are intended to work on neurons, whereas increasing evidence suggests that microglia are the real instigators. When a healthy person steps on a tack, pain-sensitive neurons in the foot send a message to neurons in the spinal cord, which relay the message up to the brain. In people with neuropathic pain, the pain-sensing neurons become hyperexcited. In effect, they scream at the spinal cord neurons instead of talking to them in a normal voice—even when there's no tack or other painful stimulus.

    In the 1970s, researchers discovered that the types of injuries that lead to chronic pain trigger microglial cells in the spinal cord to proliferate and spew out various signaling molecules. But until recently it wasn't clear whether this glial activation somehow caused the sensory neurons' excitability or was merely an unrelated side effect, says Michael Salter, a neuroscientist at the University of Toronto in Canada.


    A 2003 study in Nature provided strong evidence that activated microglia are the cause. Salter, Kazuhide Inoue of the National Institute of Health Sciences in Tokyo, and their colleagues identified a cell surface receptor that is necessary for neuropathic pain in rats with a severed nerve and showed that the receptor is displayed only by activated microglia. Much as a typical rat withdraws quickly from a painfully hot surface, the rats with a severed nerve withdraw a paw in response to a light touch. But when the team blocked the microglial receptor, called P2X4, with a drug injected into the spinal column, the behavior of the injured rats rapidly returned to normal. When the drug wore off, the abnormal pain responses returned.

    Two big mysteries now confront researchers investigating the link between glia and neuropathic pain, Salter says. One puzzle is what activates microglia after injury. The other is how activated microglia hypersensitize sensory neurons in the spinal cord. Solving either one could have huge clinical payoffs in terms of treating pain.

    A paper published online 4 April in the Proceedings of the National Academy of Sciences provides an important clue, pointing to a possible trigger of microglial activation. Joyce DeLeo and colleagues at Dartmouth Medical School in Hanover, New Hampshire, had previously found that nerve injury activates a receptor called Toll-like receptor 4 (TLR4), which in the central nervous system is expressed only on microglia. In the new study, the researchers found that genetically altered mice lacking TLR4 showed markedly reduced microglial activation after nerve injury, as well as reduced hypersensitivity to pain. Blocking expression of the receptor in normal rats prior to a nerve injury yielded similar results.

    “It's a beautiful series of experiments,” says Watkins. She and Salter point out, however, that TLR4 is likely only one of several routes for microglial activation after nerve injury. Watkins's group, for example, has identified another candidate trigger—a protein called fractalkine—expressed on the surface of neurons.

    Much of the research on the second mystery—how microglia excite sensory neurons—has focused on immune system messengers called proinflammatory cytokines. Studies suggest that proinflammatory cytokines secreted by activated microglia increase neuronal excitability, and injecting these cytokines into the spinal cords of rats heightens their sensitivity to pain. Cytokine-blocking drugs are thus an obvious therapeutic strategy, yet few have made it to human trials, largely because of their potent immunosuppressive action and other side effects.

    The potential pain therapy that most excites Watkins involves a cytokine from microglia that dampens inflammatory responses. In work described at recent conferences, she and colleagues delivered naked DNA for this protein, interleukin-10 (IL-10), to the spinal cords of rats with nerve damage. The cells surrounding the cords took up the DNA and bathed nearby glia in IL-10. The gene-delivery procedure involves something like a reverse spinal tap and appears to relieve neuropathic pain in the rodents for at least 3 months. “They behave like normal animals,” says Watkins. Her team is now collaborating with Avigen, a biotech company in Alameda, California, in hopes of gaining Food and Drug Administration (FDA) approval for a clinical trial for neuropathic pain patients.

    Watkins and her colleagues have also begun testing the gene-therapy technique in a rat model of chemotherapy-induced pain. Many cancer patients, notes Watkins, would rather forgo potentially lifesaving drugs than suffer the pain that comes with them. “People are literally dying because of the pain caused by chemotherapy,” she says.

    Too much excitement

    The ability of glia to make nerve cells hyperactive may also play a role in epilepsy, another disorder long thought to be purely a problem with neurons. Raimondo D'Ambrosio, a neuroscientist at the University of Washington, Seattle, has been studying the role of glia in trauma-induced epilepsy, which occurs in about half the people who suffer the most severe head injuries. A key factor in such epilepsy, he says, is the split personality of astrocytes, which, like microglia, become activated in response to injury.

    Many researchers describe astrocyte activation as a Jekyll-to-Hyde transition. Quiescent astrocytes have long, armlike extensions that wrap around synapses and allow them to regulate the concentrations of ions and neurotransmitters around these nerve cell junctions. But in response to a head injury or other trauma, astrocytes often withdraw their arms and slack off on their stabilizing chores. Some of them migrate to the site of injury, where they help repair the damaged area and create a protective scar. This emergency response may ultimately be beneficial, but it causes problems too.

    Glial woes.

    The death of oligodendrocytes (left, purple) spells trouble for multiple sclerosis patients. When activated, microglia (right, blue) in the spinal cord may trigger chronic pain.


    “Reactive astrocytes are not as good as normal astrocytes at taking care of brain physiology,” D'Ambrosio says. His team has found that after head trauma, astrocytes reduce activity of the protein channels that allow them to draw potassium out of the space around neurons. As a result, potassium builds up in the extracellular space, making the neurons more likely to fire in synchronous patterns typical of epilepsy. “There's no question that excess extracellular potassium facilitates seizures,” D'Ambrosio says.

    Other changes in activated glia may also contribute to the neural excitability underlying seizures. Astrocytes recycle the neurotransmitter glutamate. Once glutamate is released at a synapse, it excites neurons until it is removed. Astrocytes handle about 90% of this glutamate clearance, but activated or injured astrocytes may abandon this task and may even release glutamate themselves.

    Furthermore, as in neuropathic pain, cytokines released by activated glial cells may contribute to epilepsy, D'Ambrosio says. Work by other researchers has shown that levels of some cytokines spike in the cerebrospinal fluid of people and in the brains of rats after a seizure. When glial cells are genetically engineered to overproduce certain proinflammatory cytokines in mice, the animals become more prone to seizures.

    D'Ambrosio also suspects that glia play important roles in more common types of epilepsy that aren't caused by trauma. “Glia can affect nearly every aspect of neuronal excitability and function,” he says, which makes them possible therapeutic targets for any form of epilepsy.

    Falling apart

    There's a growing suspicion that misguided glia can do more than overexcite neurons—they may even kill them.

    One of the first clues that glia may be involved in neurodegenerative disorders came from studies on a form of dementia that afflicts 10% to 20% of those infected with HIV. The virus's target of choice in the brain is microglia; it infects neurons very rarely, if at all.

    How infected microglia conspire to kill off neurons and cause dementia is not known. One possibility is that inflammatory cytokines and other compounds released by microglia injure neurons directly; another is that the microglia activate astrocytes, which abandon their glutamate-recycling duties, allowing the neurotransmitter to build up and kill neurons by overexciting them.

    Both mechanisms may also be at work in a wide range of neurodegenerative disorders, says Robert Nagele, who studies Alzheimer's disease at the University of Medicine and Dentistry of New Jersey in Stratford. For example, activated microglia invade the amyloid plaques in the brain that are the hallmark of Alzheimer's disease. Activated astrocytes also form a halo around the plaques. Many researchers agree that the inflammatory glial response contributes to the damage seen in Alzheimer's brains, says Nagele, but exactly how is a matter of debate.

    “There's a long list of [brain] diseases that are now appreciated to have an inflammatory component,” says Gary Landreth, an Alzheimer's disease researcher at Case Western Reserve University in Cleveland, Ohio. Although many anti-inflammatory drugs are being tested for Alzheimer's disease and other neurodegenerative disorders, the results have been mixed. These drugs probably reduce neurodegeneration in part by inhibiting the inflammatory response of glia, Landreth says, but they act throughout the body. A drug that targeted glia specifically might be very valuable, he says, if it dampened inflammation in the brain without weakening the immune system—but so far, no such compounds have been developed.

    Tackling glutamate excitotoxicity has also been tricky. The drug memantine, which is intended to protect neurons from this threat and was approved in the United States in 2003 for Alzheimer's disease, modestly slows cognitive decline in patients but doesn't seem to thwart the brain's eventual neurodegeneration. Although memantine blocks a type of glutamate receptor on neurons, a paper published in January in Nature suggests another way to prevent excitotoxicity: boosting the activity of the glutamate transporter on astrocytes, the molecular pump responsible for clearing glutamate from the synapse.

    Jeffrey Rothstein of Johns Hopkins University School of Medicine in Baltimore, Maryland, and colleagues screened more than 1000 FDA-approved drugs and discovered that a class of widely used antibiotics, the so-called β-lactam antibiotics, which includes penicillin and its derivatives, spurs astrocytes' production of glutamate transporters and increases the glial cells' uptake of glutamate. In a mouse model of the fatal neurodegenerative disease amyotrophic lateral sclerosis, one of these antibiotics delayed neuron loss and prolonged survival.

    On 22 February, the European Commission approved rasagiline, a drug for Parkinson's disease, another neurodegenerative disorder, that appears to work primarily on glia. Rasagiline is already on the market in Israel and is under consideration by FDA for use in the United States. The drug inhibits the enzyme monoamine oxidase B, which is found predominantly in microglia and astrocytes, says its inventor, Moussa Youdim, a pharmacologist at Technion-Israel Institute of Technology in Haifa.

    Rasagiline was thought to work largely by preventing the enzyme from breaking down the neurotransmitter dopamine, which is deficient in Parkinson's patients. But the drug also appears to have glia-based neuroprotective effects, Youdim says. His team reported in February in the journal Mechanisms of Ageing and Development that the drug and related compounds sop up iron, preventing the metal from building up inside glia and undergoing chemical reactions that create dangerous free radical compounds that can seep out and wreak havoc on neurons. Researchers suspect that this process plays a role in other neurodegenerative disorders, and Youdim and colleagues are now testing rasagiline—as well as related compounds they've created recently that are even more effective at binding iron—in animal models of Alzheimer's and Huntington's disease.

    More than glue

    The most speculative links between glia and human disease concern psychiatric disorders such as schizophrenia and depression. Postmortem studies of patients with schizophrenia, bipolar disorder, and depression have turned up oddities in the numbers of glia in brain regions implicated in those disorders. In 1999, for example, Grazyna Rajkowska of the University of Mississippi Medical Center in Jackson and colleagues reported that people with major depression had, at the time of their death, low glial cell counts in certain areas of the frontal cortex, including regions thought to be important for cognition, mood, and motivation.

    Suspicious disappearance.

    The loss of astrocytes (dark masses) may contribute to depression.


    Since then, Rajkowska's lab has found evidence that AWOL astrocytes account for those low glial counts and that astrocyte counts are particularly low in depressed people who die young (often by suicide). Because astrocytes help stabilize the environment for neurons and provide them with growth factors, Rajkowska speculates that an early deficit in astrocytes could debilitate neural circuits in brain areas involved in regulating mood. “I believe everything starts with glial pathology,” she says.

    In other postmortem studies, Joseph Price, a neuroanatomist at Washington University in St. Louis, Missouri, and colleagues have found deficits in oligodendrocytes in the frontal cortex of depressed patients. Last year they reported a 20% to 30% reduction in the number of oligodendrocytes in the amygdala, a key emotion center. Price speculates that because oligodendrocytes provide the critical insulation on axons, the deficits could contribute to depression by causing faulty wiring in mood-related brain areas.

    A DNA microarray study published in the March issue of Molecular Psychiatry further implicates oligodendrocyte abnormalities in depression. A team from Wyeth Research in Princeton, New Jersey, and the National Institute on Drug Abuse in Baltimore, Maryland, found that the activity of 17 genes related to oligodendrocyte functions, including myelination and cell communication, was altered in postmortem temporal cortex tissue of patients with major depressive disorder.

    “So far, the data look quite intriguing in terms of glia in postmortem brains,” says psychiatrist Husseini Manji of the National Institute of Mental Health in Bethesda, Maryland. But he adds that at this point, hypotheses about how glial deficits might lead to psychiatric symptoms are “very preliminary.”

    That's not to say researchers lack theories. Astrocytes control the amount of glutamate at synapses, and abnormalities in levels of this neurotransmitter have been tied to a variety of psychiatric disorders, including depression, anxiety, and schizophrenia (Science, 20 June 2003, p. 1866). Astrocytes also recycle other neurotransmitters implicated in psychiatric disorders. In the last few years, a research team led by Masato Inazu of Tokyo Medical University in Japan has identified several transporters on astrocytes that take up serotonin, dopamine, and other so-called monoamine neurotransmitters. Many existing psychiatric drugs tweak levels of these neurotransmitters, and Inazu and colleagues have found that some antidepressant drugs alter activity of those astrocyte transporters. “Many of the drugs we already have might conceivably work on glia, and people just haven't realized it yet,” says Ben Barres, a glial biologist at Stanford University in California.

    Given the fairly modest healing abilities of current neuropsychiatric drugs, researchers think the glial leads are well worth pursuing. “There are many more glial cells in the brain than there are neurons, and they're not just glue,” says Robert Schwarcz, a pharmacologist at the Maryland Psychiatric Research Center in Baltimore. “They're actively participating in many important functions, and if anything goes wrong with them, there's going to be dysfunction and disease.”


    Embryologists Polarized Over Early Cell Fate Determination

    1. Gretchen Vogel

    Scientists are trying to determine when the first asymmetry occurs in the mouse embryo, but the embryo has so far thwarted their efforts

    Embryologist Hans Spemann famously pointed out 60 years ago that we are standing and walking with parts of our body that would have been used for thinking had they developed in another part of the embryo. Yet scientists still aren't sure when the cells in a mammalian embryo start to take on the individual identities that will determine their eventual fates in the organism.

    Currently, a debate is raging over the answer to this question. Some embryologists think that even the earliest cells have, if not an immutable destiny, at least a tendency to form one part of the embryo or another. Not everyone is convinced, however, and in recent months researchers have published a flurry of papers laying out their evidence that the earliest embryonic cells do—or don't—carry inherent preferences that tilt them toward one destiny or another.

    The studies address one of developmental biologists' most fundamental questions: How can a single cell—the fertilized egg—give rise to embryos and later animals with a distinct front, back, top, and bottom? In some species, the answers are well known. The unfertilized fly egg, for example, already contains concentrations of proteins in different regions that influence the eventual location of the fly's head and posterior. In frogs, one of the first events after fertilization is the development of a prominent “gray crescent” on the side of the egg opposite where the sperm entered, a region that contains signals crucial for development.

    But pinning down what happens in the mammalian embryo has always been much more difficult. In the first place, the eggs of mammals are tiny—less than one-thousandth of the volume of a frog egg. And their embryos inconveniently develop inside the mother's body, making direct observations in the embryo's natural environment extremely difficult. What is certain is that the cells of mammalian embryos are much more flexible than those of their amphibian or insect counterparts. Scientists can take a two-, four-, or even eight-cell mouse embryo, tease the cells apart, recombine them with cells from another embryo, and produce a healthy mouse. In frogs and fish, such tricks yield animals with two heads or other major abnormalities.


    Scientists are debating whether the first two cells of a newly created mouse embryo have a tendency toward different fates in later development.

    CREDIT: T. HIIRAGI AND D. SOLTER, NATURE 430, 360 (2004)

    For decades, those experiments led most scientists to assume that the cells making up an early mouse embryo are equivalent, and that the first signs of embryonic polarity—having an up-down or left-right axis—appear in the blastocyst, a slightly oblong ball of a few dozen cells that forms about a week after fertilization. “The paradigm has been that [the mouse embryo] is a blank sheet until you start to make the blastocyst,” says developmental biologist Janet Rossant of the Samuel Lunenfeld Institute at the University of Toronto, Canada.

    By that time, cells have developed into at least two types: those of the inner cell mass, which will form the fetus as well as parts of the placenta and surrounding tissues, and the trophoblast cells, which will form much of the placenta but will not contribute to the developing fetus. At this stage, the embryo has a clear polarity: The inner cell mass clusters at one end of the blastocyst, which developmental biologists call the embryonic side, and the other half, called the abembryonic side, contains a hollow cavity called the blastocoel.

    But over the past decade, several groups probing the embryo's earliest stages have found evidence suggesting that the embryo's directionality might arise well before blastocyst formation. Some of the first hints came from Richard Gardner's lab at the University of Oxford in the United Kingdom. In 2001, he and his colleagues reported experiments in which they used tiny drops of oil to mark the two cells produced by the first division of the fertilized mouse egg. The researchers found evidence that the “equator” created by the first cell division tends to lie roughly in the same plane as the equator dividing the embryonic and abembryonic regions of the blastocyst, leading them to wonder if the egg itself might have north and south poles that influence the fate of cells derived from one hemisphere or the other. “It took 5 years to publish because I didn't believe it myself,” Gardner says.

    At about the same time, Magdalena Zernicka-Goetz and her colleagues at the University of Cambridge, U.K., started to look at exactly where the progeny of the embryo's first two cells end up. In 2001, they reported in Development that by carefully marking the sister cells with different-colored dyes, they found that the descendants of one tend to form the embryonic side, whereas the other tends to give rise to the abembryonic side. That suggested, the authors said, that from the first division on, the embryo has a polarity of its own.

    Later that year, the team reported results from a complex and marathonlike set of experiments to see if cells in the four-cell-stage embryo are distinguishable from one another. To identify the four cells reproducibly, the researchers took advantage of the fact that the mouse egg itself is not perfectly symmetrical. It remains attached to the so-called second polar body, which contains genetic material ejected during the egg's maturation. The side with the polar body is called the “animal” pole, whereas the opposite side is called the “vegetal” pole.

    According to Zernicka-Goetz and her colleagues, the two cells formed by the first cell division almost always divide with their cleavage planes roughly perpendicular to each other. One division follows the longitude of the oocyte and the other the latitude, so that the four-cell embryo consists of one cell containing mainly animal cytoplasm, one with mostly vegetal cytoplasm, and two cells containing a mix of animal and vegetal cytoplasm.

    In experiments that began at 6 a.m. and ran for nearly 20 hours, Zernicka-Goetz, Karolina Piotrowska-Nitsche, also at Cambridge, and their colleagues carefully tracked the divisions of early embryos, broke them apart at the four-cell stage, and created embryo chimeras with known compositions of the different cells. In previous experiments in which chimeric embryos were created by randomly mixing cells from four-cell embryos, nearly all the chimeras developed normally.

    Early patterns.

    The blastocyst-stage embryo has distinct embryonic and abembryonic poles.


    In contrast, the Cambridge team found that none of the chimeras made from cells that contained predominantly vegetal cytoplasm developed when implanted into foster mothers. Chimeras containing predominantly animal cytoplasm developed about 25% of the time, and those containing cytoplasm from both hemispheres developed 87% of the time—a result comparable to that of the random mixing experiments.

    “The result shocked us,” says Zernicka-Goetz. “It certainly isn't what we expected.” But she says further studies support the result. Last month, she and Piotrowska-Nitsche reported in Mechanisms of Development that the predominantly “vegetal” cell of the four-cell embryo contributes almost nothing to the inner cell mass of the blastocyst.

    None of the rules her team has identified are hard and fast, Zernicka-Goetz admits. And she acknowledges that many scientists continue to believe that because early cells are so flexible, there is no underlying pattern in the egg or early embryo. However, she says, “the other possibility is that there is a pattern, but not a determinant pattern—that you have a set of biases that push cells toward a certain path. We think we have evidence that this is closer to the truth.”

    Davor Solter of the Max Planck Institute for Immunobiology in Freiburg, Germany, is one of those who is not yet convinced. Indeed, he disputes most of Zernicka-Goetz's conclusions. For one, he and his colleague Takashi Hiiragi, also at the Max Planck Institute for Immunobiology, find no tendency for the first cell division to occur on a plane perpendicular to the animal-vegetal equator. In contrast, they reported in Nature last year that in their time-lapse recordings of the first cell division, the angle of division is mostly influenced by the relative location of the sperm and oocyte pronuclei as they move toward the center of the cell. They claim that the polar body, which starts out marking the so-called animal pole, actually moves toward the cleavage plane in about half of the embryos they observed, which might have influenced the observations that Gardner and Zernicka-Goetz report.

    And for another, Solter and Hiiragi don't see evidence that the first cell division correlates with the later axis of the blastocyst. One factor that might be complicating the experiments is the dynamic movements of the embryos as they develop. Solter says that “these embryos are like spinning yo-yos” in the time-lapse movies, which makes it nearly impossible to keep track of the angle of the original cell divisions.

    Solter, Hiiragi, and several colleagues also report in the 1 May issue of Genes & Development that they can find no evidence that one sister cell contributes preferentially to one end of the blastocyst or the other. In these experiments, the team used dye techniques similar to those of Zernicka-Goetz to mark cells at the two-cell stage and then filmed embryos using time-lapse photography for 3 days as the two cells grew into blastocysts. But the results showed no clear pattern of daughter cells in the resulting blastocysts, Solter says.

    Axis of disagreement.

    In some experiments in which one cell of the two-cell embryo is stained red and the other blue, the progeny of one cell tends to form the embryonic region of the blastocyst, whereas the other gives rise to the abembryonic region (micrograph on left). In other embryos (middle and right), the line is more difficult to draw. (DNA is stained green.)

    CREDIT: B. PLUSA ET AL, NATURE 434, 391 (2005)

    Some of the blastocysts did show patterns similar to those Zernicka-Goetz reports, Solter acknowledges, but they were a minority. Only about 25% of the embryos they observed had blastocysts in which the daughter cells sorted predominantly into the embryonic or abembryonic part—far less than the nearly 70% reported by Zernicka-Goetz and other groups. Solter, Hiiragi, and their colleagues propose that mechanical forces on and within the developing embryo determine its eventual polarity. But Zernicka-Goetz says her team used a different method of measuring the boundary between the cell types, by painstakingly counting cells at different layers in the blastocyst. “I would be very interested to ask them to analyze their data in the same way,” she says. “Only then can we really compare our results.”

    For his part, Gardner is taking something of a middle ground. “There is undeniable evidence that there is prepatterning” in the embryo, he says. But different techniques used in different labs—and the inherent flexibility of the embryo itself—make it very difficult to determine exactly how much. For example, he says, his lab, like that of Zernicka-Goetz, continues to see a consistent pattern between the plane of the first cell division and the shape of the blastocyst. But, Gardner adds, in his lab “we don't see a shred of evidence” that the cells of the four-cell embryo are different.

    If there is one point on which all parties would agree, it's that the techniques used so far make clear answers extremely hard to come by. “When people observe embryos, there's always a lot of variability,” says the Lunenfeld Institute's Rossant. “If you're looking for a certain result, you'll see it, but there will always be some results that do not fit.” So far, she says, “you have to say that arguments on both sides are inconclusive.”

    The definitive experiment, Rossant and others say, would be to identify a gene or protein, like those already identified in frog or fly embryos, that clearly marks the fate of different early embryonic cells. Zernicka-Goetz and her colleagues are searching for such a factor, she says, looking for differences in gene expression signatures, or in more subtle modifications of the cell's internal architecture. If the genetic search is successful, Solter says, he will be convinced. If someone could find a gene expressed in a specific region of the egg—or in one early cell and not others—and if removing that gene interrupts development, “then absolutely, prepatterning is proven,” Solter says. “If such a gene exists, it will be found.”


    Electronic Paper: A Revolution About to Unfold?

    1. Marie Granmar,
    2. Adrian Cho

    Developers have high hopes for paper-thin flexible displays, but some technologists say “killer apps” to drive the technology remain to be found

    In an exhibition hall at the 2005 World Exposition in Aichi, Japan, a gigantic newspaper covering more than 5 square meters delivers the news to passersby in crisp black and white. Unlike a traditional broadsheet, which goes from printing press to trash bin within a few hours, the Yomiuri Global Newspaper never becomes old news. Instead, the display rewrites itself electronically twice a day, keeping readers up-to-date without generating wastepaper. But the real message is the medium itself: Electronic paper is coming.

    Produced by Japan's TOPPAN Printing Co. and the American high-tech firm E Ink, the newspaper is just one of several demonstrations, prototypes, and products featuring the new technology. Already, electronic paper signs direct students across college campuses and passengers through the East Railway Station in Berlin. Japan's Sony Corp. sells a tabletlike device that stores and displays whole books, and last year the Netherlands' Royal Philips Electronics unveiled a prototype of a fully flexible sheet of electronic paper. Requiring far less power than conventional flat-panel displays, and potentially cheaper to make, electronic paper could change how consumers view the world. But even as the budding technology emerges from the laboratory, technologists are debating a crucial question: Is there really a need for electronic paper?

    “The problem is that nobody can think of anything new that genuinely needs the flexibility,” says Kim Allen, an industry analyst with iSuppli Corp. in Santa Clara, California. “All the things that are technically possible are well served by rigid displays.” But Michael McCreary, a chemist and vice president for research and advanced development at E Ink in Cambridge, Massachusetts, says electronic paper will be so versatile it will create its own need. “It's going to do things that cannot be done with either paper or displays today,” he says. “I think that this is going to create a revolution.” Developers envision smart signs and high-tech scrolls that relegate newspapers and books to the recycling bin of history. Still, they must overcome several stiff technological challenges before pliable electronic paper is ready for mass production. And neither book publishers nor bookies know which applications of electronic paper are sure bets or how they'll pay off.

    Electric ink and organic circuits

    In spite of its name, electronic paper is the technological cousin of the flat-panel computer screen. A computer's liquid crystal display (LCD) consists of two sheets of glass sandwiched around a layer of tiny transistors and a layer of liquid crystal material—a soup of rodlike molecules that snuggle together a bit like sardines in a can. The screen is illuminated from behind, and ordinarily the molecules let light pass through them. But when a transistor applies a voltage to a point on the screen, or pixel, the molecules rearrange themselves to block the light and darken the pixel. Filters tint the light from neighboring pixels red, blue, or green to create a color image.

    The dream.

    Artist's conception of an electronic paper display.


    In electronic paper, too, each pixel changes color when a voltage is applied. But instead of emitting light, electronic paper merely reflects it, dramatically reducing power consumption. A pixel in electronic paper also holds its color without voltage. Thanks to such “bistability,” electronic paper uses power only when the image on it changes. All told, electronic paper may use less than 1/10,000 as much power as a computer's LCD.

    Researchers are developing several different technologies for the color-changing electronic ink. E Ink's “electrophoretic” material consists of microcapsules embedded in a plastic sheet, each filled with a clear liquid and submicrometer-sized particles, some colored black and others colored white. The black and white particles carry opposite electric charges; a pulse of voltage from the underlying transistor can make the white ones rise toward the surface of the display and the black ones sink away from it, or vice versa. Gyricon, a subsidiary of Xerox Corp. based in Ann Arbor, Michigan, does a similar trick with plastic spheres about 100 micrometers wide, each half black and half white with opposite electrical charges on the two sides. The spheres simply flip in response to a pulse of voltage. E Ink's and Gyricon's products provide better contrast than a computer's or cell phone's LCD and can be read even in direct sunlight. But for the moment, both are essentially black-and-white technologies that switch pixels too slowly to show moving images.
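    The switching behavior described above can be sketched in a few lines of Python. Everything here (the class name, the one-unit energy cost per switch) is illustrative rather than E Ink's or Gyricon's actual drive scheme; the point is bistability: a voltage pulse of one polarity or the other flips the pixel, and holding either state draws no power at all.

```python
class BistablePixel:
    """Toy model of a bistable reflective pixel (hypothetical names)."""

    def __init__(self):
        self.color = "white"   # which particle color faces the viewer
        self.energy_used = 0   # arbitrary units: one unit per switch

    def apply_pulse(self, polarity):
        # polarity +1 raises the white particles, -1 raises the black ones
        target = "white" if polarity > 0 else "black"
        if target != self.color:
            self.color = target
            self.energy_used += 1  # energy is spent only when the image changes

p = BistablePixel()
p.apply_pulse(-1)   # switch to black: costs one unit
p.apply_pulse(-1)   # already black: bistable, no energy drawn
p.apply_pulse(+1)   # back to white
print(p.color, p.energy_used)  # white 2
```

    Two pulses were needed for two image changes; the redundant pulse cost nothing, which is the source of electronic paper's tiny power budget.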

    Others are taking different tacks. NTERA, a high-tech company based in Dublin, Ireland, makes bistable displays that employ “electrochromic” dyes: molecules that switch from transparent to specific colors when they absorb electric charge. In contrast, Kent Displays Inc. in Kent, Ohio, has developed a reflective LCD that is bistable.

    Creating a truly paperlike, flexible display poses equally challenging technical problems. The difficulty lies behind the electronic ink, in the “backplane” of transistors that activate it. That's because the crystalline silicon from which transistors are usually made is brittle and must be deposited on a stiff substrate such as glass.

    To make a flexible backplane, researchers are developing materials such as amorphous (noncrystalline) silicon, says Zhenan Bao, a chemist at Stanford University in California. Perhaps most promising, she says, are plastics that have electrical properties similar to those of silicon. Such “organic semiconductors” bend easily and can be deposited on flimsy plastic substrates. And in theory, Bao says, a simple inkjet printer could lay down organic circuits much more cheaply than the complicated multistep process now used to etch circuits into silicon.

    Philips and E Ink have teamed up to make a flexible electronic paper display driven by organic semiconductors. The low-resolution screen, roughly as thick as a sheet of printer paper, measures 12.7 centimeters along the diagonal and rolls up into a tube 1.5 centimeters wide. Philips envisions using the screens for retractable displays on hand-held devices and plans to begin design for production this year, says Hans Driessen, a spokesperson for Philips Research in Eindhoven, Netherlands. Philips guarantees the screen will roll and unroll at least 2000 times before it conks out—which may not be enough for people who make 10 calls per day from their cell phones.

    On track?

    Hand-held book readers and railway station signs may pave the way for markets for electronic paper.


    To market, to market …

    Electronic paper may someday rewrite the concept of the book, but to bring the technology to market, developers are using it first to spruce up the humble store sign. Gyricon sells a wirelessly programmable sign for $1295 and has developed a system to control multiple signs from a central location. “The reason retailers do a 3-day sale is it takes a day to put up the signs and a day to take them back down,” says Jim Welch, director of marketing and communications at Gyricon. “This opens the way for having an instant sale.” However, systems using smaller LCD shelf labels already exist.

    Developers also hope to produce larger, lower-power, and easier-to-read displays for cell phones and other hand-held devices. E Ink has positioned itself to sell its electronic ink by the sheet as a commodity to electronics manufacturers. The company already supplies the displays for Sony's LIBRIé electronic book reader. Introduced last year and sold only in Japan, the LIBRIé weighs 300 grams, holds 10 megabytes of text, and can flip more than 10,000 virtual pages before draining its four AAA batteries. Still, the book reader looks less like a new type of paper than a black-and-white subspecies of personal digital assistant.

    Ultimately, developers envision a kind of smart scroll that downloads newspapers, magazines, and books wirelessly, says Nick Sheridon, a physicist at the Palo Alto Research Center in California and inventor of Gyricon's technology. Sheridon doubts electronic paper will ever entirely replace paper. “There are books that I will always want on paper,” Sheridon says. “But I don't think that anyone is so attached to their newspaper.” If it lives up to its promise, electronic paper could slash newspaper companies' printing and distribution costs, says George Irish, president of Hearst Newspapers in New York City, which invests in E Ink. “It's certainly a technology that we want to involve ourselves in, at least on a test basis,” he says.

    To turn their prototypes into commercial products, though, developers will have to learn to make electronic paper cheaply and in large quantities and build a manufacturing infrastructure capable of challenging existing display technologies. All that could take years, says Stewart Hough, an industry consultant with Advanced Technology Applications in Madera, California. Still, Hough says, companies such as E Ink and Gyricon have found market niches that should keep them afloat until the technology matures. “You've got to fund your habit,” he says. And even skeptics expect the first bona fide electronic paper products to reach the market within a decade or so. Electronic paper is coming. The question is when will it arrive—and what will it do when it gets here?


    Shrinking Dimensions Spur Research Into Ever-Slimmer Batteries

    1. Robert F. Service

    As electronic devices shrink toward paper-thinness, researchers and companies around the globe are scrambling to come up with novel materials and designs for two-dimensional batteries to power them.

    Like their 3D cousins, so-called flat batteries convert chemical energy to electricity via the reaction between a negative electrode, or anode, and a positive electrode, called the cathode. During discharge, electrons flow from the anode to the cathode through an external circuit, while ions shuttle between the two electrodes through an ion-conducting material known as an electrolyte. In flat batteries, however, each layer is often mere micrometers thick.

    Companies already sell a variety of rechargeable and nonrechargeable flat batteries. Most serve low-power applications such as electronic ID tags and “smart” credit cards sporting tiny displays. To increase the output appreciably, technologists would have to spread the batteries out over much larger areas—a move that would either leave them thin and flimsy or require bulky, heavy shielding.

    Never too thin.

    Spray-on techniques could make “flat” power sources like this Solicore battery look positively chunky.


    To get around this problem, Larry Dubois, a chemist at SRI International in Menlo Park, California, and colleagues are working on a couple of possible fixes. First, Dubois's team has developed a scheme for printing multiple thin batteries atop one another. The setup uses a device resembling a computer-controlled ink-jet printer to lay down successive layers of liquid precursors for the electrodes and electrolytes of a rechargeable lithium-ion battery. This approach makes it relatively straightforward to layer multiple thin cells to increase both the total amount of energy they can store and the maximum voltage they can supply. The battery is then encased in a plastic housing. Because printing technology tends to be far cheaper than the traditional vacuum-deposition techniques used in battery manufacturing, the new technique could sharply reduce the high cost of lithium batteries. The computer can also spray down batteries to fit any shape. “We've even written the text of the Declaration of Independence” as a battery, Dubois says.
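    The payoff of stacking thin cells follows from elementary circuit rules: layers wired in series add their voltages, layers wired in parallel add their capacities. The sketch below illustrates the arithmetic; the per-cell numbers are assumptions (3.7 V is a typical lithium-ion value, the capacity figure is invented), not specifications of SRI's printed cells.

```python
CELL_VOLTAGE = 3.7     # volts per thin cell (typical Li-ion value, assumed)
CELL_CAPACITY = 12.0   # milliamp-hours per cell (invented for illustration)

def stack(n_series, n_parallel):
    """Voltage and capacity of a printed stack of identical thin cells."""
    voltage = round(n_series * CELL_VOLTAGE, 1)   # series layers add voltage
    capacity = n_parallel * CELL_CAPACITY         # parallel layers add capacity
    return voltage, capacity

print(stack(3, 1))   # (11.1, 12.0): three layers in series triple the voltage
print(stack(1, 3))   # (3.7, 36.0): three in parallel triple the capacity
```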

    In a second twist, Dubois's team has recently created batteries coated atop thin fiber-based electrolytes. The work is at an early stage, but the researchers hope to create batteries that can be integrated directly onto structural materials, such as the plastic fibers in a laptop case. That way the material that provides structural support for the computer could power it as well.


    Changes in the Sun May Sway the Tropical Monsoon

    1. Richard A. Kerr

    Oxygen-isotope patterns in a stalagmite from a cave in southern China indicate that the sun is one of the main drivers of tropical monsoon variations over the centuries

    Scientists have long presumed that a changeable sun might influence climate. Decades before satellite observations in the 1980s showed slight fluctuations in our star's brightness, researchers were hunting for evidence of a sun-climate connection. That search has made halting progress, however. Researchers have had trouble finding both proof of such a connection and an explanation for how it might work. Now evidence is accumulating that solar variations have altered at least one aspect of climate, the rain-laden monsoonal winds that sweep in from the sea around the tropics. And there's even a new mechanism for the observed sun-monsoon link. The latest evidence “is kind of selling me on [a sun-climate link],” says longtime doubter Gerald North of Texas A&M University in College Station. Still, he adds, “the big mystery is that the solar signal should be too small to trigger anything” in the climate system.

    The latest and some of the best evidence for a sun-monsoon link comes from a rock grown in a cave in southern China. On page 854, geologist Yongjin Wang of Nanjing Normal University in China and colleagues at Nanjing and the University of Minnesota, Twin Cities, report their analysis of a meter-long stalagmite that grew during the past 9000 years in Dongge Cave. Each added layer of carbonate mineral recorded the oxygen-isotope composition of the monsoon rains that were falling on southern China at the time, dissolving carbonate rock and redepositing that mineral as a stalagmite at about 100 micrometers per year. Monsoon rains upwind of the cave lighten the oxygen isotopes of rain falling at Dongge. So, the lighter the stalagmite oxygen isotopes are, the wetter the summer monsoon was when that bit of stalagmite formed. The team dated the layers radiometrically, using uranium and thorium isotopes.

    The stalagmite revealed that cycles of 558, 206, and 159 years, on average, are superimposed on a jumble of variations in monsoonal rains since the last ice age. These climate periodicities resemble those in the record of varying carbon-14 in tree rings, the authors note, cycles widely attributed to variations in solar activity. In fact, the two records have a half-dozen periodicities in common. “This matching suggests that the intensity of the summer [East Asian] monsoon is affected by solar activity,” says Minnesota team member Hai Cheng. “The sun is one of the main drivers” of monsoonal climate change on centennial time scales, he says.
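    The kind of periodicity hunt described above can be illustrated with a short NumPy sketch: a synthetic 9000-year series seeded with two of the reported cycle lengths, then recovered from its power spectrum. The amplitudes and noise level are invented for illustration, and the Dongge team's actual spectral analysis is more sophisticated than a raw FFT.

```python
import numpy as np

# Synthetic annual "isotope" series containing 558- and 206-year cycles
# (two of the periods reported for the Dongge record) plus noise.
rng = np.random.default_rng(0)
t = np.arange(9000)                        # one sample per year
signal = (np.sin(2 * np.pi * t / 558)
          + 0.8 * np.sin(2 * np.pi * t / 206)
          + 0.3 * rng.standard_normal(t.size))

# Power spectrum: the embedded periodicities show up as peaks.
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0)     # cycles per year

# Convert the two strongest peaks back into periods in years.
top = np.argsort(power[1:])[-2:] + 1       # skip the zero-frequency bin
periods = sorted(1.0 / freqs[top])
print(periods)                              # close to 206 and 558 years
```

    Real records are unevenly sampled and noisy, which is why workers in this field typically lean on methods such as the Lomb-Scargle periodogram and significance testing rather than a plain FFT.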

    Millennial rains.

    This cross-sectioned stalagmite recorded 9000 years of rain influenced by solar variations.


    That's a strong contention in a field littered with debunked claims and disappearing correlations, but the existence of a solarlike signal in monsoon climate records is getting hard to ignore. “The correlation is very strong” in the Dongge record, says North. “I find it very hard to refute.” “This is probably the best monsoon record I've seen,” adds paleoclimatologist Dominik Fleitmann of Stanford University in Palo Alto, California. “Even better than ours.” That was a stalagmite record from southern Oman, 5000 kilometers to the west under the Indian Ocean monsoon, published in Science in 2003. It too showed solar signals, as have two other stalagmites from Oman, lake sediment records from East Africa another couple of thousand kilometers to the west, and a just-published second stalagmite record from Dongge. “Now we can ask the modeling community to provide a mechanism” for linking solar activity and the monsoon, Fleitmann says.

    As it happens, modelers do have a new sun-monsoon mechanism to offer. Gerald Meehl of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, and colleagues published a study in the Journal of Climate in 2003 in which they traced the effects of a brighter sun through the climate system. In their model, increased solar irradiance amplified the heating of the relatively cloud-free subtropics, which boosted evaporation there. When winds carried the additional moisture into monsoon regions, it condensed, increasing monsoonal rains and intensifying the winds that drive the whole system.

    Meehl, meteorologist Harry van Loon of Northwest Research Associates in Boulder, and meteorologist Julie M. Arblaster of NCAR have now found signs in the real atmosphere of what appears to be their model's sun-monsoon mechanism at work. Writing in the December 2004 Journal of Atmospheric and Solar-Terrestrial Physics, they report that the whole tropical circulation system that feeds monsoons around the world intensified and weakened over the past 50 years in step with the 11-year solar cycle. When the sun was brightest, rains were heavier not just in the Indian monsoon but in almost every region of localized tropical precipitation around the world, from the North American monsoon (in the American Southwest) to the Sahel of West Africa.

    Things may be looking up in sun-climate relations, but in this field especially, looks can be deceptive. In sun-climate, “just when you think you're making progress on one front, something on another front falls apart,” says solar physicist Judith Lean of the Naval Research Laboratory (NRL) in Washington, D.C. Modelers, including Meehl, have been using her estimates of the sun's brightness variations centuries before satellite observations to try to match the timing and magnitude of past climate variations.

    Now Lean questions her brightness estimates. She based them on the analogy of solar brightness with the brightness distribution of sunlike stars, but that analogy is not holding up, she says. With NRL colleagues Yi-Ming Wang and Neil Sheeley, she will shortly publish revised estimates, based solely on known behavior of the sun, that are only one-quarter the size of her star-based estimates. “Until we know what the carbon-14 record is telling you, all this has some uncertainty,” Lean says. “We have a lot to learn about the sun before we know what the past irradiance was.”


    Grassroots Supercomputing

    1. John Bohannon*
    1. John Bohannon is a science writer based in Berlin.

    What started out as a way for SETI to plow through its piles of radio-signal data from deep space has turned into a powerful research tool as computer users across the globe donate their screen-saver time to projects as diverse as climate-change prediction, gravitational-wave searches, and protein folding

    OXFORD, U.K.—If Myles Allen and David Stainforth had asked for a supercomputer to test their ideas about climate change, they would have been laughed at. In order to push the limits of currently accepted climate models, they wanted to simulate 45 years of global climate while tweaking 21 parameters at once. It would have required a supercomputer's fully dedicated attention over years, preempting the jealously guarded time slots doled out to many other projects. “Doing this kind of experiment wasn't even being considered,” recalls Stainforth, a computer scientist here at Oxford University. So instead, he and Oxford statistician Allen turned to the Internet, where 100,000 people from 150 countries donated the use of their own computers—for free. Although not yet as flexible as a dedicated machine, their combined effort over the past 2 years has created the equivalent of a computer about twice as powerful as the Earth Simulator supercomputer in Yokohama, Japan, one of the world's fastest.

    Stainforth's project is part of a quiet revolution under way in scientific computing. With data sets and models growing ever larger and more complex, supercomputers are looking less super. But since the late 1990s, researchers have been reaching out to the public to help them tackle colossal computing problems. And through the selfless interest of millions of people (see sidebar, p. 812), it's working. “There simply would not be any other way to perform these calculations, even if we were given all of the National Science Foundation's supercomputer centers combined,” says Vijay Pande, a chemical biologist at Stanford University in Palo Alto, California. The first fruits of this revolution are just starting to appear.

    World supercomputer

    Strangely enough, the mass participation of the public in scientific computing began with a project that some scientists believe will never achieve its goal. In 1994, inspired by the 25th anniversary of the moon landing, software designer David Gedye wondered “whether we would ever again see such a singular and positive event,” in which people across the world join in wonder. Perhaps the only thing that could have that effect, thought Gedye, now based in Seattle, Washington, would be the discovery of extraterrestrial intelligence. And after teaming up with David Anderson, his former computer science professor at the University of California, Berkeley, and Woody Sullivan, a science historian at the University of Washington, Seattle, he had an idea how to work toward such an event: Call on the public to get involved with the ongoing Search for Extraterrestrial Intelligence (SETI) project.

    Strength in numbers.

    Millions of computers now crunch data for diverse research projects.


    In a nutshell, SETI enthusiasts argue that we have nothing to lose and everything to gain by scanning electromagnetic radiation such as radio waves—the most efficient method of interstellar communication we know of—from around the galaxy to see if anyone out there is broadcasting. After the idea for SETI was born in 1959, the limiting factor at first was convincing radio astronomy observatories to donate their help. But by the mid-1990s, several SETI projects had secured observing time, heralding a new problem: how to deal with the huge volume of data. One Berkeley SETI project, called SERENDIP, uses the Arecibo Observatory in Puerto Rico, the largest radio telescope in the world, to passively scan the sky around the clock, listening to 168 million radio frequencies at once. Analyzing this data would require full-time use of the Yokohama Earth Simulator, working at its top speed of 35 teraFLOPS (1 teraFLOPS = 10¹² calculations per second).

    Gedye and his friends approached the director of SERENDIP, Berkeley astronomer Daniel Werthimer, and posed this idea: Instead of using one supercomputer, why not break the problem down into millions of small tasks and then solve those on a million small computers running at the same time? This approach, known as distributed computing, had been around since the early 1980s, but most efforts had been limited to a few hundred machines within a single university. Why not expand this to include the millions of personal computers (PCs) connected to the Internet? The average PC spends most of its time idle, and even when in use most of its computing power goes untapped.
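    The work-unit model sketched above fits in a few lines of Python. All the names here are illustrative, and the "clients" run in one process rather than on a million PCs, but the structure is the same: a server splits the data into small independent chunks, volunteers crunch whichever chunk they fetch next, and results are reassembled in any order.

```python
from queue import Queue

def make_work_units(data, size):
    """Split a long data stream into independent, bite-sized tasks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def crunch(unit):
    """Stand-in for signal analysis: report the chunk's strongest sample."""
    return max(unit)

# Server side: queue up the work units.
data = [3, 1, 41, 5, 9, 2, 65, 3, 58, 9, 7, 9]
tasks = Queue()
for unit_id, unit in enumerate(make_work_units(data, 4)):
    tasks.put((unit_id, unit))

# Client side: each idle PC pulls a unit, crunches it, sends a result back.
results = {}
while not tasks.empty():
    unit_id, unit = tasks.get()
    results[unit_id] = crunch(unit)

# Completion order doesn't matter; the server reassembles by unit id.
print([results[i] for i in sorted(results)])  # [41, 65, 58]
```

    The approach works only because each chunk can be analyzed without knowing anything about the others, which is exactly the property Stainforth later calls "crunchable."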

    The idea of exploiting spare capacity on PCs was not a new one. Fueled by friendly competition among hackers, as well as cash prizes from a computer security company, thousands of people were already using their PCs to help solve mathematical problems. A trailblazer among these efforts was GIMPS, the Great Internet Mersenne Prime Search, named after the 17th century French monk who studied a special class of enormous numbers that take the form 2^P − 1 (where P is a prime). GIMPS founder George Woltman, a programmer in Florida, and Scott Kurowski, a programmer in California, automated the process and put a freely downloadable program on the Internet. The program allowed PCs to receive a task from the GIMPS server, “crunch” on it in the background, and send the results back without the PC user even noticing.
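    The arithmetic core of the Mersenne hunt is the Lucas-Lehmer test, which decides whether 2^P − 1 is prime using only P − 2 squarings modulo the candidate. A minimal, unoptimized Python version is below; the actual GIMPS client uses heavily tuned FFT-based multiplication to handle exponents in the millions.

```python
def lucas_lehmer(p):
    """Return True if 2**p - 1 is prime (p must itself be an odd prime)."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m   # one squaring per iteration, p - 2 in all
    return s == 0

# The first few odd prime exponents, filtered to those giving Mersenne primes:
print([p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)])
# [3, 5, 7, 13, 17, 19]  (2**11 - 1 = 2047 = 23 * 89 is composite)
```

    Because each exponent is tested independently, the search divides naturally into the self-contained work units that distributed computing needs.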

    Using computer time in this way is not always a blameless activity. In 1999, system administrator David McOwen marshaled hundreds of computers at DeKalb Technical College in Clarkston, Georgia, to crunch prime numbers with a program from a distributed network—but without getting permission. When found out, he was arrested and accused of costing the college more than $400,000 in lost bandwidth time. But the case never came to court, and McOwen accepted penalties of 80 hours of community service and a $2100 fine. The previous year, computer consultant Aaron Blosser had put the computers of an entire Colorado phone company to work on GIMPS. Because his supervisor had given him permission, he was not charged, but the FBI, which at the time regarded the episode as a potential act of Internet terrorism, confiscated his computers.

    Undaunted, Gedye and his team set about carving up the SETI processing work into bite-sized chunks, and in 1999 the team went public with a screen-saver program called SETI@home. As soon as a PC went idle, the program went to work on 100-second segments of Arecibo radio data automatically downloaded from the Internet, while the screen saver showed images of the signal analysis. It took off like wildfire. Within 1 month, SETI@home was running on 200,000 PCs. By 2001, it had spread to 1 million. Public-resource computing, as Anderson calls it, was born.

    So far at least, SETI@home hasn't found an ET signal, admits Anderson, and the portion of the galaxy searched “is very, very limited.” But the project has already accomplished a great deal: It not only fired up the public imagination, but it also inspired scientists in other fields to turn to the public for help tackling their own computing superproblems.

    Democratizing science?

    Stanford's Pande, who models how proteins fold, was among the first scientists to ride the public-resource computing wave. Proteins are like self-assembling puzzles for which we know all the pieces (the sequence of amino acids in the protein backbone) as well as the final picture (their shape when fully folded), but not what happens in between. It only takes microseconds for a typical protein to fold itself up, but figuring out how it does it is a computing nightmare. Simulating nanosecond slices of folding for a medium-sized protein requires an entire day of calculation on the fastest machines and years to finish the job. Breaking through what Pande calls “the microsecond barrier” would not only help us understand the physical chemistry of normal proteins, but it could also shed light on the many diseases caused by misfolding, such as Alzheimer's, Parkinson's, and Creutzfeldt-Jakob disease.

    A year after SETI@home's debut, Pande's research group released a program called Folding@home. After developing new methods to break the problem down into workable chunks, they crossed their fingers, hoping that enough people would take part. For statistical robustness, identical models with slightly tweaked parameters were doled out in parallel to several different PCs at once, so success hinged on mass participation.

    The simulations flooded back. By the end of its first year, Folding@home had run on 20,000 PCs, the equivalent of 5 million days of calculation. And the effort soon proved its worth. Pande's group used Folding@home to simulate how BBA5, a small protein, would fold into shape starting only from the protein's sequence and the laws of physics. A team led by Martin Gruebele, a biochemist at the University of Illinois, Urbana-Champaign, tested it by comparing with real BBA5. The results, reported in 2002 in Nature, showed that Folding@home got it right. This marks “the first time such a convergence between theory and experiment could be made,” says Pande.

    Public-resource computing now has the feel of a gold rush, with scientists of every stripe prospecting for the bonanza of idle computing time (see table, left). Biological projects dominate so far, with some offering screen savers to help study diseases from AIDS to cancer, or predict the distribution of species on Earth. But other fields are staking their own claims. Three observatories in the United States and Germany trying to detect the fleeting gravitational waves from cataclysmic events in space—a prediction of Einstein's—are doling out their data for public crunching through a screen saver called Einstein@home. Meanwhile, CERN, the European particle physics laboratory near Geneva, Switzerland, is tapping the public to help design a new particle accelerator, the Large Hadron Collider. LHC@home simulates the paths of particles whipping through its bowels.

    The projects launched so far have only scratched the surface of available capacity: Less than 1% of the roughly 300 million idle PCs connected to the Internet have been tapped. But there are limits to public-resource computing that make it impractical for some research. For a project to make good use of the free computing, says Stainforth, “it has to be sexy and crunchable.” The first factor is important for attracting PC owners and persuading them to participate. But the second factor is “absolutely limiting,” he says, because not all computational problems can be broken down into small tasks for thousands of independent PCs. “We may have been lucky to have chosen a model that can be run on a typical PC at all,” Stainforth adds.

    In spite of those limitations, the size and number of public-resource computing projects are growing rapidly. Much of this is thanks to software that Anderson developed and released last year, called Berkeley Open Infrastructure for Network Computing (BOINC). Rather than spending time and money developing their own software, researchers can now use BOINC as a universal template for handling the flow of data. In a single stroke, says Anderson, “this has slashed the cost of creating a public-resource computing project from several hundred thousand dollars to a few tens of thousands.” Plus, BOINC vastly improves the efficiency of the entire community by allowing PCs to serve several research projects at once: When one project needs a breather, another can swoop in rather than leaving the PC idle.
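The multi-project idea can be caricatured in a few lines. This is a hypothetical sketch of the scheduling principle, not BOINC's actual client code; the project names and the `next_project` helper are invented for illustration:

```python
def next_project(projects, has_work):
    """Pick the first attached project that currently has work units,
    so the PC's idle cycles are never wasted."""
    for name in projects:
        if has_work.get(name):
            return name
    return None  # PC goes idle only if *no* attached project has work

# A volunteer's machine attached to three projects; the first is paused.
attached = ["climate", "folding", "seti"]
choice = next_project(attached,
                      {"climate": False, "folding": True, "seti": True})
print(choice)  # → folding
```

Real clients also weigh per-project resource shares and deadlines, but the fallback behavior sketched here is what keeps donated machines busy.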

    It works, too

    As the data streams in from the many projects running simultaneously on this virtual supercomputer, some researchers are getting surprising results. To the initial dismay of CERN researchers, LHC@home occasionally produced very different outputs for the same model, depending on what kind of PC it ran on. But they soon discovered that the discrepancy was caused by “an unexpected mathematical problem,” says François Grey, a physicist at CERN: the lack of international standards for handling rounding errors in functions such as exponential and tangent. Although the differences between PCs were minuscule, they were amplified by the sensitive models of chaotic particle orbits. The glitch was fixed by incorporating new standards for such functions into the program.

    The results of the Oxford climate experiment have been surprising for a different reason. “No one has found fault with the way our simulations were done,” says Stainforth. Instead, climate scientists are shocked by the predictions. Reporting last January in Nature, a team led by Stainforth and Allen found versions of the currently accepted climate model that predict a much wider range of global warming than was thought. Rather than the consensus of a 1.5° to 4.5°C increase in response to a doubling of atmospheric CO2, some simulations run on the Oxford screen saver predict an 11°C increase, which would be catastrophic. Critics argue that such warming is unrealistic because the paleoclimate record has never revealed anything so dramatic, even in response to the largest volcanic eruptions. Stainforth emphasizes that his method does not yet allow him to attach probabilities to the different outcomes. But the upshot, he says, is that “we can't say what level of atmospheric carbon dioxide is safe.” The finding runs counter to recent political efforts to do just that.

    And according to Stainforth, this illustrates something that makes public-resource computing a special asset to science. Rather than a hurdle to be overcome, “public participation is half of the goal.” This is particularly true for a field like climate prediction, in which the public can influence the very system being studied, but it may also be true for less political topics. “We in the SETI community have always felt that we were doing the search not just for ourselves but on behalf of all people,” says Sullivan. What better way to “democratize” science than to have a research group of several million people?


    Grid Sport: Competitive Crunching

    1. John Bohannon*
    1. John Bohannon is a science writer based in Berlin.

    You won't find the names of Jens Seitler, Honza Cholt, John Keck, or Chris Randles among the authors of scientific papers. Nor, for that matter, the names of any of the millions of other people involved with the colossal computing projects that are predicting climate change, simulating how proteins fold, and analyzing cosmic radio data. But without their uncredited help, these projects would be nonstarters.

    In the 6 years since the SETI@home screen-saver program first appeared, scientists have launched dozens of Internet projects that rely on ordinary people's computers to crunch data while the machines sit idle. The result is a virtual computer that dwarfs the top supercomputer in speed and memory by orders of magnitude. The price tag? Nothing. So who are these computer philanthropists? The majority seem to be people who hear about a particular project that piques their interest, download the software, and let it run out of a sense of altruism. Others may not even be aware they are doing it. “I help about a dozen friends with repairs and upgrades to their PCs,” says Christian Diepold, an English literature student from Germany, “and I install the [screen-saver software] as a kind of payment. Sometimes they don't even know it's on there.”

    But roughly half of the data processing contributed to these science projects comes from an entirely different sort of volunteer. They call themselves “crunchers,” and they get their kicks from trying to churn through more data than anyone else. As soon as the projects began publishing data-crunching statistics, competition was inevitable. Teams and rank ladders formed, and per capita crunching has skyrocketed. “I'm addicted to the stats,” admits Michael, a member of a cruncher team called Rebel Alliance. To get a sense of what makes them tick, Science interviewed dozens of crunchers in the Internet chat forums where they socialize.

    Team players.

    Honza Cholt says crunchers have deep discussions about the science.


    Interest in crunching does not appear to correlate strongly with background. For their day jobs, hard-core crunchers are parking lot attendants, chemical engineers, stay-at-home moms and dads, insurance consultants, and even, in at least one case, a miner. Their distribution, like the Internet, is global. What's the motive? People crunch “for a diversity of reasons,” says Randles, a British accountant who moderates one of the project forums, but altruism tops the list. “After losing six friends over the last 2 years to cancer, I jumped at the chance to help,” says an electrician in Virginia who goes by the username JTWill and runs the Find-a-Drug program on his five PCs. As a systems administrator named Josh puts it, “Why let a computer sit idle and waste electricity when you could be contributing to a greater cause?”

    But another driving force is the blatant competition. Michael of Rebel Alliance has recently built a computer from scratch for the sole purpose of full-time crunching, but he says he still can't keep up with Stephen, a systems engineer in Missouri and self-proclaimed “stats junkie” who crunches on 150 computers at once. Without the competition, “it wouldn't be as much fun,” says Tim, a member of Team Anandtech who crunches for Folding@home. And as in any sport, rivalries soon simmer. “Members from different teams drop in on each other's forums and taunt each other a bit,” says Andy Jones, a cruncher in Northern Ireland, “but it's all in good humor.” As Anandtech team member Wiz puts it, “What we have here is community.”

    But where does this leave the science? Do crunchers care how the fruits of their labor are used, or do they leave it all to the researchers? It depends on the project, says Cholt, a sociology student in the Czech Republic, “but the communities that form often have long and deep discussions about the science.” What holds the core of the crunching community together, says Seitler, a computer specialist in Germany, is the chance “for normal people to take part in a multitude of scientific projects.” In some cases, crunchers have even challenged the researchers' published conclusions. “Many scientists would groan at the thought of nonscience graduates questioning their work,” says Randles, but “scrutiny beyond peer review seems an important aspect to science.”

    Far from indifferent, crunchers can become virtual members of the research team, says François Grey, a physicist at CERN, the particle physics lab near Geneva, Switzerland, who helps run LHC@home. Above and beyond donating their computers, “they actually help us solve problems and debug software. And you have to keep them informed about what's going on with the project, or they get upset.” Crunchers might not get credited on papers, says Grey, but “scientists have to treat this community with respect.”


    Data-Bots Chart the Internet

    1. Mark Buchanan*
    1. Mark Buchanan is a writer in Cambridge, U.K.

    It's hard to map the global Internet from a small number of viewpoints. The solution may be to enlist computer users worldwide as local cartographers of cyberspace

    Anyone who has tried to study the twists and turns in the data superhighway knows the problem: It is difficult even to get a decent map of the Internet. Because it grew up in a haphazard fashion with no structure imposed, no one knows how the myriad telephone lines and satellite links weave together its more than 300,000,000 computers. Today's best maps offer a badly distorted picture, incomplete and biased by a U.S. viewpoint, hampering computer scientists' efforts to design software that would make the Internet more stable and less prone to attack. But a new mapping effort may succeed where others have failed. “We want to let the Internet measure itself,” says computer scientist Yuval Shavitt of Tel Aviv University in Israel, who, along with colleagues, hopes to enlist many thousands of volunteers worldwide to take part in the effort.

    At the lowest level, the computers that make up the Internet are known as “routers.” They carry out the basic information housekeeping of the Net, shuttling e-mails and information packets to and fro. At a somewhat higher level, however, the Internet can also be viewed as a network of subnetworks, or “autonomous systems,” each of which corresponds to an Internet service provider or other collection of routers gathered together under a single administration. But how is this network of networks wired up?

    Two years ago, computer scientist Kimberly Claffy and colleagues from the Cooperative Association for Internet Data Analysis at the University of California, San Diego, used a form of Internet “tomography” to find out. They sent out information-gathering packets from 25 computers to probe over 1 million different destinations in the Internet. Along the way, each packet recorded the links along which it moved, thereby tracing out a single path through the Internet—a chain of linked autonomous systems. Putting millions of such paths together, the researchers eventually built up a rough picture of more than 12,000 autonomous systems with more than 35,000 links between them (see figure).
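The path-merging step described above can be sketched in a few lines of Python. The autonomous-system numbers here are made up for illustration; the idea is simply that each traced path contributes its nodes and its consecutive hops, as undirected links, to one growing graph.

```python
# Each probe traces one path: a chain of autonomous-system numbers.
paths = [
    [7018, 3356, 1299, 2914],
    [7018, 3356, 6939],
    [3320, 1299, 2914],
]

nodes, links = set(), set()
for path in paths:
    nodes.update(path)
    for a, b in zip(path, path[1:]):     # consecutive hops on the path
        links.add(frozenset((a, b)))     # undirected link, deduplicated

print(len(nodes), "autonomous systems,", len(links), "links")
# → 6 autonomous systems, 5 links
```

Overlapping paths share links (here, two paths both traverse 7018–3356), so the map grows sublinearly with the number of probes: each new path mostly re-confirms links already seen.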


    Accurate Internet maps could provide users with data traffic reports.


    Through such efforts, researchers now understand that the Internet has a highly skewed structure, with some autonomous systems playing the role of organizing “hubs” that have far more links than most others. But researchers also know that their very best maps are still seriously incomplete.

    The trouble is that all mapping efforts to date have started out from a fairly small number of sites, 50 at the most. So the maps produced tend to be biased by the locations of those sites. From some computer A, for example, researchers can send probing packets out toward computers B and C and thereby learn paths connecting A to B and A to C. But the probes would be unlikely to explore links between B and C, for the same reason that driving from New York to Boston and from New York to Montreal tells one little about the roads between Boston and Montreal. “If you send probes from only a few points, you naturally get a very partial point of view,” says physicist Alessandro Vespignani, an expert on Internet topology at Indiana University, Bloomington.
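The vantage-point bias is easy to demonstrate on a toy network (the node names and `probe_from` helper are hypothetical): shortest-path probes from a single source trace out a tree, so cross-links between destinations, like the Boston–Montreal road, are never discovered.

```python
import collections

# A small network: 4 sites, 5 direct links (including a B-C cross-link).
edges = {frozenset(e) for e in [("A", "B"), ("A", "C"), ("B", "C"),
                                ("B", "D"), ("C", "D")]}
adj = collections.defaultdict(set)
for e in edges:
    a, b = tuple(e)
    adj[a].add(b)
    adj[b].add(a)

def probe_from(src):
    """Links discovered by shortest-path probes from one vantage point.
    Breadth-first search stands in for the probes: it finds a tree."""
    seen, found, frontier = {src}, set(), [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    found.add(frozenset((u, v)))
                    nxt.append(v)
        frontier = nxt
    return found

found = probe_from("A")
print(len(found), "of", len(edges), "links found from one vantage point")
# → 3 of 5 links found from one vantage point
```

Probing from A always misses the B–C link, because B and C are both reached directly from A and no shortest path from A ever crosses between them; only adding vantage points at (or near) B or C reveals it, which is exactly the rationale for distributing thousands of agents.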

    To overcome this problem, Shavitt and colleagues are pioneering a new approach inspired by the idea of distributed computing. Anyone can now download a program from the project's Web site that will help in a global effort to map the Internet. Using no more than a few percent of the host computer's processing power, the program acts as a software agent, sending out probing packets to map local connections in and around the autonomous system in which the computer sits. “What we ask for is not so much processing power but location,” says Shavitt. “We hope that the more places we have presence in, the more accurate our maps will be.”

    Since the project's inception late last year, individuals have downloaded nearly 800 agents that are now working together to map the Internet from 50 nations spread across all the continents. “We've already mapped out about 40,000 links between about 15,000 distinct autonomous systems, and we can already see that the Internet is about 25% denser than it was previously thought to be,” says Shavitt. “This is a great project with a very new perspective,” says Vespignani, who points out that better maps will help Internet administrators in predicting information bottlenecks and other hot spots.

    Shavitt and his colleagues estimate that once they have about 2000 agents operating, it should be possible to get a complete map of the Internet at the autonomous-system level in less than 2 hours. Once they can do that, they hope to provide individual users with local Internet “weather reports.” Ultimately, they would like to map the Internet at the level of individual routers—getting a more detailed map of the physical Internet. “We'll need about 20,000 agents distributed uniformly over the globe to get a good map at that level,” says Shavitt. Then there'll be no excuse for getting lost in cyberspace.