News this Week

Science  12 Apr 2002:
Vol. 296, Issue 5566, pp. 232
  1. CLIMATE CHANGE

    Battle Over IPCC Chair Renews Debate on U.S. Climate Policy

    Andrew Lawler*
    *With reporting by Pallava Bagla and Richard Stone.

    Global organizations rarely reach meaningful consensus. That makes even more remarkable the decade-long success of the Intergovernmental Panel on Climate Change (IPCC) in forging a common position on the science of global warming. But when scientists from around the world meet next week in Geneva to elect a new chair of the organization, that spirit of consensus will be sorely tested.

    The challenge comes from the U.S. government's decision to back an Indian engineer-economist rather than renominate an American atmospheric chemist. That action sets the stage for an international referendum on the Bush Administration's position on climate change.

    Senior researchers around the world fear that the U.S. move is part of a campaign to undermine the scientific credibility of IPCC, whose reports have shaped the global agenda on climate change. White House and State Department officials strenuously deny that charge, noting that they have nominated a respected U.S. scientist to lead a key IPCC working group. They say that the move to replace Robert Watson after one 5-year term (Science, 26 September 1997, p. 1916) is designed to improve relations with India and elevate a researcher from a developing country. Their candidate is Rajendra Pachauri, now vice chair, who has headed New Delhi's private nonprofit Tata Energy Research Institute for 20 years. He was nominated by the Indian government.

    “A lot of governments say they will support me.” —Robert Watson

    CREDIT: EUGENE HOSHIKO/AP

    The U.S. action has alarmed other member nations already irritated with President George W. Bush's rejection of the Kyoto protocol. Representatives from a consortium of European countries as well as Brazil, South Africa, and several island nations say they will support Watson at the Geneva meeting, which begins 17 April. “A lot of governments say they will support me,” says Watson, chief scientist for the World Bank and a top environmental adviser in President Bill Clinton's White House.

    If Watson were reelected, it would be an embarrassing defeat for both the Bush Administration and the Indian government. To avoid a divisive vote, leading delegates are floating a compromise to split the unpaid position between the two men. Watson backs the idea, but Pachauri is having none of it. “I totally reject this proposal,” he says. “Two co-chairs is an unworkable concept except for someone who is desperate to keep the title of chairman in any form.”

    The controversy shines a spotlight on IPCC, set up in 1988 by the World Meteorological Organization and the United Nations to assess the scientific, social, and economic issues related to human-induced climate change. The organization—which includes members from more than 170 countries—pulls together climate data and other information in comprehensive reports painstakingly reviewed and published roughly every 5 years. IPCC has profoundly altered the climate change debate; the 1995 report, for example, led to the 1997 Kyoto protocol in which political leaders acknowledged the need to address global warming.

    Unlike many international bodies, IPCC is small, enormously influential, and mostly run by volunteers. A small Geneva-based bureau, led by a chair and five vice chairs, oversees the panel's work. Working groups examine climate change science, the impacts of climate change, and ways to mitigate and adapt to the problem, including reducing greenhouse gas emissions. Each group has two co-chairs, one from a developed country and one from the developing world, and each report is carefully vetted and then approved by IPCC members. Although each member technically has a vote, the chair typically is elected by acclamation.

    Researchers attribute much of IPCC's scientific credibility to Watson and Bert Bolin of Sweden, the panel's founding chair. “[Watson] has been absolutely extraordinary,” says William Moomaw, a chemist and environmental policy professor at Tufts University in Medford, Massachusetts, who also is a longtime acquaintance of Pachauri. “He's taken on the toughest issues and gotten the best people.” Adds Michael McCracken, a senior scientist with the U.S. global change research program: “[Watson] is up on the science, has the ability to encourage a wide range of information, and knows how to push toward consensus.” A host of other researchers echo that praise. “He's been an impartial and driving force,” says Bolin, who served two terms as IPCC chair.

    “Two co-chairs is an unworkable concept.” —Rajendra Pachauri

    CREDIT: P. BAGLA

    The physical scientists who form the core of IPCC worry that a chair without a track record of research in the field could weaken the organization's reputation. “Without a strong leader, you won't draw the best scientists,” worries James McCarthy, a Harvard University oceanographer who has co-chaired an IPCC working group. But nuclear engineer Tomihiro Taniguchi, head of nuclear safety at the International Atomic Energy Agency (IAEA) in Vienna and a former vice chair of IPCC, says that Pachauri's skills as an economist will be valuable because “the discussion on climate change is moving from the science, which is now well accepted, to the more complex aspects of sustainability.”

    The Bush Administration's support for Pachauri isn't ideological, says State Department deputy spokesperson Philip Reeker. Instead, he says, it's based on his qualifications and the value of having a panel chair from the developing world. Privately, however, Administration officials say that Watson's occasional criticism of the U.S. stance on climate change and his role in the first Clinton Administration made it impossible to renominate him. Watson is also a bête noire to U.S. energy lobbyists. Although Reeker denies that industry played a role in the decision, a February 2001 memo to the White House Council on Environmental Quality from ExxonMobil lobbyist Randy Randol claims that Watson was “handpicked by Al Gore” and should be replaced. The memo was provided to Science by the Natural Resources Defense Council, a New York City-based nonprofit that opposes the Administration's views on global change.

    Pachauri, however, may be less sympathetic to the Bush Administration's stance than Watson is. “I am not a toady of the U.S.,” he says, adding that “I was very critical of the U.S.” for opposing the limits on greenhouse gases laid out in the Kyoto protocol. He also is a strong opponent of concepts favored by developed nations, such as emissions trading. “Free-market solutions will not work,” he says.

    Many researchers see the move as part of a wider campaign by industry and the White House to attack IPCC's credibility. “It is scandalous,” says Princeton University atmospheric scientist Michael Oppenheimer. “This is an invasion of narrow political considerations into a scientific process.”

    But presidential science adviser John Marburger rejects that idea. “There is no evidence of a politically driven conspiracy theory,” says Marburger, who attended several meetings devoted to the IPCC election. As evidence, he cites the U.S. decision to back Susan Solomon, an atmospheric chemist at the National Oceanic and Atmospheric Administration's lab in Boulder, Colorado, as co-chair of the science working group. “That's where the science needs to be focused, and she'll do an excellent job for us,” he adds. Solomon would be the first American to lead that group.

    Climate change scientists will be watching the Bush Administration's every move to judge the accuracy of Marburger's statement. In the meantime, a big part of the job facing the Geneva delegates will be to show that the damage to the usual spirit of consensus can be repaired.

  2. CLIMATE CHANGE

    White House Shakes Up U.S. Program

    Andrew Lawler

    In the midst of a fight over who will lead the international group overseeing climate change research, the Bush Administration is quietly shifting oversight of the U.S. Global Change Research Program (GCRP) from a scientific steering group to the Commerce Department. Some researchers fear that the move could undermine the quality of the $1.7 billion effort.

    The current program was set up in the early 1990s and embraces a half-dozen agencies such as NASA, the National Science Foundation, and the Environmental Protection Agency. An interagency office run by researchers coordinates those various programs. Last June, President George W. Bush urged a rethinking of the effort.

    Bush's science adviser John Marburger and Conrad Lautenbacher, chief of the Commerce Department's National Oceanic and Atmospheric Administration, outlined the new plan at a meeting in Washington on 1 April. According to documents obtained by Science, a new organization called the climate change science program office would be headed by the assistant Commerce secretary for oceans and atmosphere—a political appointee. Meteorologist James Mahoney, most recently president of an environmental consulting firm, was sworn into the Commerce job last week.

    Changing climate.

    President Bush gives bigger role to Commerce's Evans (rear).

    CREDIT: J. SCOTT APPLEWHITE/AP

    The present GCRP would be subsumed under the new organization, and a parallel office for climate change technology would be run out of the Energy Department. Both offices would report to an interagency working group, which in turn would report to a committee chaired by the secretary of Commerce.

    The current structure “is not the right design for producing policy recommendations,” says Marburger, who would manage the committee. Giving Commerce Secretary Don Evans oversight of the program will make it easier to convert research findings into policy recommendations, he says, adding that he expects the move will have only “modest impact” on the research itself. Others, however, worry that the move gives politicians too large a voice. “There is a potential perception that you could be tying science to the politics more closely,” says one of several U.S. government researchers who asked not to be identified. The Commerce Department's main job, he noted, is to promote U.S. business, which typically opposes efforts to reduce greenhouse gases.

    Marburger says Bush is sensitive to these concerns. “The president does not want to disrupt the present research program,” he says, noting a $40 million request in the 2003 budget to fill gaps in areas such as climate modeling.

  3. PRIMATE EVOLUTION

    Gene Activity Clocks Brain's Fast Evolution

    Elizabeth Pennisi

    A team of molecular biologists has taken a stab at defining what makes us human. Its answer: We're set apart from other primates not so much by differences in the makeup of our genes but by relatively recent changes in how active those genes are. Such changes are most dramatic in the brain, where they've occurred at a faster rate in humans than in other primates, report Svante Pääbo of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, and his colleagues on page 340.

    In 1975, geneticist Mary-Claire King and the late biochemist Allan Wilson, both then at the University of California, Berkeley, showed that the sets of proteins (and by extension, the genes encoding the proteins' designs) found in chimpanzees and humans were virtually identical. That left open the question of how these two species came to be so different (Science, 4 September 1998, p. 1432). Wilson suggested then that the key might be differences in gene expression, the rate at which messenger RNA and proteins are made from a gene. At long last, Pääbo and his colleagues have experimental evidence that supports this so-called regulatory hypothesis. Furthermore, notes Lawrence Grossman, a molecular biologist at Wayne State University in Detroit, the work “nicely supports the idea that in primates, the action in evolution [is in] the brain.”

    Pääbo and his team, including the Max Planck Institute's Wolfgang Enard and Philipp Khaitovich, collected brain, liver, and blood samples from humans, chimps, macaques, and orangutans that had died of natural causes. They isolated RNA from each sample and passed it over a gene chip with tags for 12,000 human genes. The more RNA registered for a gene, the greater that gene's activity. In a second experiment, they used a membrane-based array to look at about 6000 additional genes. In each experiment, the researchers studied RNA from chimps, humans, and one of the other primates.
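    In computational terms, the readout amounts to a per-gene intensity comparison between species. The following minimal sketch illustrates that logic with simulated log-intensities; the values, threshold, and variable names are assumptions for illustration, not the team's data or pipeline.

    ```python
    import numpy as np

    # Simulated per-gene log2 hybridization intensities: higher intensity is
    # treated as a proxy for higher gene activity (illustrative values only).
    rng = np.random.default_rng(0)
    n_genes = 12_000  # number of tags on the human gene chip

    human = rng.normal(loc=8.0, scale=1.0, size=n_genes)
    chimp = human + rng.normal(loc=0.0, scale=0.3, size=n_genes)

    # Per-gene log2 ratio: positive means higher activity in humans.
    log_ratio = human - chimp

    # Flag genes whose expression differs by more than twofold (|log2 ratio| > 1).
    diff_expressed = np.abs(log_ratio) > 1.0
    print(f"{diff_expressed.sum()} of {n_genes} genes differ more than twofold")
    ```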

    As expected, the researchers found little difference among the species in the liver and blood samples. But in the brain, the species distinguished themselves. The team detected big differences in gene expression between humans and chimps, whereas gene expression in the chimps' and the other primates' brains was about the same.

    Brainpower.

    Studies may show that rates of gene activity separate humans from chimps, but in this movie matchup, Pierre the Chimp is definitely getting the better of actor Jerry Lewis.

    CREDIT: BETTMANN/CORBIS

    By pairing these results with a look at the primate family tree, the team concluded that sometime in the recent evolution of humans, the human brain began evolving faster than those of other primates—faster even than that of the closest relative of humans, the chimp. Macaques and orangutans, which are more distantly related to chimps and humans than chimps and humans are to each other, helped put these rates into perspective. Because gene expression in chimp brains was similar to that in both macaque and orangutan brains, the big boost in brain evolution occurred after chimps and humans split off from their last common ancestor, the researchers report. “This is the first piece of evidence that humans may have a faster rate” of change in the regulation of gene expression, notes Caro-Beth Stewart, a molecular evolutionist at the State University of New York, Albany.
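    The outgroup reasoning can be made concrete with a standard distance decomposition. The sketch below uses hypothetical expression distances, chosen only to mimic the reported pattern, to show how an outgroup such as the macaque lets one split the human-chimp difference into per-lineage changes; it is not the team's actual analysis.

    ```python
    # With additive distances, the change along each lineage since the
    # human-chimp split separates out via an outgroup (macaque here):
    #   human branch = (d_HC + d_HM - d_CM) / 2
    #   chimp branch = (d_HC + d_CM - d_HM) / 2
    d_hc = 1.0   # hypothetical human vs. chimp brain expression distance (large)
    d_hm = 1.5   # hypothetical human vs. macaque distance
    d_cm = 0.7   # hypothetical chimp vs. macaque distance (chimp resembles outgroup)

    human_branch = (d_hc + d_hm - d_cm) / 2
    chimp_branch = (d_hc + d_cm - d_hm) / 2
    print(f"human lineage change: {human_branch:.2f}")  # 0.90 -> faster evolution
    print(f"chimp lineage change: {chimp_branch:.2f}")  # 0.10
    ```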

    The researchers' next step is to figure out which genes matter. Based on their RNA studies and parallel work measuring protein concentrations, “we have begun to accumulate lists of genes that have changed their expression in human evolution so that we and others can now go and study those genes in detail,” Pääbo explains.

    One inference drawn by Pääbo and his team is prompting some debate. They speculate that the acceleration of changes in gene expression in the brain occurred during recent human evolution, which some anthropologists say could have been as recent as several hundred thousand years ago. But studies of brain morphology in chimps and australopithecines, human ancestors that lived millions of years ago, indicate that the brain had already taken on human characteristics by the time of these early hominids. The changes Pääbo's team sees in gene expression in the brain “could have happened at any time during the course of hominid evolution,” says Ralph Holloway, an anthropologist at Columbia University in New York City.

    Despite the controversy, Pääbo's group deserves a lot of credit for showing that human evolution involves unusually rapid changes in gene expression, says Stewart, who calls the work “an important advance in our thinking.” But others are not surprised that genes are expressed differently in humans than in other primates. As Edwin McConkey, an emeritus molecular biologist at the University of Colorado, Boulder, says, “If no differences had been found, then we should all have to take a course in metaphysics, and religious fundamentalists would be dancing in the streets.”

  4. ARCHAEOLOGY

    Early Cowboys Herded Cattle in Africa

    Erik Stokstad

    Humans and cattle go way back, but the origins of this relationship have been murky. Now on page 336, a team led by Olivier Hanotte, a molecular geneticist at the International Livestock Research Institute (ILRI) in Nairobi, Kenya, and J. Edward Rege of ILRI in Addis Ababa, Ethiopia, shores up the controversial idea that humans domesticated cattle in Africa, not just in Near East Asia as archaeologists have believed for decades. The study also fills in details of how domesticated cattle were herded across the continent, and it could provide insights into how humans learned to produce food.

    Cattle were domesticated several times, archaeological and genetic studies suggest. By 8000 years ago, a humpless sort known as taurine cattle came under the yoke in the Fertile Crescent, in present-day Turkey and neighboring countries. And some 6000 years ago, a kind of humped cattle called zebu was domesticated in the Indus Valley in what is now Pakistan. Rock art in the Sahara depicts taurine cattle, which led researchers to conclude that domesticated cattle appeared in Africa via the Isthmus of Suez, perhaps as much as 7800 years ago, when domesticated sheep and goats arrived from the Near East.

    But in the 1980s, archaeologists Fred Wendorf of Southern Methodist University in Dallas, Texas, and Romuald Schild of the Polish Academy of Sciences in Warsaw began to argue for a controversial idea: Cattle were domesticated independently in northeastern Africa some 10,000 years ago. Many thought that their archaeological evidence—poorly preserved bones—was ambiguous, and the idea languished. In the 1990s, however, further analysis of bone morphology and a series of findings in cattle genetics began to make an African domestication seem more plausible.

    Git along.

    Humpless cattle such as these Kuri (top) were domesticated in Africa and widely depicted in rock art (bottom).

    CREDITS: (TOP TO BOTTOM) D. BRADLEY; F. MARSHALL

    Now Hanotte and colleagues have painted the most complete picture yet of the origins of humans' favorite beast of burden. “This is the first genetic history of cattle throughout the African continent,” says Diane Gifford-Gonzalez of the University of California, Santa Cruz. The researchers started out charting the diversity of African cattle to help set priorities in conservation efforts. Since 1994, they have sampled 50 indigenous cattle breeds in 23 African countries—just about the entire range of cattle in Africa today. Realizing that this could shed light on the origins and migrations of domesticated cattle, Hanotte's group teamed up with Dan Bradley of Trinity College in Dublin, Ireland, who had participated in earlier genetic studies of cattle history.

    The researchers used a statistical technique called principal component analysis to figure out the major genetic trends within current cattle populations. They found three major sources, two of which matched the genetic makeup of the types of cattle known to have been domesticated outside Africa. The genetic signature of the zebu breed was most prominent in cattle in the Horn of Africa. From this, the team concluded that zebu were introduced primarily through sea trade rather than by walking into Africa through Egypt. Cattle populations across northern Africa, in contrast, contained genetic influence from taurine cattle, suggesting that these cows' ancestors did travel by land.
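    For readers unfamiliar with the technique, the following minimal sketch shows principal component analysis applied to a hypothetical breed-by-marker matrix. The data shape and library choice are assumptions for illustration, not the study's actual pipeline (the study sampled 50 indigenous breeds).

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical allele-frequency matrix: one row per breed, one column
    # per genetic marker (illustrative random values).
    rng = np.random.default_rng(1)
    freqs = rng.random((50, 200))  # 50 breeds x 200 marker frequencies

    pca = PCA(n_components=3)
    components = pca.fit_transform(freqs)

    # Each breed's coordinates on the leading components summarize the major
    # genetic trends; they can then be compared with reference zebu and
    # taurine profiles to gauge each influence.
    print(pca.explained_variance_ratio_)  # variance captured by each component
    print(components[:5])                 # first five breeds' coordinates
    ```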

    The third component featured neither zebu nor the Near East's taurine influence. Hanotte's team suspects that it represents a unique domestication of native wild cattle in Africa. This component is most prevalent in southern Africa, however, and because there were no wild cattle in that region thousands of years ago, the animals must have been domesticated elsewhere. Based on analysis of the genetic data, Hanotte's team concludes that the center of this domestication was likely in northeastern Africa; archaeological evidence supports the idea that wild and later domesticated cattle roamed this region. Humans migrating south then herded the domesticated cows through East Africa to their current locations, the team proposes.

    To Fiona Marshall of Washington University in St. Louis, Missouri, the picture reinforces the idea that people living in Africa during the last 10,000 or so years took an unusual path to food production: domesticating livestock before plants. Archaeologists suspect that indigenous populations in the Andes have a similar history; all other human populations are thought to have first tamed plants.

    Now that genetic historians are beginning to accept an independent source of domestic cattle in Africa, the question remains of where the beasts were first tamed. “The article does not prove an earlier independent domestication event in Africa,” says Andrew Smith of the University of Cape Town, South Africa. For that, he wants to see archaeological evidence for African cattle domestication that might place it before the same achievement in Near East Asia.

  5. MEXICAN MAIZE

    Transgene Data Deemed Unconvincing

    Charles C. Mann

    Last week, the Mexican maize wars took a startling new twist. In an apparently unprecedented “editorial note,” published online, Nature declared that, in retrospect, it should not have published a controversial paper that claimed to have detected illegal transgenic maize growing in Mexico. The note accompanied two highly critical letters attacking the paper's conclusions. But the authors of the article, University of California (UC), Berkeley, biologists David Quist and Ignacio Chapela, refused to accept the journal's judgment. Indeed, they claimed that an additional round of tests “confirms our original detection of transgenic DNA.” (The exchange will be printed in a forthcoming issue of Nature.)

    The maize wars began on 29 November 2001, when the Quist-Chapela article appeared—and created an immediate international furor. The two scientists claimed to have discovered transgenic DNA in traditional varieties of maize grown in Oaxaca, one of Mexico's southernmost states. Because southern Mexico is the “center of diversity” for maize—the place where its native gene pool is based—the Mexican government imposed a moratorium on planting genetically modified versions of the crop in 1998. The Quist-Chapela report not only suggested that transgenic corn had been widely planted, it also reported that the foreign DNA appeared in diverse locations within the maize genome—in other words, the transgenes that were spliced into corn plants were able to jump around the chromosomes. Such movement would pose the risk of disrupting the functioning of other genes.

    Insufficient evidence.

    Nature says its paper on transgenes in corn lacks data to justify publication.

    CREDIT: NATURE PUBLISHING GROUP

    Even before the paper was published, Chapela briefed Mexican officials on its contents. In September the Mexican environmental ministry unofficially confirmed the UC Berkeley scientists' findings. Within days Greenpeace demanded that the government ban all transgenic maize (the moratorium covers only planting maize, not selling or eating it), “develop an emergency plan” for “de-contamination” of Oaxaca, and sue all companies responsible for “transgenetic organisms.” Headlines about the “Mexican maize scandal” appeared worldwide. As the media pressure mounted, the Mexican Congress unanimously demanded in December that President Vicente Fox forbid the import of transgenic maize.

    To identify transgenic DNA, Quist and Chapela had used the polymerase chain reaction—a standard procedure, but one that is prone to false positives. Almost immediately, other molecular biologists wrote critical letters to Nature. “I knew as soon as I read the paper that something was wrong,” says biologist Wayne Parrott of the University of Georgia in Athens. Even greater skepticism greeted the report of transgenic instability. “Nobody has ever observed anything like it in years of working with corn,” says UC Berkeley biologist Peggy Lemaux. These and other criticisms are spelled out in the two letters Nature is publishing.

    In a highly unusual move, Nature asked Chapela and Quist to come up with further data to “prove beyond a reasonable doubt that transgenes have indeed become integrated into the maize genome.” Using another technique, “dot blotting,” the two scientists produced data that in their view did just that. But the results did not convince a Nature referee, which led editor Philip Campbell to decide that “the evidence available is not sufficient to justify the publication of the original paper.” Nature is, however, publishing Chapela and Quist's response, including their new data, along with the critical letters, to “allow readers to judge the science for themselves.”

    Surprisingly, all sides agree that transgenic maize is probably growing in Mexico. Thousands of government-subsidized stores sell low-cost staples, including the maize kernels used to make tortillas. Much of the maize is imported from the United States; preliminary government tests indicate that up to 40% is transgenic. Because the kernels can be planted, it is widely assumed that some small farmers have done so. In consequence, the dispute is less over the likely presence of transgenic maize than whether Chapela and Quist actually demonstrated it, and whether foreign DNA is as widespread and unstable as they claim.

    Because of the political stakes, the debate has not been purely scientific. Chapela has charged that some of the criticism was fomented by biotech firms that feared the discovery would derail plans to end the European Union's de facto ban on agricultural biotechnology. On 19 February the Institute for Food and Development Policy (Food First) released a letter from 140 groups decrying “the use of intimidatory tactics to silence potentially ‘dissident’ scientists.” Three days later, more than 100 scientists responded with a statement “in support of scientific discourse” (Science, 1 March, p. 1617).

    Unsurprisingly, the latest exchange hasn't ended the dispute. The Competitive Enterprise Institute, a pro-market advocacy group in Washington, D.C., hailed the reversal as proving that “antibiotechnology activists often rely on faulty data.” Meanwhile, the antibiotech ETC Group charged that Nature's “flip-flop” is “just an obfuscation of the real issue … that a Centre of Crop Genetic Diversity has been contaminated, and no one is doing anything about it.”

  6. U.S. EXPORT CONTROLS

    Rules Eased on Satellite Projects

    Andrew Lawler

    The U.S. State Department last week loosened its export rules on scientific satellite projects and told the university community that those regulations aren't intended to stifle scientific research. Researchers, who have campaigned for 3 years to ease the irksome restrictions, say that they are encouraged by this move but that their work isn't finished.

    “This is a big deal, but it doesn't solve the problem fully,” says Claude Canizares, an astrophysicist at the Massachusetts Institute of Technology (MIT). Researchers say that the new rules are fuzzy about collaborative work abroad, don't address cooperative efforts with industry, and will lead to discrimination against graduate students from outside Europe and Japan.

    The regulations followed a series of scandals in the late 1990s involving the alleged transfer of sensitive U.S. satellite technology to China (Science, 24 March 2000, p. 2138). In response, the State Department and agencies that fund academic research tightened oversight of research satellite efforts. Canadians became the only non-U.S. researchers allowed to work on such projects without U.S. government approval, and exports to even friendly nations required licensing. Outraged U.S. researchers complained that the rules hindered the contributions of foreign-born graduate students and non-U.S. universities.

    Hands on.

    New satellite rules make room for foreign scientists.

    CREDIT: STANFORD UNIVERSITY, GRAVITY PROBE B

    Under the new rules, students or scientists from Canada, Europe, Japan, and a few other U.S. allies may participate in most satellite projects without licenses. But some scientists say that the change, although welcome, could divide students into those from friendly nations and those considered untrustworthy. “Any university worth its salt will not do this,” says Eugene Skolnikoff, an MIT political scientist who has closely monitored the regulations.

    The new rules also will allow shipments of nonsensitive technology to a friendly nation without a license. But it's not clear whether the government will hold U.S. researchers responsible for blocking access by citizens of countries not considered U.S. allies. “There's just no way to control the other end,” says Canizares. Skolnikoff adds, “It's simply unworkable.” Universities are still puzzled about how to manage their increasing collaboration with industry, which comes under related but different rules.

    With export-control officials worried that unfriendly countries will still try to get their hands on sensors or radiation-hardened components, further loosening of the rules seems unlikely. “[The rules] will make life easier for universities, even if they don't give them 100% of what they want,” says one Administration official. At the same time, thankful researchers don't want to complain too loudly about not having all their wishes for fewer restrictions granted. The Administration, they note, has made a strong and public first step. Says Skolnikoff: “This tells the bureaucracy that this is important.”

  7. EMBRYONIC STEM CELLS

    Australian Agreement Allows New Lines

    Leigh Dayton*
    *Leigh Dayton writes from Sydney.

    SYDNEY—Australian researchers are relieved that it's not worse, although many wish it were better. Last week federal, state, and territory leaders attempted to resolve a raucous national debate over the use of human embryonic stem (ES) cells by agreeing to allow some research to continue under a strict regulatory regime.

    The proposed legislation, to be introduced in June, would not only allow scientists to work with ES cell lines that have already been established but would also permit them to derive new cell lines from surplus in vitro fertilization (IVF) embryos created before 5 April that would otherwise be destroyed. The rules would, however, prohibit all forms of cloning, including so-called therapeutic cloning: the transplantation of a nucleus from an adult cell into an enucleated egg to derive ES cells for tissue engineering. The technique, which is still a long way off, holds the promise of producing tissue that is genetically matched to a patient. An ethics committee would be established to review protocols, and the National Health and Medical Research Council would report within 12 months on the adequacy of the supply and distribution of embryos. The provisions on IVF embryos would expire after 3 years.

    Half-full glass.

    Monash University's Alan Trounson (left) and Martin Pera say that the new agreement permits derivation of new ES cell lines.

    CREDIT: JASON SOUTH

    The new rules are more flexible than the conditions imposed on federally funded U.S. researchers, who can use ES cells only from cell lines created before 9 August 2001 (Science, 17 August 2001, p. 1242). Australian researchers estimate that some 70,000 frozen embryos are potentially available, although the agreement says that donors must give their permission before the embryos can be used. “This is very good news for researchers who are working to cure diseases and save lives,” says Bob Carr, the premier of New South Wales and an outspoken supporter of research involving ES cells. “It means that research can go ahead with a minimum of inhibitions.”

    The legislation would reconcile what until now has been a patchwork of state and territory rules. “Getting a national consensus is terrific,” comments John White of the Australian Academy of Science. “But let's take the next step to enable [therapeutic cloning] to follow.” It's also a compromise between research advocates, who wanted greater freedom, and conservative politicians and religious leaders, who sought a ban on all embryo research. An “Open Letter” on 2 April from 80 prominent critics in Melbourne's newspaper The Age, for example, branded therapeutic cloning as “the manufacture of a new race of laboratory humans.” In September 2001, a parliamentary committee recommended a delay in drawing up any rules, but in the following months its chair, Minister of Ageing Kevin Andrews, led a campaign to stop all such research (Science, 1 March, p. 1619).

    Martin Pera of Monash University's Centre for Early Human Development says that the new agreement allows him and his colleagues to keep their Melbourne lab intact (Science, 8 March, p. 1818). “We'll be able to derive new cell lines to support research elsewhere and also in Australia,” he says. Steve Bracks, premier of Victoria state, where Monash is located, calls the agreement “a victory for common sense.”

    Others are less sanguine. Paul Simmons, who works with adult stem cells at the Peter MacCallum Cancer Institute in Melbourne, says that Australian scientists and clinicians will be “disadvantaged” compared to groups in nations such as the United Kingdom and China that allow work on ES cells for developing new therapies. “We'll be put out of the game for a period of time,” he says. “How do you compete?”

  8. ASTRONOMY

    If It Quarks Like a Star, It Must Be ... Strange?

    Charles Seife

    Astronomers may have discovered two of the strangest objects in the universe. Observations by the orbiting Chandra X-ray Observatory imply that stars named RXJ1856 and 3C58 are too small to be familiar neutron stars but might instead be a more exotic breed composed of degenerate quark matter. If so, the two would be the first credible examples of so-called strange stars, presenting theorists with a chance to pin down some of the properties of exotic matter.

    “It's a very big ‘if’ right now,” says Michael Turner, a cosmologist at the University of Chicago. “But this could tell us a lot about the mass of the strange quark—it could tell us a lot about quantum chromodynamics.”

    A strange star, also known as a quark star, is the last incarnation of a medium-mass sun. (The heaviest stars become black holes.) When a star dies, it collapses under the influence of its own gravity. If the dead star is more than about 1.44 times the mass of the sun, its gravity squeezes together electrons and protons in the stellar material, forming neutrons. At still greater masses, in theory, neutrons might break down into their component quarks. Under enough pressure, half of the neutrons' “down” quarks might turn into strange quarks, creating a more compact type of matter. As Science went to press, NASA was planning to announce the possible discovery of two such strange star candidates.

    Odd ball.

    Born in a supernova's blast, 3C58 seems too cool to be made of normal matter.

    CREDIT: P. SLANE ET AL./CXC/CFA/NASA

    The first, RXJ1856, is a neutron star about 400 light-years away in the constellation Corona Australis. When Jeremy Drake of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, and colleagues analyzed the light coming from the star, they were able to figure out its temperature—information that reveals how many x-ray photons should come off a hot body of any given size. Thus, Chandra's measurement of x-ray brightness reveals how big the star is. And that's the rub. “It's about 50% smaller than the range of sizes neutron stars can be,” says Drake. Such dense matter, theorists believe, could exist within a strange star—and nowhere else that they can easily imagine.
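    The size argument rests on the blackbody relation L = 4πR²σT⁴: a measured temperature plus an observed luminosity pins down the radius. The sketch below works through that arithmetic with round illustrative numbers, not the actual Chandra values.

    ```python
    import math

    # Stefan-Boltzmann constant, W m^-2 K^-4.
    SIGMA = 5.670374419e-8

    def blackbody_radius(luminosity_w: float, temperature_k: float) -> float:
        """Radius in meters implied by a blackbody of given luminosity and
        temperature, from L = 4*pi*R^2 * sigma * T^4 solved for R."""
        return math.sqrt(luminosity_w / (4 * math.pi * SIGMA * temperature_k**4))

    # A hot but dim X-ray source: ~1e25 W at ~7e5 K (illustrative values).
    r = blackbody_radius(1e25, 7e5)
    print(f"implied radius: {r / 1000:.1f} km")  # ~7.6 km, well under a typical
                                                 # neutron star's ~10-14 km
    ```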

    The second star, 3C58, is about 10,000 light-years away in the constellation Cassiopeia. Born in a supernova explosion that Chinese and Japanese sky-watchers noted in August 1181, the star has cooled faster than a neutron star should. “It's too low by a factor of 2 in temperature and a factor of 16 in luminosity,” says David Helfand, an astronomer at Columbia University and a member of the Chandra observation team.

    Although both measurements are solid, the interpretations may not be. The too-small star, RXJ1856, might be bigger than calculated if a so-far-undetected hot spot on the star's surface has messed up the calculation of size based upon brightness by making it appear too hot. The too-cool star, on the other hand, could be a neutron star after all if theorists have underestimated the cooling rate of dense neutron matter, a calculation that no one has been able to test in detail. “It's possible that there are other, more prosaic explanations,” says Helfand. “I'd like to see other examples [of strange stars] and reduce the chance of an unfortunate geometric conspiracy.”

    If these two candidates are indeed strange stars, they should help astronomers better understand the nature of subatomic particles. “You can't produce huge chunks of matter at nuclear densities in the lab,” says Turner. “There are big uncertainties here, but you take what you can get.”

  9. GENOME CANADA

    New Awards Bolster Canada's Global Role

    Wayne Kondro*
    *Wayne Kondro writes from Ottawa.

    OTTAWA—When geneticist Tom Hudson of McGill University in Montreal learned last week that he would receive $9.5 million to finance Canada's 10% stake in a proposed international research consortium, he wondered for a moment whether he was still living in Canada. “It's unbelievable. This is going to be one of the most high-profile genome projects in the world, and we're the first group funded,” enthused Hudson, the director of the Montreal Genomics Centre.

    Hudson will be participating in a project to help researchers refine their search for genes implicated in diseases by mapping long stretches of DNA called haplotypes (Science, 27 July 2001, p. 583). It's one of 34 projects funded last week by Genome Canada, a nonprofit agency created 2 years ago to boost Canada's capacity in areas of genomics and proteomics likely to benefit key industrial sectors such as health, agriculture, forestry, and fisheries, as well as the environment (see graphic). The agency has raised a total of $400 million from federal, provincial, and industry sources. Combined with an earlier round of awards (Science, 13 April 2001, p. 186), the $195 million committed last week will buy Canada a prominent place in a host of international research consortia, says Genome Canada president Martin Godbout.

    A healthy lead.

    Health-related research tops Genome Canada's agenda, followed by other economically important sectors.

    SOURCE: GENOME CANADA

    Hudson won't know which chromosome his group will be mapping until his expected partners, including the Whitehead Institute for Biomedical Research/MIT Center for Genome Research in Cambridge, Massachusetts, and the Sanger Centre in Hinxton, U.K., line up funding from U.S. and U.K. sources. But it's a heady experience to be leading the pack. Except for a few financially modest individual efforts, Canadian scientists sat on the sidelines during the torrid race to sequence the human genome and lamented government cutbacks that tied their hands.

    The new funding will allow Canada to move ahead on several fronts. In addition to the haplotype map, the second group of awards includes $15.75 million for a massive public database on protein interactions (Science, 8 June 2001, p. 1813), $4 million for Marco Marra and Steven Jones of the British Columbia Cancer Research Centre in Vancouver to study the regulatory elements of gene expression, and $6.3 million for University of Calgary, Alberta, molecular biologist Christoph Sensen to develop a new software program for analyzing genomics data.

    The money will also let Canada carry its weight in international circles, says Godbout. A grant of $6.73 million was awarded to molecular biologist David Baillie of Simon Fraser University in Burnaby, British Columbia, to determine protein function in the soil nematode Caenorhabditis elegans, and microbiologist Sherif Abou Elela of the University of Sherbrooke, Quebec, received $3.75 million to test modified nucleic acid technologies in determining gene function. Both projects will be done jointly with the Karolinska Institute in Stockholm. Genome Canada is negotiating with two other nations to build a consortium to map the genome of the potato, Godbout says, and with Norway to develop a consortium in fisheries. Negotiations are nearly complete on collaborative agreements with the Netherlands and Spain under which scientists from those countries will compete for funding on topics of mutual interest. Talks have also been launched with Germany, Japan, France, and the United Kingdom. “Our vision for the next 5 years will be focused on top-down strategic initiatives,” Godbout says. “But we had to first build up the base.”

    An analysis of published papers conducted for the government showed Canada clinging to the third tier, along with Italy, Australia, and Switzerland, while the United States led the way and the United Kingdom, Japan, Germany, and France were bunched in second place. The investments by Genome Canada should help it move up the ladder, says Francis Collins, director of the U.S. National Human Genome Research Institute, which has recently announced a $32 million competition to work on the haplotype map. “Until Genome Canada, Canada did not have available the kind of funding capabilities that make it possible to be a player on the big stage,” Collins says. The money has been especially useful in providing world-class facilities and equipment, adds Thomas Caskey, president of Houston's Cogene Biotech Ventures Ltd. and head of the 32-member international peer-review panel that waded through the $1.1 billion worth of applications in the second round.

  10. TOXICOLOGY

    Fruit Bats Linked to Mystery Disease

    Richard Stone

    Like many scientists who have spent time on Guam, Paul Alan Cox was intrigued by a mysterious malady that stalks the South Pacific island. Victims of the disease that the indigenous Chamorros call “lytico-bodig” may become paralyzed, develop tremors and move sluggishly, or slide into dementia. Unmasking the cause of this invariably fatal neurodegenerative disorder could offer insights into major killers such as Parkinson's and Alzheimer's. But lytico-bodig is dying out—and threatening to take its secrets to the grave. “I felt we were missing a chance to solve an enigma,” says Cox, an ethnobotanist who runs the National Tropical Botanical Garden in Kalaheo, Hawaii. Now, he has fashioned a provocative hypothesis around a pair of remarkable coincidences.

    In the 26 March issue of Neurology, Cox and neurologist Oliver Sacks correlate a sharp rise in flying fox consumption among the Chamorros after World War II with a presumed crest in the disease. They also found that much of the decline in lytico-bodig happened as flying foxes were hunted nearly to oblivion. Cox and Sacks, of the Albert Einstein College of Medicine in New York City, speculate that Guam's flying foxes may be biological weapons with wings, chock-full of neurotoxins accumulated in their tissues from a favorite food: cycad seeds.

    The perplexing Guam disorder first intrigued scientists in the early 1950s, when U.S. investigators reported that the Chamorro population was afflicted with a form of amyotrophic lateral sclerosis (ALS), or Lou Gehrig's disease, at a rate roughly 100 times the global average. Some patients displayed the tremors and rigidity of parkinsonism often coupled with an Alzheimer-like dementia—a unique malady called parkinsonism-dementia complex (PDC). Eventually, researchers concluded that the disparate symptoms were all part of the same disease, lytico-bodig. The U.S. National Institutes of Health set up a center on Guam in 1956 to search for a cause.

    Nearly a half-century later, the culprit remains elusive, with suspects ranging from faulty genes to mineral deficiencies and parasites to neurotoxins. Only a handful of Guamanians born after 1960 are known to have developed the disease, and according to neurologist John Steele of the University of Guam, the principal manifestation now is dementia in women late in life.

    Recipe for disaster?

    Guam's flying foxes, like this one prepared in coconut cream for a traditional Chamorro feast, may be laced with cycad neurotoxins.

    CREDIT: MERLIN TUTTLE/BAT CONSERVATION INTERNATIONAL

    Cycad seeds have long been among the suspects. In the traditional Chamorro diet, the seeds of Cycas rumphii Miquel, a tree native to Guam, are ground into flour for a kind of tortilla. The Chamorros know that the seeds are toxic and rinse the flour several times to remove the poison. In the lean years after World War II, however, cycad tortillas were a staple of the Chamorro diet, and researchers have speculated that the large quantities ingested, coupled with incomplete detoxification of the flour, might have delivered enough cycad neurotoxin to trigger ALS-PDC. But lab animals fed the most likely cycad neurotoxin, cycasin, fail to develop an illness resembling ALS-PDC.

    “I was puzzled like everybody else,” says Cox, “but thought there might be something that ethnobotany could bring to the table.” He learned that the Chamorros relish the meat of the flying fox, a kind of fruit bat that on Guam subsists largely on cycad seeds. Cox noted that after World War II, hunting with firearms largely replaced the traditional technique of snaring the animals in thorny vines. The shooting coincided with an apparent sharp increase in ALS-PDC. The hunting took a heavy toll: One of Guam's two species of flying foxes had vanished by the mid-1970s, and the other had dwindled to fewer than 100 individuals. As the bats grew scarce, so did cases of the disease. Last year Cox met Sacks in New York and sketched out his scenario. “My first thought was that it was charming, ingenious—and unlikely,” says Sacks. But Sacks eventually agreed that at least part of the fatal disease's decline could be tied to the demise of a furry flying alembic for neurotoxins.

    Some veteran ALS-PDC researchers, however, view the hypothesis as speculation built on a shaky foundation. “If we knew that cycad was the cause of the disease, the paper would be enticing,” says Ralph Garruto, a biomedical anthropologist at the State University of New York, Binghamton. Peter Spencer, a neurotoxicologist at the Oregon Health and Science University in Portland, argues that if cycasin was indeed the culprit, the bat theory won't fly: The toxin is soluble in water and thus would not have built up in flying fox tissue. And preliminary inquiries by researchers on Guam suggest that some ALS-PDC patients may never have consumed flying fox.

    Sacks says that the flying fox hypothesis was not conceived as an “exclusive cause” for ALS-PDC. He points to unpublished data on a fat-soluble neurotoxin in cycad that may be a candidate for biomagnification. If that were to pan out, “a 1-pound bat is as good as half a ton of seeds,” he says.

    Cox's group has launched a feeding trial with a common species of flying fox in American Samoa. “We'll see what does bioaccumulate,” he says, and compare the toxicological profile to decades-old archival tissue from flying foxes taken on Guam. Whatever they find, notes Spencer, “it's essential that work continue on this tremendously important disease.” Time is clearly running out.

  11. CARDIOVASCULAR DISEASE

    Does Inflammation Cut to the Heart of the Matter?

    Gary Taubes

    A growing body of evidence suggests that a molecular marker for inflammation may be as crucial as cholesterol in assessing risk of heart attack

    In the battle against heart disease, cholesterol has long held the title of molecular root of all evil. Now there's a competitor, or at least a co-conspirator.

    The new molecule, known as C-reactive protein (CRP), is a marker of inflammation that has emerged alongside cholesterol deposition and clogged arteries as a significant factor in understanding heart disease. “What we've discovered is the commonality of inflammation and arteriosclerosis,” says pathologist Michael Gimbrone of Harvard Medical School in Boston. “The very same cells and molecules that mediate inflammation and the response to pathogens and trauma are integral parts of the arteriosclerotic process.” That discovery, the result of considerable work over the past decade, has spawned more questions, starting with the most fundamental one: Does the inflammation signaled by rising CRP levels help drive heart disease, or is it merely a byproduct of the disease? The answer could greatly affect the standard of care for the disease, from early diagnosis through treatment.

    CRP was discovered in 1929 in the Rockefeller University laboratory of Oswald Avery. Barely measurable in the blood of healthy individuals, CRP is pumped out by the liver in much larger quantities during the early acute phases of disease or trauma. It “closely parallels the clinical course” of disease, Avery wrote in 1941. Since then, researchers have discovered that CRP is synthesized in the liver in response to interleukin-6, which is released by areas of inflammation. As a result, says Gimbrone, the amount of CRP can be considered “a total-body integrated readout of inflammation.”

    The role of inflammation in heart disease is only now becoming clear, as data accumulate suggesting that an overly aggressive response to inflammation is as critical to heart disease as high levels of serum cholesterol. This is a novel concept for a generation of biologists and cardiovascular disease specialists taught in their youth that arteriosclerosis was purely a process of lipid deposition. The more cholesterol in your blood, the argument went, the greater the blockage of your arteries, like sludge accumulating in a rusty pipe, and the greater the chance that a total block would lead to a heart attack. The arteries themselves played only a passive role.

    Inflammatory statements.

    In the complex interplay of inflammatory signals, even endothelial cells are active participants.

    ILLUSTRATION: P. LIBBY/HARVARD MEDICAL SCHOOL

    “We thought about the process as a mechanical thing,” says Harvard Medical School pathologist Nader Rifai. “But the puzzling thing was why some people who had significant blockage might never have a coronary event, whereas others with very little blockage would have a massive heart attack. Clearly there was more to the issue than just cholesterol blocking the blood flow.” Indeed, high cholesterol levels alone predict at most half of all heart attacks. Now CRP and inflammation, say researchers, may be the variables that were missing in the heart disease-cholesterol equation.

    The tide started to turn in the mid-1970s, at least in biology laboratories. Researchers, led by Gimbrone and his colleagues at Harvard, discovered how to culture and grow the endothelial cells that line the walls of arteries and veins. They spent a decade or more working out the molecular mechanism by which the endothelium could be activated in response to the kinds of molecules, known as cytokines, that normally induce an inflammatory response. These cytokines would activate genes on the endothelium that would initiate the first stages of arteriosclerosis. They also demonstrated that this process could be turned on by molecular risk factors for arteriosclerosis, such as cholesterol-carrying particles known as lipoproteins. Finally, and perhaps most important, says Harvard cardiologist Peter Libby, they demonstrated that endothelial cells not only respond to inflammatory molecules but generate their own such signals. “So, far from being innocent bystanders,” he says, “the cells of the blood vessel wall could actually participate in the inflammatory response in an active sense.”

    By the early 1990s, Libby, Gimbrone, and others had assembled a picture of arteriosclerosis that involved molecules of inflammation, both cytokines and growth factors, from the moment the first oxidized low-density lipoprotein (LDL) particle adhered to the walls of the blood vessel until the moment, perhaps decades later, when the fibrous cap overlying an arteriosclerotic plaque burst open, causing a blood clot in the artery and a heart attack. “It's all a nice, neat little package,” says Libby.

    While vascular biologists were assembling the pieces of the inflammation-arteriosclerosis puzzle in the laboratory, CRP had developed into the subject of a routine medical assay in Europe and Japan. “It's used to detect the presence of inflammation or infection or to see if the patient is responding to treatment,” says University College London biologist Mark Pepys. “It's a specific marker for infection and inflammation, but not specific to the extent that it will actually tell you what type of infection or inflammation the patient has.”

    Healthy individuals have only trace amounts of CRP in their blood: 90% have levels below 3 milligrams per liter. Once inflammation sets in, however, the levels can skyrocket 1000-fold or more. It's an “amazing response,” says Pepys. “In the vast majority of diseases in which CRP goes up, the CRP measurement reflects very accurately—more so than any other objective measurement—how sick the patient is and how extensive the pathology.”

    In the early 1980s, researchers began to assess whether baseline CRP levels—those below, say, 10 mg/l, in apparently healthy individuals—might have predictive power. In particular, Pepys and Attilio Maseri, who is now at the University Vita-Salute in Milan, looked at CRP levels in patients admitted to a hospital with a heart attack and found that those with higher levels coming in had more dire outcomes. But the assays were insensitive and the research stalled for a decade, while cardiologists concentrated first on the role of cholesterol in heart disease and then on the role of hypercoagulation and thrombosis, or blood clots. Finally, in the early 1990s, Pepys and, independently, Russell Tracy and colleagues at the University of Vermont in Burlington developed CRP assays sensitive enough to measure accurately the very low CRP concentrations typical of healthy individuals. The research was primed to take off.

    Independent predictors.

    Reflecting the underlying inflammatory response, CRP levels predict risk of future heart attacks and strokes among healthy men and women with low as well as high cholesterol levels, according to work by Paul Ridker (top) and colleagues.

    CREDITS: (TOP TO BOTTOM) JON CHASE/HARVARD UNIVERSITY NEWS OFFICE; ADAPTED FROM P. RIDKER ET AL., CIRCULATION 103, 1813 (2001)

    Researchers looked for correlations between a host of inflammatory markers and heart disease. CRP levels, they found, not only were an excellent predictor of heart disease but were better than any of the other inflammatory markers. CRP's major advantage over interleukin-6, cellular adhesion molecules, or tumor necrosis factor, for instance, as Harvard Medical School cardiologist and epidemiologist Paul Ridker quickly discovered, was that it had considerably more “clinical appeal.” Ridker found that virtually all inflammatory markers were elevated among individuals at risk of future heart attacks or strokes, but none of the other molecules were as biologically stable as CRP or had such a wide range of concentrations. Even among healthy individuals, the distribution of CRP levels in the blood spans a considerable range of values—from 0.01 mg/l up to 10 mg/l—making it relatively easy to compare low, medium, and high levels to disease outcomes. Barring a serious infection in the few weeks before the sample is taken, CRP levels also tend to be rock steady. They have neither diurnal nor seasonal variations, and they don't spike in response to food intake.
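    That comparison of low, medium, and high levels is, in essence, a tertile analysis. The following minimal sketch illustrates the idea on simulated data; the numbers and the simulated risk relationship are assumptions for illustration, not results from any of the studies described here.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 9_000

    # Simulated baseline CRP in mg/l (lognormal: skewed, mostly low values),
    # with event probability rising with CRP purely for illustration.
    crp = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    event = rng.random(n) < 0.02 * (1 + np.log1p(crp))

    # Split subjects into CRP tertiles and compare event rates.
    tertiles = np.quantile(crp, [1 / 3, 2 / 3])
    group = np.digitize(crp, tertiles)  # 0 = low, 1 = middle, 2 = high

    for g, label in enumerate(["low", "middle", "high"]):
        rate = event[group == g].mean()
        print(f"{label} CRP tertile: event rate {rate:.3f}")

    # Relative risk of the top vs. bottom tertile.
    rr = event[group == 2].mean() / event[group == 0].mean()
    print(f"relative risk (high vs. low): {rr:.2f}")
    ```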

    Perhaps most important, CRP levels remain stable and measurable for decades in serum samples. Suddenly, says Pepys, “anyone who had conducted epidemiological studies over the years in cardiovascular disease—measuring cholesterol, clotting factors, and everything else—went back to their freezers and started pulling out samples by the thousands and measuring CRP.”

    The floodgates opened in 1996, with a report by Tracy, University of Pittsburgh epidemiologist Lew Kuller, and colleagues showing that baseline CRP levels in smokers with high cholesterol levels tracked with heart disease risk. Then Ridker and his colleagues showed that even in low-risk individuals, CRP levels acted as very potent predictors of risk of first-time heart attacks or strokes—and they did so even when cholesterol levels were low, which is the case in half of all heart attacks. “Suddenly we had an insight into why so many heart attacks and strokes were being missed by cholesterol screening,” says Ridker. “Maybe inflammation, measured by this simple marker, was picking up a huge chunk of those we missed.”

    “The knock-your-socks-off thing about the observation,” says Gimbrone, “is that you can go back and look at CRP in a serum sample from a decade earlier, and it will be predictive in an individual who yesterday had the heart attack.” This was quickly confirmed by Pepys, Wolfgang Koenig, a cardiologist at Germany's University of Ulm, and collaborators, who reported a linear relation between CRP levels and heart disease in a randomly chosen sample of initially healthy middle-aged men.

    Since 1997, the most systematic CRP work has been done by Ridker and his colleagues at Harvard. In a series of high-profile papers, they demonstrated that CRP levels also predicted heart disease risk in healthy women, that they were a more accurate predictor of risk than cholesterol levels, and that cholesterol levels and CRP levels were unrelated. This meant that CRP measurements could add to the predictive value of cholesterol measurements: Individuals with low cholesterol levels but relatively high concentrations of CRP—or vice versa—would still be at high risk of heart disease. Those with high levels of both CRP and cholesterol would be at the highest risk.

    Ridker's group also looked at the effects on CRP levels and heart disease of aspirin, which is known to be an anti-inflammatory agent, and of statins, the cholesterol-lowering medications that significantly reduce heart disease risk. They showed that the magnitude of the heart disease benefits of a daily aspirin regimen was directly related to the CRP level: the higher the CRP levels, the greater the benefit of the aspirin. As for statins, their mode of action had always been mysterious, because their efficacy at reducing heart disease ran ahead of their known effect on cholesterol. Now Ridker and his colleagues have reported that statins work by lowering CRP levels and inflammation as well.

    Last June, Ridker and his colleagues reported in The New England Journal of Medicine that statins reduced heart disease risk even in patients with otherwise healthy cholesterol levels, provided their CRP levels were above normal. “What was extraordinary,” says Ridker, “was that the drug was just as effective in saving lives in the absence of high cholesterol, if the CRP [level] was high. Not only are these drugs ‘anti-inflammatory,’ as well as lipid lowering, but now there's actually clinical evidence to show that perhaps the way we prescribe these drugs needs to be rethought, because people with low cholesterol can still benefit from these drugs if they have an inflammatory response.”

    To complicate the picture, CRP concentrations are associated not just with heart disease but also with a host of variables and risk factors for the disease. For instance, studies have shown that baseline CRP levels are very strongly linked to body mass index. “People lose 10 pounds,” Ridker says, “and their CRP levels go down.” CRP levels are also higher in patients who have type II diabetes or glucose intolerance, a condition related to diabetes. Higher levels of CRP are also associated with syndrome X, also known as metabolic syndrome, which increases the risk of diabetes, heart attack, and stroke and is characterized by excessive abdominal fat; insulin resistance; elevated blood pressure, blood sugar, and triglycerides; and low levels of the beneficial high-density lipoprotein cholesterol. Smokers also have significantly higher CRP levels, as do alcohol drinkers, although as Koenig and his colleagues have reported, the latter association is U-shaped: Heavy drinkers and nondrinkers have higher CRP levels than do those who drink one or two glasses a day. This is interesting, says Koenig, because the association between heart disease rates and alcohol consumption has proven to be U-shaped as well. Hormone replacement therapy seems to double CRP levels in postmenopausal women, which may help explain why the therapy has not staved off heart disease as originally hoped. To top it off, baseline CRP has been shown to increase with age.

    Risky business.

    On average, CRP levels rise with (orange arrows), fall with (green arrows), or are independent of (blue arrows) the incidence of other factors in a population.

    CREDIT: (BOTTOM) ROSS ANANIA/PHOTODISC

    This deluge of epidemiologic associations and clinical possibilities has left researchers struggling to make sense of it all—in particular, says Maseri, “to understand precisely through which mechanism CRP is contributing to cause myocardial infarction.” Or as Pepys puts it, “there's no doubt now that CRP, as a marker of inflammation, is clearly related to arteriosclerotic, thrombotic events in an associative way. But it doesn't tell you anything about causality. Those are the facts; everything after that becomes speculation.”

    Despite the swarm of research surrounding CRP and inflammation, scientists are still debating whether inflammation is a primary event that causes atherosclerosis or a secondary event, perhaps caused by the damage that is initiated by cholesterol deposition on the artery walls or even by smoking or high blood pressure. “CRP is caused by [heart] disease,” says Ernst Schaefer, for instance, who studies lipid metabolism at Tufts University and the U.S. Department of Agriculture's Human Nutrition Research Center on Aging in Boston. “It's not the other way around.”

    But researchers studying CRP and inflammation directly point out that CRP levels do not seem to correlate with the actual extent of arteriosclerotic plaques, which all individuals begin to develop in their late teens. They believe instead that CRP levels reflect an inflammatory response elsewhere in the body, one that in turn causes or exacerbates heart disease. Maseri compares CRP's role to that of fever, another response to infection. “Fever is beneficial because it stimulates the body to respond” by creating an environment inhospitable to the cause of the infection, he says. “But if your temperature goes too high—to 103° or 104° [Fahrenheit; 39.4° or 40°C], then you may die. So the response has to be appropriate to the infection: Too much is bad.”

    Case studies of individuals who are predisposed to malignant coronary arteriosclerosis, who lack particular risk factors but whose CRP levels are “off the map,” suggest one possible mechanism by which CRP levels affect disease, says Ron Krauss, head of the department of molecular medicine at Lawrence Berkeley National Laboratory in California. Multiple coexisting infections, such as chronic bronchitis, chronic prostatitis, or gingivitis, could lead to a low-grade, smoldering inflammatory response that could accelerate or initiate the development of arteriosclerosis and heart disease. The release of cytokines from these other infections prompts cytokine release in the artery wall, turning a relatively benign situation pathological. This “echo phenomenon,” as Harvard's Libby calls it, could be exacerbated in individuals with a high genetic predisposition to inflammatory response. One theory that has not been supported in clinical studies is that the inflammation arises from direct infection by such bacteria as Chlamydia pneumoniae or Helicobacter pylori.

    Many researchers seeking to make sense of these contradictions invoke evolution to explain the role of inflammation in heart disease, although the explanation, like virtually all in evolutionary biology, is difficult if not impossible to test. In prehistory, so the argument goes, most people died from trauma and infections; only the lucky few lived to see 30. As a result, evolution selected individuals with a genetic predisposition for mounting a heightened immune response. In the 20th century, as people started living well past their 50s, arteriosclerosis arose as a byproduct of that heightened immune response. In other words, says Vermont's Tracy, in an evolutionary sense, “we trade short-term benefit for long-term damage. And that's a trade that we're willing to make genetically, because we were never designed to live the long haul.”

    If the inflammation does indeed lead to heart disease and not the other way around, that still leaves open the question of whether CRP is simply a marker of inflammation or has its own pathological actions. In other words, is it an innocent bystander or a perpetrator of disease?

    Over the past few years, biologists have accumulated considerable data suggesting that CRP is indeed a major player in the disease process. They have shown that CRP is present in arteriosclerotic lesions and that it functions as a chemoattractant to lure monocytes to the site. It has also been implicated directly in increasing the expression of adhesion molecules. And it apparently can activate immune system components known as “complement” proteins, which are important mediators of inflammation. What's more, as Pepys and his collaborators demonstrated in the early 1980s, CRP binds specifically to LDL cholesterol, the foamy stuff of arteriosclerotic plaques. There's also strong evidence that CRP can increase the uptake of LDL by macrophages to form foam cells and, although this is still controversial, that CRP can enhance blood clotting. Finally, Pepys and his collaborators have shown that if they put human CRP into rats and then induce a heart attack, the attack is considerably more damaging: the amount of heart muscle killed is 40% greater than without CRP. “CRP is clearly enhancing the size of the infarction in the rat model,” says Pepys.

    CRP is beginning to find its way into clinical medicine. Physicians have started to measure it in patients to assess heart disease risk: President Bush reportedly had his CRP level measured, for instance, and he was told it was fine, says Ridker. Research labs in academia, biotechnology, and the pharmaceutical industry are looking into the possibility of using molecules that inhibit CRP binding to reduce the risk of stroke and heart attack or perhaps to reduce the degree of damage afterward. They are also considering attacking other inflammatory mediators. “We're still left with the challenge of trying to sort out what's really important,” says Libby.

    The clinical payoff could be twofold. On the one hand, if the latest data stand up, says Libby, it means that plenty of asymptomatic individuals with no classical risk factors and low cholesterol levels but high CRP levels—perhaps one in every five Americans—are at high risk of heart attacks and could benefit from treatment, including statin drugs. “Our challenge is to learn how to treat these walking well who can benefit from statin therapy,” says Libby. “We might want to use CRP or other markers of inflammation as a way of targeting therapy to these individuals as primary prevention.” Indeed, in March the American Heart Association, the American College of Cardiology, and the Centers for Disease Control and Prevention co-hosted a meeting in Atlanta to develop clinical guidelines for when and how to use CRP measurements in treating patients.

    The ultimate payoff is likely to come from identifying the ideal targets for inhibiting inflammation. That step, in turn, could eliminate at least some of the damage caused by arteriosclerosis. “We have our work cut out for us for the next dozen years,” says Libby.

  12. MARINE ECOLOGY

    Picturing the Perfect Preserve

    1. David Malakoff

    Computerized tools help marine researchers map reserve networks that can pass ecological and political tests

    NEW YORK CITY—Designing modern marine reserves demands a deft touch. Planners must balance the need to protect fragile marine environments against strong economic and political pressures to mine oceanic riches. It sounds like a job for an experienced diplomat, but ironically, a key tool for dealing with such challenges may instead be a computer.

    A growing number of scientists are turning to new mapping software to help them design networks of marine reserves that are both politically viable and ecologically effective. The programs enable planners to test thousands of possible arrangements for achieving conservation goals, such as preserving fragile coral reefs or shielding vulnerable spawning fish from nets. Just as important, cybermapping may allow reserve advocates to sidestep potentially disastrous political conflicts by flagging areas where a protected zone might draw opposition from anglers or other economic interests.

    Such simulations recently allowed a U.S.-Mexican research team to pinpoint potential trouble spots for a proposed network of reserves in Mexico's Gulf of California. Australian researchers are applying the approach to patches of the Great Barrier Reef. Another group of scholars hopes to build models that will improve the effectiveness of one of the world's first major reserve networks, in the Bahamas.

    “Marine reserve modeling is showing some big improvements over where we were just a few years ago,” says Sandy Andelman of the National Center for Ecological Analysis and Synthesis (NCEAS) in Santa Barbara, California, who helped develop the tools. Their growing popularity, she says, reflects the fact that “there are more possible ways of conserving marine biodiversity than we can picture in our heads.”

    Coral jewel.

    Researchers hope that new cybermaps will help preserve coral reefs and other habitats in the Gulf of California.

    CREDIT: (BOTTOM) E. SALA

    Speaking here last month,* marine ecologist Enric Sala of the Scripps Institution of Oceanography in La Jolla, California, described how the new tools can allow a small scientific team to draft a reserve plan for a large area quickly and relatively cheaply. Sala's target was the shallow water habitats of the Gulf of California, a 150,000-square-kilometer slice of water wedged between Mexico's west coast and the Baja Peninsula. When the project began in 1999, Sala says, researchers had little information about the distribution and abundance of the gulf's biological wealth, which is under increasing threat. “We had to start from scratch,” he says.

    To fill the gap, he and two students from Mexico's Autonomous University of Baja California Sur in La Paz made hundreds of dives at 84 spots along the gulf's coast, surveying sea life and documenting habitat types. They also interviewed local fishers for information about the spawning sites of seven economically important species of fish and looked carefully for nursery areas. Back at Sala's lab, another trio of researchers fed the information into a computer model designed to achieve preset goals.

    In this case, Sala's team proposed a network that would protect all coral reefs, sea-grass beds, and known spawning sites, at least 50% of coastal mangroves, and at least 20% of all other habitat types—in a minimal area. To allow sea life to flow from one site to another, they decreed that no reserve should be more than 100 kilometers from the next one in the chain.

    Data dive.

    Researcher Gustavo Paredes takes a sea-life survey used to design a marine reserve network for the Gulf of California.

    CREDIT: E. SALA

    With those rules in place, the software—based on code developed by Ian Ball of the Australian Antarctic Division in Kingston, Tasmania, Hugh Possingham of the University of Queensland in Brisbane, and NCEAS scientists—then spent hours sorting through thousands of possible combinations. The winning map, Sala reported at the meeting, showed that 18 reserves covering just 12% of the marine habitat could do the trick. As a bonus, it protected even more mangroves and other habitats—from sandy bottoms to submerged cliffs—than Sala's rules called for.
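    The article doesn't say exactly how the software searches, but minimum-set reserve design of this kind is commonly attacked with a heuristic such as simulated annealing: score each candidate set of sites by its total area plus stiff penalties for unmet habitat targets, then repeatedly toggle sites in and out, keeping improvements and occasionally accepting setbacks so the search can escape local optima. The sketch below is a minimal illustration under those assumptions only; the site data, penalty weight, and cooling schedule are invented, and the 100-kilometer spacing rule is omitted.

```python
# Minimal sketch of reserve selection by simulated annealing. This is
# NOT the Ball/Possingham code: the site data are random, the penalty
# weight and cooling schedule are invented, and the 100-km spacing rule
# is left out. It only illustrates the kind of search such tools run.
import math
import random

random.seed(1)

HABITATS = ["reef", "seagrass", "mangrove", "sand", "cliff"]
# Targets mirroring the goals in the text: all reefs and sea-grass beds,
# half the mangroves, and at least 20% of everything else.
TARGET_FRACTION = {"reef": 1.0, "seagrass": 1.0, "mangrove": 0.5,
                   "sand": 0.2, "cliff": 0.2}

# Hypothetical survey results: 40 candidate sites, each with an area
# (km^2) and some amount of each habitat type.
sites = [{"area": random.uniform(5, 50),
          "habitat": {h: random.uniform(0, 10) for h in HABITATS}}
         for _ in range(40)]
total = {h: sum(s["habitat"][h] for s in sites) for h in HABITATS}

PENALTY = 1000.0  # cost per unit of unmet habitat target

def score(selected):
    """Total reserved area plus penalties for unmet targets; lower is better."""
    area = sum(sites[i]["area"] for i in selected)
    shortfall = 0.0
    for h in HABITATS:
        held = sum(sites[i]["habitat"][h] for i in selected)
        shortfall += max(0.0, TARGET_FRACTION[h] * total[h] - held)
    return area + PENALTY * shortfall

# Simulated annealing: toggle one random site in or out each step; keep
# improvements, and occasionally keep bad moves while the "temperature"
# is high so the search can escape local optima.
state, temp = set(), 100.0
for _ in range(20000):
    i = random.randrange(len(sites))
    trial = state ^ {i}                     # flip site i's membership
    delta = score(trial) - score(state)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        state = trial
    temp *= 0.9997                          # cool slowly

print(f"{len(state)} of {len(sites)} sites selected, "
      f"{sum(sites[i]['area'] for i in state):.0f} km^2 reserved")
```

    Real tools of this kind also let each site carry a cost term, which is one natural way to fold economic data, such as fishing pressure, into the same search.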

    Sala's team wasn't finished, however. Knowing that reserve plans can founder on opposition from commercial anglers and other interests, it incorporated data on fishing boat activity collected by the World Wildlife Fund (WWF), one of the project's partners. The software identified several potential conflict zones, then reconfigured the network to avoid heavily fished areas but still satisfy the conservation goals, Sala said.

    “It's a really elegant project” that is sure to influence other reserve planning projects, says marine policy expert Liz Lauck of the Wildlife Conservation Society in New York City. Most impressive, says coral specialist Jeremy Jackson of Scripps, is that the job took less than 3 years and cost only $400,000, provided by funders including the Moore Family and Tinker foundations. “It shows how quickly you can gather useful information,” he says.

    How Sala's findings will play in Mexico, however, remains to be seen. WWF and other groups are working with government officials to develop a long-term conservation plan for the gulf, and Sala's work is just one piece of the puzzle. Still, says Juan Carlos Barrera of WWF-Mexico in Hermosillo, Sonora, “the ability to consider social and economic factors along with ecological concerns is very helpful.”

    Other researchers are pursuing similar work. Leanne Fernandes of Australia's Great Barrier Reef Marine Park Authority reported that her agency has turned to related software to help identify a network that will protect 70 “representative” bioregions along the reef. “The idea isn't to come up with the [ecologically sound] solution and [send] it in to the minister but to have a plan that already takes into account the concerns of the many stakeholders,” she says.

    Meanwhile, in the Bahamas, a team led by Dan Brumbaugh of the American Museum of Natural History in New York City hopes to build a dynamic model to finger the shifting social and biological forces that determine a reserve network's fate. Backed by a 5-year, $2.5 million grant from the National Science Foundation's Biocomplexity in the Environment program, Brumbaugh has assembled social and biological scientists from nearly a dozen institutions. A key question they hope to answer is whether networks designed to win community support can work as well over time as those focused on ecological goals.

    The project demonstrates how marine reserve advocates, traditionally biologists, have begun to incorporate economic and social concerns into their thinking, says Brumbaugh. Successful efforts to design and evaluate reserves depend on “finding people who are willing to play nice with each other and overcome disciplinary suspicions,” he adds. And a little silicon-based helper doesn't hurt, either.

    • *“Sustaining Seascapes: The Science and Policy of Marine Resource Management,” American Museum of Natural History, New York City, 7–8 March.

  13. TECHNOLOGY

    Microchips That Never Forget

    1. Adrian Cho*
    1. Adrian Cho is a freelance writer in Boone, North Carolina.

    Magnetic memory promises computers that turn on instantly, smarter gadgets, and a revolution in chip design—if it can elbow its way into the market

    It's 2 a.m. and you're at your computer typing up that 20-page report that's due in 6 hours. You're on your eighth cup of coffee and your fourth candy bar, and just maybe you'll finish in time to take a shower before dashing off to work. You haven't saved your document in hours. Then you accidentally kick the electrical cord and unplug your computer.

    No problem. Plug the cord back into the wall socket, and the machine instantly blinks back to life. It also remembers every t you've crossed and i you've dotted, so you continue to type as if nothing has happened.

    This fanciful scenario could become a reality in the not-too-distant future, thanks to magnetic memory that can store information even when it loses power. The first commercial prototype chips should hit the market within 2 years. But magnetoresistive random access memory (MRAM) may not only allow your computer to turn on and off instantly without forgetting what it was doing; it could also reshape all of electronics.

    The emerging MRAM combines the best features of existing electronic memory technologies, says Saied Tehrani, an electrical engineer at Motorola in Tempe, Arizona. It therefore could potentially replace all of them. “MRAM really has the potential to be a universal memory,” Tehrani says. MRAM bits can also mix with the transistors in standard silicon chips, so the technology could allow chip designers to put an entire computer on a single chip, making portable devices such as cell phones and personal digital assistants far more powerful.

    But it isn't certain that MRAM can topple the reigning champion of computer memory, an electronic technology called dynamic random access memory (DRAM), says Bob Buhrman, a physicist at Cornell University in Ithaca, New York. “I think that if DRAM and MRAM were starting at the same point, MRAM would win,” Buhrman says. But DRAM has a huge head start, he says, and it isn't clear whether MRAM can catch up and compete economically. DRAM may have drawbacks, but chip manufacturers can pump it out for a few tenths of a cent per megabit, and production is a multibillion-dollar industry.

    Researchers, investors, and consumers may soon find out if MRAM can live up to its promise, as major electronics companies are nudging the fledgling technology out of the lab and into production. Motorola plans to introduce a 4-megabit chip by the end of next year, and IBM will introduce a 256-megabit chip in 2004. Meanwhile Honeywell, Hewlett-Packard, and several other companies have their own MRAM programs. “I wouldn't be too surprised if someone gets something out a little quicker” than IBM and Motorola, says Jim Daughton, an electrical engineer at NVE Corp. in Eden Prairie, Minnesota.

    High-tech layer cake

    RAM serves as a scratch pad on which a number-crunching chip, or processor, keeps data and instructions, including its own operating system. Such memory is called “random access” because the processor can reach into any part of it at any time. Most current RAM can't hold information without power, which is why your computer writes everything to the hard drive before it shuts down. The machine must retrieve that information when it turns back on, which is why it comes on slowly.

    Researchers have been trying for decades to produce magnetic random access memory devices that avoid those hassles. In the 1980s, Honeywell developed a technology that exploited the fact that electricity flows most easily through a magnet if the current flows in the direction of the magnetism. But that effect, known as anisotropic magnetoresistance, is small—a few percent—so the memory proved slow, says Daughton, who headed the research. Then in 1988, physicists discovered a bigger effect called giant magnetoresistance (GMR) in films in which a layer of nonmagnetic metal lies between two magnetic layers. The resistance is low when the outer layers are magnetized in the same direction but increases dramatically if the layers are magnetized in opposite directions. However, GMR memory devices tend to be bulky or hard to read because the total resistance of a bit of film is low.

    The new MRAM devices largely grew out of a 5-year effort funded by the Defense Advanced Research Projects Agency (DARPA). In 1996, DARPA began supporting IBM, Motorola, Honeywell, and academic researchers to develop novel magnetic materials and devices. IBM researchers realized that a tiny high-tech layer cake known as a “magnetic tunnel junction” would provide resistance changes up to 30%, even beefier than the GMR effect. A memory device in which every bit consisted of a junction would be faster and more compact than a GMR memory, they reasoned. Motorola also switched to tunnel junctions. (Honeywell, the military, and others continue to develop GMR-based memory, in part because it may better withstand radiation and other hazards.)

    The heart of a magnetic tunnel junction consists of two layers of magnetic material, such as nickel iron and cobalt iron, that sandwich a very thin insulator, typically a layer of aluminum oxide only a few atoms thick. Current flows down through the layers, and thanks to quantum mechanics, it meets less resistance when the two magnetic layers are magnetized in the same direction, and more resistance when they're magnetized in opposite directions. The two states serve to encode a 0 or a 1.

    The difference in resistance arises because a magnetized material essentially carries two unequal but independent currents of electrons, with their spins polarized in opposite directions, says Stuart Parkin, a physicist at IBM's Almaden Research Center in San Jose, California. When the magnetic layers are magnetized in the same direction, the larger current on one side of the insulating layer is polarized in the same direction as the larger current on the other side. That alignment allows electrons in the larger current to quantum-mechanically tunnel into and out of the barrier with relative ease. If the two layers are magnetized in opposite directions, then the larger current in one magnetic layer is polarized in the same direction as the smaller current in the other, and that smaller current simply cannot accommodate all the electrons burrowing through the insulating barrier (see figure). “The majority electrons can tunnel into the barrier from one side very easily,” Parkin says, “but they can't get out the other side.”

    The going gets tough.

    When a junction's magnetizations cross, less current can tunnel through the barrier.
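    For a sense of the numbers, the size of the effect can be estimated with Julliere's model, a textbook relation rather than anything reported in the article: if the two magnetic layers have spin polarizations $P_1$ and $P_2$, the fractional change in resistance between the antiparallel and parallel states is

\[
\mathrm{TMR} \;=\; \frac{R_{\text{antiparallel}} - R_{\text{parallel}}}{R_{\text{parallel}}} \;=\; \frac{2P_1P_2}{1 - P_1P_2}.
\]

    Taking $P_1 = P_2 \approx 0.35$, a plausible value for nickel-iron and cobalt-iron electrodes (an assumption, not a figure from the article), gives a change of about 28%, consistent with the “up to 30%” quoted above.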

    Crucially, the magnetization of a junction's top layer does not change direction too easily. When it points one way, it resists magnetic fields trying to flip it the other way, much as a stiff light switch resists the push of a finger. Only when the fields exceed a certain threshold does the magnetization realign. That's why the memory device retains information when it's turned off.

    To fashion a memory device, millions of junctions are arranged in a square grid with the lower magnetic layers of all the junctions magnetized in the same fixed direction. Parallel conducting stripes called “bit lines” run on top of the junctions, and below the junctions run stripes called “word lines.” Each junction sits at the intersection of a bit line and word line. When currents run through the two lines, they create magnetic fields that add together to flip the magnetism of the junction's upper layer in the desired direction (see figure). To measure the junction's resistance, a smaller current passes from the bit line, through the junction, and out through a transistor.

    At a crossroads.

    In an MRAM device, magnetic fields combine to flip only the bit at the intersection of a particular bit line and word line.

    CREDIT: ADAPTED FROM IBM/ALMADEN RESEARCH CENTER
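    To make that addressing scheme concrete, here is a toy model of it in Python. It is purely illustrative, not any vendor's design; the grid size, field strength, and switching threshold are made-up values chosen so that only the junction where two energized lines cross sees enough combined field to flip.

```python
# Toy model of the crossbar write scheme described above; illustrative
# only, not any vendor's design. Field strengths and the switching
# threshold are made-up numbers chosen so that only the one junction
# where an energized bit line crosses an energized word line flips.
N = 4                  # 4x4 grid of tunnel junctions
H_LINE = 0.6           # field contributed by one energized line (arb. units)
H_SWITCH = 1.0         # field needed to flip a junction's free layer

# 0 or 1 = direction of each junction's free-layer magnetization
memory = [[0] * N for _ in range(N)]

def write_bit(row, col, value):
    """Energize word line `row` and bit line `col`. Cells on only one
    energized line feel H_LINE ("half-selected") and stay put; the cell
    at the intersection feels 2 * H_LINE and switches."""
    for r in range(N):
        for c in range(N):
            field = H_LINE * ((r == row) + (c == col))
            if field >= H_SWITCH:          # true only at the intersection
                memory[r][c] = value

def read_bit(row, col):
    """Reading senses resistance through the junction: parallel layers
    (low resistance) encode one value, antiparallel (high) the other."""
    return memory[row][col]

write_bit(1, 2, 1)
write_bit(3, 0, 1)
for r in memory:
    print(r)                               # only cells (1,2) and (3,0) flip
```

    The same picture underlies the disturbance problem discussed below: every other cell along an energized line sits at half the switching field, so fields and thresholds must be controlled tightly to keep those half-selected bits from flipping.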

    Tunnel junction MRAM boasts a combination of properties that should enable it to take on all of the leading electronic technologies (see sidebar, p. 247). For example, MRAM's ability to retain information without power gives it an edge on the two types of memory used most in computers—DRAM and static random access memory (SRAM)—both of which forget everything as soon as the lights go out. MRAM should also be able to keep pace with the faster SRAM and pack information nearly as densely as the more compact DRAM. On the other hand, MRAM should be much faster and more durable than Flash, a type of electronic memory that can hold its state without power and is used primarily in cell phones and portable devices as well as computers.

    MRAM also possesses several novel properties that give it even greater potential, says Jimmy Zhu, an electrical engineer at Carnegie Mellon University in Pittsburgh, Pennsylvania. For example, MRAM bits can be etched into standard chips. So not only should MRAM allow designers to do away with hard drives, it also should enable them to put a processor and its memory on the same slab of silicon. “You can put a whole computer on a single chip,” Zhu says, and that could mean putting a fully functioning computer inside your cell phone or personal digital assistant.

    MRAM bits might even mix with the transistors of traditional electronics to produce chips that reconfigure themselves as they run, says Bill Black, an electrical engineer at Xilinx Inc. in Ames, Iowa. For example, a tiny circuit that multiplies two numbers at one moment might change on the fly into a circuit that divides two numbers. Such morphing chips could radically alter chip design and even the relation between hardware and software.

    Hurdles ahead

    Before tunnel junction MRAM can live up to that promise, manufacturers must show that they can meet several technical challenges while cranking out loads of chips with few failures and at low cost, as both Motorola and IBM are now trying to do. For example, the high and low resistance values must be nearly the same from junction to junction. But those resistances vary exponentially with the thickness of the tunneling barrier, so chipmakers have to ensure that this exquisitely thin layer is smooth and uniform across the entire chip. Techniques developed in the last decade make this possible, but researchers must show that those techniques work reliably at high volumes.
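    A back-of-the-envelope tunneling estimate, from textbook physics rather than from IBM or Motorola, shows why the tolerance is so tight; the 1-eV barrier height used below is only an assumed order of magnitude for aluminum oxide, not a reported figure.

```python
# Back-of-the-envelope check, from textbook tunneling physics rather
# than IBM or Motorola data, of why junction resistance is so sensitive
# to barrier thickness: R grows roughly as exp(2*kappa*t), with
# kappa = sqrt(2*m*phi)/hbar for barrier height phi.
import math

HBAR = 1.0546e-34   # J*s
M_E = 9.109e-31     # electron mass, kg
EV = 1.602e-19      # joules per electron volt

phi = 1.0 * EV      # ASSUMED barrier height, ~1 eV, an order-of-magnitude
                    # guess for an aluminum-oxide barrier
kappa = math.sqrt(2 * M_E * phi) / HBAR    # ~5.1e9 per meter

dt = 0.2e-9         # one atomic layer, roughly 0.2 nm
print(f"one extra atomic layer multiplies resistance by "
      f"{math.exp(2 * kappa * dt):.1f}x")
# -> about 8x, which is why the barrier must be uniform to well under
#    a single atomic layer for resistances to match across a chip
```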

    Researchers must also ensure that they can flip precisely the bits they intend. When current runs through a bit line and a word line, only the bit at the intersection of the two—where their magnetic fields combine—must flip. But all the bits along the bit line feel a single, weaker magnetic field, as do all those along the word line. To prevent these bits from inadvertently flipping, researchers must exercise fine control over the strength of the magnetic fields and the size and shape of the bits. Moreover, as the junctions are made smaller, researchers must make sure that heat does not make them flip spontaneously.

    Finally, if MRAM is going to succeed economically, production methods must allow it to follow the decades-long trend in which transistors and other features on microchips steadily shrink, roughly doubling chip density every 18 months. Chip manufacturers need to see that MRAM has the potential to “scale” through several size reductions before they will be willing to invest in it, says Stuart Wolf, a physicist and manager of DARPA's magnetic materials and devices program in Arlington, Virginia. Wolf is confident that the technological problems can be solved, however. “The advances will come and it will scale,” he says. “We just don't have all the answers right now.”

    More daunting may be the economic and even cultural barriers to working MRAM into commercially viable devices, says Xilinx's Black. “There's a little bit of a disconnect between the ultimate users of the products and the guys doing the research,” Black says, “and I think that has served to slow progress.”

    However, Bill Gallagher, a physicist at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York, and manager of the company's MRAM project, says his team is aware of the challenges of bringing a new technology to market and is working closely with the manufacturing experts at IBM's partner in MRAM development, the German chipmaker Infineon Technologies. Gallagher envisions starting modestly by adding MRAM to existing devices. “We're not going to go in and replace something big right away,” he says. “That's not a good strategy.”

    Motorola's Tehrani says that to succeed, manufacturers will have to find markets in which MRAM offers a performance advantage that justifies the extra expense of the chips, which at first are likely to cost more than the several cents per megabit that SRAM and Flash fetch. Those markets might include wireless communications, portable devices, and automotive applications, Tehrani says. “As we go through the learning curve in these markets,” he says, “we'll see MRAM moving into other markets.”

    All agree that MRAM can take on DRAM only after it has established itself in other applications. For the moment, DRAM sits secure in its perch as the king of computer memory.

    If MRAM is going to make it in the market, it must soon begin to pay its own way. DARPA support for the IBM and Motorola efforts will end this year, and chip manufacturers are likely to pay for more development only if the technology promises to turn a profit in the foreseeable future. “It's like your child,” Gallagher says. “If it's going to make it, it's got to learn to walk on its own two feet.” The bottom line is that developers must soon turn magnetic bits into megabucks, and that may be as much an economic challenge as a scientific one.

  14. TECHNOLOGY

    A Bit of This and That

    1. Adrian Cho

    Today's computers rely mainly on dynamic random access memory (DRAM), which stores bits of information by charging or discharging myriad tiny capacitors. DRAM sets the standard for packing the most information into the least amount of space, with the smallest bits currently measuring just over a tenth of a square micrometer in area. But the capacitors leak charge, so DRAM must be refreshed thousands of times a second.
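    The refresh burden behind “thousands of times a second” is easy to estimate; the numbers below are typical DRAM parameters assumed for illustration, not figures from the article.

```python
# Rough arithmetic behind "refreshed thousands of times a second."
# The figures are typical DRAM parameters assumed for illustration,
# not numbers from the article: every row must be rewritten before
# its capacitors leak too much charge to read back reliably.
retention_ms = 64    # assumed worst-case time a cell holds its charge
rows = 8192          # assumed number of rows needing refresh in a bank

refreshes_per_second = rows * 1000 / retention_ms
interval_us = retention_ms * 1000 / rows

print(f"{refreshes_per_second:,.0f} row refreshes per second")   # 128,000
print(f"one row refreshed every {interval_us:.1f} microseconds") # ~7.8
```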

    For some tasks, computers use faster and more expensive static random access memory (SRAM). Each bit of SRAM consists of a small network of transistors that flips between two stable conditions. SRAM doesn't require constant refreshing, but it takes up a lot more space. And like DRAM, SRAM forgets everything as soon as it's turned off.

    On the other hand, Flash memory can hold information without power. A Flash bit is a transistor with an extra “floating” gate, isolated by a thin barrier through which charge can be pushed or drained by briefly applying an electric field; the trapped charge sets the bit's value. Flash wears out after it's been rewritten roughly a million times, so it is best suited to data that change infrequently; it is used mainly in cell phones and other portable devices, although computers use it to store some internal settings.

    An emerging electronic technology called ferroelectric random access memory (FeRAM) works much like DRAM but uses a ferroelectric capacitor whose polarization can be flipped between two stable states. FeRAM retains information without power and could compete directly with magnetoresistive RAM. However, it may be relatively expensive, difficult to manufacture, and hard to integrate with standard chip technology.

  15. MATERIALS SCIENCE

    Biology Reveals New Ways to Hold on Tight

    1. Elizabeth Pennisi

    Researchers are figuring out the chemistry behind natural adhesives—useful data for developing synthetic glues

    Sticking can be nature's way of moving as well as of staying put. The comic-book character Spiderman gets his inspiration from the spider that effortlessly dashes up walls and across ceilings. Contrast this arachnid's lifestyle with that of many mollusks, which put down adhesive roots that withstand the roughest surf. Even blood cells alternately stick and roll as they navigate their way through the human circulatory system.

    At an unusual recent meeting,* biologists and materials scientists swapped notes about how natural and artificial adhesives work. The materials scientists discussed physical or chemical properties that biologists should consider as they try to figure out how nature performs its sticky tricks. The biologists described how various organisms—from octopuses to limpets—stay put. By bringing the two disciplines together, the Defense Advanced Research Projects Agency, which funded the symposium, hoped to stimulate insights that might one day lead to more effective adhesives.

    If materials scientists can indeed take their cues from nature, the payoffs could be numerous: Think better suction cups, nontoxic and quick-acting marine adhesives, strong but temporary glues, extraterrestrial rovers—even new drug-delivery systems that stick to the target cell. But designing adhesives based on nature's own glues will be tough, says Anand Jagota, a materials scientist at DuPont in Wilmington, Delaware. “The art is to find the one critical thing that you need to mimic.”

    Microscopic handholds

    Although nature has a multitude of ways to ensure that organisms get a solid foothold, they share some common elements. “Over many different scales—from cells to geckos to mussels—the mechanisms [of adhesion] have great similarity,” noted Dan Hammer, a bioengineer at the University of Pennsylvania in Philadelphia. For example, several very different species employ tiny extensions—variously called fibrils, denticles, pili, or setae—on their stick-to-it surfaces. These so-called microstructures result in better-than-average adhesion and attachment by providing a slightly rough surface, says Jagota.

    Take the octopus. New research is rewriting the book on this animal's incredible maneuverability, which depends in large part on rows of muscular suckers along its arms and tentacles. Biologist William Kier of the University of North Carolina, Chapel Hill, recently found that there's more to the story: The suckers bear tiny projections called denticles, 3-micrometer-diameter pegs that provide more intimate contact with the surface underneath. With these denticles, the suckers “can grip a remarkable range of objects, including objects smaller than the suckers [themselves],” Kier said.

    Tight seals.

    Electron micrograph (bottom) reveals the complex sucker structure that enables the octopus (top) to be so agile.

    CREDITS: WILLIAM KIER/UNIVERSITY OF NORTH CAROLINA

    Suction cup manufacturers should take note, suggests Kellar Autumn, one of the symposium organizers and a biologist at Lewis and Clark College in Portland, Oregon. “We have suction cups all over, and not a single one has anything but a flat surface.” Adding microstructures would be “a revolution” for these devices, he said.

    Gecko history is also under revision—again, with potential practical applications. For years, researchers suspected that the gecko's sticking power came from friction between its toes and the surface. But 2 years ago, engineers developed the technology to actually measure the microscale forces behind gecko attachment—and Autumn, together with Robert Full, an integrative biologist at the University of California, Berkeley, found that the attachment force exceeded the frictional forces by a factor of 600. At about the same time, Full's team determined that the pads of these acrobatic creatures contained thousands of branching hairs that create a tremendous surface area (Science, 9 June 2000, p. 1717).

    Autumn and Full suspected that when these split ends get close enough to a surface, each generates a weak intermolecular attraction, called a van der Waals force; millions of such contacts add up to guarantee a secure foothold. At the meeting, Autumn reported that they had demonstrated that this is indeed the case.

    Anthony Russell, a functional morphologist at the University of Calgary, Alberta, wants to find out how geckos attach and release their toes so incredibly quickly. To do so, he is comparing footpads and setae structure on as many of the dozens of species of geckos as he can. “He's giving us the principles by which you can build the [artificial gecko pad],” says Full, who already has a prototype in his lab. One day, he speculates, the pad might be used to design rovers that can move more easily across the rough terrains of other planets.

    Molecular stick and go

    Hammer is probing a system far removed from planetary explorers: white blood cells. Typically, white blood cells roll their way through the bloodstream, yet they are able to anchor themselves where they are needed to defend against foreign invaders, heal wounds, or form blood clots. Hammer hopes that if he can devise materials that mimic that roll-and-stick ability, he'll be able to devise a new targeted drug-delivery system.

    Like gecko feet and octopus suckers, white blood cells also gain their acrobatic and adhesive abilities through projections—in this case, surface proteins called selectins that stick out of the cell surface. The white cells use these selectins to grab onto the inner surface of the blood vessel; as the current of blood pushes against the cell, its rear bonds release and it tumbles forward, establishing new links, thereby cartwheeling down the vessel.

    Touch and go.

    Rows of tiny hairs called setae (bottom) use van der Waals forces to help the gecko (top) scoot up walls and along ceilings.

    CREDITS: KELLAR AUTUMN/LEWIS AND CLARK COLLEGE

    This rolling requires “dynamic regulation of binding and unbinding between molecules in fluid [and the surface],” Hammer explains. “The fluid is pushing the cell along; bonds are forming in front and breaking in the back.”

    Hammer and his colleagues have coated microscopic beads with various selectins and followed their start-and-stop tumbling in an artificial blood vessel. Just a slight difference in the starchy components of the selectins “can have large effects on the dynamics of rolling,” he reported. “These cells make protein backbones [selectins] and with great precision modify them to get the adhesion that they want,” giving them “remarkable” control over how fast they move and where they attach.
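    Hammer's group is known for computer simulations of exactly this kind of rolling, and a drastically simplified sketch shows why tuning bond kinetics tunes rolling speed. Everything here is invented for illustration; the rates, step sizes, and one-dimensional geometry are toy choices, not Hammer's actual models.

```python
# Drastically simplified, one-dimensional sketch of rolling adhesion.
# All rates and step sizes are toy values invented for illustration;
# this bears no quantitative relation to Hammer's actual models.
import random

random.seed(0)

def rolling_speed(k_on, k_off, steps=10000):
    """Average advance per time step for given bond on/off rates."""
    bonds = 1            # tethers currently holding the cell to the wall
    distance = 0.0
    for _ in range(steps):
        if random.random() < k_on:
            bonds += 1                 # new bond forms at the leading edge
        if bonds and random.random() < k_off:
            bonds -= 1                 # rearmost bond releases under load
        if bonds == 0:
            distance += 5.0            # unbound: flow sweeps the cell along
            bonds = 1                  # until it re-tethers downstream
        else:
            distance += 1.0 / bonds    # tethered: creeps as the rear lets go
    return distance / steps

for k_off in (0.1, 0.3, 0.6):
    print(f"k_off={k_off}: speed {rolling_speed(0.3, k_off):.2f} (arb. units)")
```

    In this toy, raising the detachment rate makes the bead travel faster, echoing the report that small changes to the selectins' carbohydrate components have large effects on the dynamics of rolling.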

    As a first step toward harnessing this transport system, Hammer and his colleagues have switched to porous, biodegradable beads that one day can be filled with drugs. By changing the starchy components of the selectin coat, he hopes to control exactly where the sphere docks in the body—for instance, delivering an anti-inflammatory medicine to sites undergoing inflammatory responses.

    Nature's glues

    Whereas some creatures rely on projections to stay put, others secrete glue, either permanent or temporary. New studies are showing how each adhesive's properties depend in part on chemical modifications to its constituent proteins. “When and how these transformations occur and to what degree is quintessentially linked to the organism's life history and what it needs to adhere [to],” explains Herbert Waite, a marine biochemist at the University of California, Santa Barbara.

    For example, Andrew Smith, a biologist at Ithaca College in New York, is learning how limpets make glue strong enough to keep them from being washed away at high tide but temporary enough to let the limpet resume its foraging when the tide recedes. By studying several limpet species, he has found that the animals secrete mucus that is full of short proteins, with a small percentage of carbohydrates. The critical factor appears to be an activating protein that prompts the carbohydrates and proteins to intertwine.

    What's surprising, he reported, is that this oozing goo traps moisture in its network of proteins. Smith had expected that the opposite would happen, as most glues harden only after expelling any water they contain. But for the limpet, water accounts for 90% of its adhesive.

    Smith has been tinkering with the natural chemical mix to create his own glue. Success, he has found, depends on the lengths of the protein polymers, the degree to which they branch, and the mix of protein and carbohydrate. At this point the stickiness of his concoctions is “not impressive compared to what limpets do,” he reported. Smith is surveying the makeup of these glues in a variety of other snails for tips about what works best.

    Sticky defense.

    Sea cucumbers eject sticky threads that entrap predators, allowing the sea cucumber to make a leisurely escape.

    CREDIT: PATRICK FLAMMANG/UNIVERSITY OF MONS-HAINAUT

    Waite has been probing the secrets of a different mollusk, one that glues itself permanently to a rock. Mussels extrude thin threads to attach themselves to rocks. “The biochemistry of [their] adhesion is still incomplete,” says Waite, yet he's fairly certain that mussel glue works something like epoxy, a two-part glue created by mixing hardener and resin together. Mussel threads have two isolated compartments, one containing resinlike proteins that have a lot of reactive branches and the other full of chemicals, including enzymatic factors, that act as the hardeners. As the thread exits the mussel, the hardening chemicals mix with the proteins; this glue takes a mere 5 minutes to set. Mussel glue, Smith points out, works quite differently from limpet glue. “You wouldn't put epoxy glue on your feet if you wanted to keep moving about,” he explains.

    Some of these insights from mussels have already been put to use, Waite adds. One automaker now treats the raw steel auto body with mussel-inspired compounds that make paint stick better. Waite predicts a variety of applications, because “sticking opportunistically to all kinds of surfaces underwater is a useful capability for dentistry, surgery, and the manufacture of things that have regular contact with moisture, such as roads and the exteriors of cars, houses, and ships.”

    The sea cucumber, a relative of the starfish, produces a distinctly different type of thread. A few species of this cylindrical invertebrate protect themselves from predators by ejecting, in a matter of seconds, fine, sticky threads that entangle an attacker and enable the sea cucumber to sneak away. Before they are ejected, the threads, which consist of an outer and an inner layer of cells, are quite short and not sticky at all, says Patrick Flammang, a marine biologist at the University of Mons, Belgium. But as the cucumber ejects them, the emerging threads shed their outer cell layer, enabling the inner cell layer to spring open and elongate. At the same time, the inner cells secrete granules of insoluble proteins that stick together and adhere to whatever they come in contact with in the water, he explains. Such quick-acting, underwater glues are tempting alternatives to existing marine adhesives, which tend to take longer to cure, says Flammang.

    Autumn expects that nature holds many more adhesion secrets that can be put to work. “Biodiversity is a library of engineering applications,” says Autumn, “and while we don't yet have the right model for designing the best adhesive, it's out there.”

    • *Society for Integrative and Comparative Biology annual meeting, Anaheim, California, 2–6 January.
