News this Week

Science  16 Jun 2000:
Vol. 288, Issue 5473, pp. 1940
  1. MILITARY RESEARCH

    Researchers Target Flaws in Ballistic Missile Defense Plan

    1. David Malakoff,
    2. Adrian Cho

    Researchers are stepping up efforts to shoot down a proposed U.S. missile defense system. More than three dozen scientists journeyed to Washington, D.C., this week to warn lawmakers that the $60 billion system, designed to knock incoming warheads out of the sky, is technically flawed because it cannot distinguish real warheads from decoys. Pentagon officials heatedly deny a new report by one scientist that contractors have rigged trials to hide the problem, although they admit that some tests were simplified to save time. In the wake of these events, a leading Democrat is urging President Bill Clinton to delay a pending decision on building the system.

    The national missile defense (NMD) system, one of several antimissile technologies being developed by the Department of Defense (DOD), is supposed to seek out and destroy intercontinental ballistic missile warheads as they approach their targets. Its interceptors, guided by onboard sensors and ground- and space-based radars, would smash into the warheads at speeds approaching 24,000 kilometers an hour, shattering them with brute force. Current plans call for a limited defense, with 20 Alaska-based interceptors by 2005 and 100 in 2007, that could blunt a missile threat from North Korea, Iraq, and other so-called “rogue states.” The Pentagon has conducted four tests in the past 4 years, but Clinton has said he will wait until after a fifth interceptor test next month to decide whether to proceed.
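
    For a rough sense of the energies involved in such a hit-to-kill intercept, a back-of-the-envelope calculation helps; the closing speed below is the figure quoted above, while the kill vehicle's mass is an assumed value for illustration only.

    ```python
    # Back-of-the-envelope kinetic energy of a hit-to-kill intercept.
    # The closing speed comes from the article; the ~50 kg kill-vehicle
    # mass is an assumption for illustration, not a reported figure.
    closing_speed_kmh = 24_000             # km/h, quoted in the article
    mass_kg = 50                           # assumed kill-vehicle mass

    v_ms = closing_speed_kmh * 1000 / 3600     # about 6,700 m/s
    energy_j = 0.5 * mass_kg * v_ms ** 2       # kinetic energy in joules

    print(f"closing speed: {v_ms:,.0f} m/s")
    print(f"kinetic energy: {energy_j / 1e9:.1f} GJ")    # about 1.1 GJ
    # Roughly the energy of a quarter-tonne of TNT, delivered with no
    # explosive warhead at all; hence destruction by "brute force."
    ```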

    Some scientists say such a rigid deadline is a mistake and that the carefully controlled test won't determine if the system can foil a real attack. “The [system] is not capable of handling countermeasures” such as decoy balloons, physicist Kurt Gottfried of Cornell University in Ithaca, New York, told attendees at a 12 June rally held at the Capitol. The Union of Concerned Scientists, which organized the rally, and the American Physical Society (APS) have reached similar conclusions (Science, 14 April, p. 243), with the APS Council noting on 29 April that the tests conducted so far “fall far short of those required to provide confidence in the [system's] ‘technical feasibility.’” Added physicist Joseph Lach of the Fermi National Accelerator Laboratory in Batavia, Illinois: “Before you plunk down the money and start up the factories, make sure it works.”

    One blistering technical attack comes from physicist and nuclear engineer Theodore Postol of the Security Studies Program at the Massachusetts Institute of Technology, who calls the tests to date “a scientific hoax.” Postol analyzed data from a 1997 flight that tested the ability of an early version of the kill vehicle to distinguish between a real target and nine decoys. He concluded that engineers—despite public claims to the contrary—had failed to solve the system's main technical challenge of devising a sensor aboard the missile that could home in on the twinkling, erratic infrared signal produced by a real warhead tumbling through the vacuum of space. “The signals from both the warhead and balloons had no features that could be exploited to tell one from the other using credible scientific methods,” he wrote in an 11 May letter to White House Chief of Staff John Podesta demanding an independent review of the program.

    To cover up that failure, Postol says, engineers at the Pentagon's Ballistic Missile Defense Organization and its major contractor, TRW Inc. of Redondo Beach, California, selectively analyzed data and then simplified three subsequent tests by reducing the number and complexity of the decoys. They also scheduled trials to take advantage of beneficial light conditions, Postol claims. He compares the changes to “rolling a pair of dice and throwing away all the outcomes that did not give snake eyes.” Similar charges have been leveled at TRW by Nira Schwartz, a former project engineer who was dismissed in 1996. Schwartz claims that she was fired after urging the company to disclose what it knew about the kill vehicle's flawed vision. Legal documents generated in the course of Schwartz's suit are the basis for much of the information that Postol reviewed.

    Pentagon officials concede that they simplified some tests to speed development. But they insist that the changes were not meant to hide any flaws in the system and do not undermine its credibility. “I will categorically deny that we're fixing the flights,” Jacques Gansler, undersecretary of defense for acquisition and technology, told reporters shortly after The New York Times described Postol's analysis in its 9 June issue. Postol's review was limited to an early flight, DOD officials note, and thus ignores improvements in the kill vehicle, such as the addition of new sensors. It also fails to account for expected improvements in radars and computers. The first tests showed “proof of principle,” Pentagon officials say, adding that it will take years for all the pieces of the system to be integrated into a smoothly functioning defense shield.

    Postol, however, doubts that new equipment will be able to decode the confusing infrared signals. A major problem, he says, is that the kill vehicle must rely on its own infrared “eyes” in the final minute or two before impact, and that no amount of systems integration can correct for its inability to spot the right target. In addition, he says it is “incredible” that the government expects nations capable of developing nuclear-tipped missiles to be unable to deploy effective decoys. NMD is “a system in search of a cooperative enemy,” says Representative Rush Holt (D-NJ), formerly a physicist at Princeton University.

    Both NMD opponents and supporters agree that the technical dispute has further inflamed an international diplomatic debate over whether the United States should deploy the system. Russia and many U.S. allies oppose deployment on the grounds that it would require rewriting a 1972 treaty limiting such defenses. Such a step, they say, would increase the global risk of nuclear attack. Pointing to the system's technical problems and rising costs, even some senior Democrats are trying to persuade Clinton to leave the decision to the next president. If the system can't tell “a phony [missile] from a real one,” says the Senate's top Democrat, South Dakota's Tom Daschle, “I don't know that we're ready to commit resources.” That kind of uncertainty is music to the ears of the researchers who gathered here this week.

  2. PUBLIC HEALTH

    Deaths Among Heroin Users Present a Puzzle

    1. Michael Hagmann

    The first symptom is an abscess where the needle broke the skin. Next, inflammation tears through the body, triggering a steep drop in blood pressure. The number of white blood cells skyrockets. Within hours, the victim's organs shut down one by one. More than 30 heroin users in Scotland and Ireland have died this dreadful way in the past 6 weeks, and health officials had reason to suspect that they were looking at the handiwork of a pathogen whose occasional appearance invariably is cause for alarm: anthrax, the notorious biological warfare agent.

    The suspicions aroused a lightning-fast response from microbe hunters on both sides of the Atlantic. Their analyses, first posted on 1 June on Eurosurveillance Weekly—an Internet site that tracks infectious diseases in Europe—and in more detail in last week's Morbidity and Mortality Weekly Report, offer a somewhat reassuring conclusion: Anthrax did not kill the heroin users. It's unclear what did, but a new suspect has emerged.

    “Drug addicts die all the time,” says Syed Ahmed of the Greater Glasgow Health Board in Scotland. Even so, when Glasgow-area hospitals realized in early May that heroin users were succumbing to a mysterious malady, it was obvious that the cases painted “a very different picture” from an overdose, Ahmed says. Rather, a pathogen appeared to be responsible. Then on 6 May, Per Lausund of the Norwegian Army Medical School in Oslo posted a notice on ProMED, an Internet forum for infectious disease specialists. It described the case of a heroin addict in Norway who had died of anthrax the week before. Lausund had not yet heard about the Scottish victims.

    Researchers suspected that a batch of heroin of unknown origin had been contaminated, knowingly or otherwise, with the anthrax bacillus, the spores of which can lie dormant in harsh conditions for years. Although anthrax is not transmitted from person to person, the possibility of any commodity being spiked with the bacillus raised red flags. Springing into action was the U.K. Department of Health's Centre for Applied Microbiology and Research (CAMR) in Porton Down, a lab that keeps samples of many exotic diseases. “Anthrax is one of our specialties,” says CAMR microbiologist Phil Luton. Its investigation drew intense public interest in the wake of news reports speculating about a budding anthrax epidemic.

    Since the U.K. Department of Health issued a Europe-wide alert on 19 May, the death tally among heroin users has climbed to 18 in Scotland, seven in Ireland, and seven in England and Wales. In a conference call on 30 May, U.K. and Irish health officials concluded that they were “dealing with the same phenomenon,” says Joe Barry of Ireland's Eastern Regional Health Authority in Dublin. The authority, like its Scottish counterpart, shipped samples from patients to the U.S. Centers for Disease Control and Prevention (CDC) in Atlanta for analysis.

    The CDC and the CAMR returned good news: no anthrax. But the mystery deepened. The only bacteria the labs isolated were common ones unlikely to trigger such severe symptoms, says Ahmed. Although the outbreak appears to be subsiding, he says, “we still don't know what [the drug users] are dying of.” The Norwegian death appears to be unrelated, he adds.

    Suspicion now centers on Clostridia, a family of more than 30 species including the bacteria that cause botulism, tetanus, and gas gangrene. Like anthrax, Clostridia form spores hardy enough to survive the high temperatures reached when heroin is dissolved before injection. And some Clostridia are hard to culture, which may explain why pathogenic strains have not yet been detected conclusively in tissue samples from the heroin users.

    But the circumstantial evidence is mounting. Most of the victims had dissolved the heroin in citric acid before injecting it into their muscles. Citric acid damages tissue, perhaps providing a hospitable oxygen-starved environment for Clostridia spores to flourish, says Ahmed. What's more, toxins churned out by many Clostridia species would account for the rapid progression of symptoms and death. “Once the toxin is produced, an antibiotic treatment is too late,” says Brian Duerden of the Public Health Laboratory Service in London, the British version of the CDC.

    Researchers haven't ruled out other possibilities, however. “It may be a new pathogen or something that makes you slap your head and say ‘Gee, why haven't I thought of that before,’” says Martin Hugh-Jones of Louisiana State University in Baton Rouge. With roughly half of powder sold as heroin cut with filler, he says, “there's a lot of space to let you inject God knows what.” And whatever that might be is likely to kill again.

  3. TOXICOLOGY

    Just How Bad Is Dioxin?

    1. Jocelyn Kaiser

    The verdict is in—again: Dioxin is even worse for human health than previously believed. But, as has been true with earlier pronouncements on dioxin's risks, that judgment is controversial and may be appealed.

    This latest assessment comes in an eagerly awaited draft report from the U.S. Environmental Protection Agency (EPA), which concludes that many Americans may have enough dioxin in their bodies to trigger such subtle harmful effects as developmental delays and hormonal changes in men. But the draft's most explosive finding is that the risk of getting cancer from dioxin is 10 times higher than previously estimated—a conclusion based largely on new data linking dioxin to cancer in workers.

    That conclusion has flabbergasted many outside researchers, who first heard about it when the report was leaked to the press last month (Science, 26 May, p. 1313). A few told Science that they are concerned that EPA scientists may have fumbled again—when this was their chance to finally get it right. Indeed, agency scientists have spent the past 6 years revising the dioxin report, analyzing new data and reassessing earlier data after portions of their last draft were blasted by outside reviewers. “After all this time, if it doesn't fly, it will be an embarrassment to the agency,” says environmental scientist Morton Lippmann of New York University, who chaired the earlier review panel and will lead the new one.

    Dioxins are chlorinated chemicals produced mainly by incinerators and paper bleaching. They accumulate in the food chain, winding up in body fat when people eat animal products. In the 1980s, EPA concluded there was no safe level of dioxin—even the lowest exposure was hazardous. Then in the late 1980s, molecular biologists suggested that more than one dioxin molecule, perhaps considerably more, has to latch onto the cell receptor for dioxin to trigger toxic effects. Dioxin experts thought EPA might have overestimated the risk, so the agency set out to reassess it in 1991.

    Instead of downgrading the risk, agency scientists came back in 1994 with a draft report that supported EPA's earlier conclusion that there is no exposure threshold below which dioxin is harmless. But EPA's Science Advisory Board (SAB), while praising much of the reassessment, sent two key chapters back for revision, charging that agency scientists mixed science and policy and failed to mention alternate hypotheses and data that contradicted their conclusions (Science, 26 May 1995, p. 1124).

    As requested, EPA has now rewritten the report's summary based on new dose-response modeling. It also added a new chapter to clarify how agency scientists reached their conclusions about the cumulative risks from dioxin-like chemicals by assigning each a “toxicity equivalency factor” and adding up their effects. The agency has “significantly updated” the report, says William Farland, chief of risk assessment in EPA's Office of Research and Development. “We have quite a bit of new information”—for example, from a study of Dutch infants exposed to polychlorinated biphenyls and dioxins—indicating that even at background levels, dioxin may cause subtle neurobehavioral and immune effects.

    As for cancer effects, the report upgrades dioxin from a “probable” to a “known” human carcinogen. For the most exposed people, such as those eating a diet high in animal fat, EPA puts the risk of developing cancer at between 1 in 1000 and 1 in 100. Farland says this controversial number comes from two changes in EPA's analysis. First, when scientists extrapolated results from rats to humans, they used a new metric that factors in dioxin's far longer half-life in human tissue than in rats. Second, EPA drew on new studies of three worker populations exposed to dioxin in the United States, Germany, and Holland. Those studies include information on the levels of dioxin to which workers were exposed, enabling experts to calculate how cancer risk rises with a given dose. That analysis, which “overlaps” with dose-response estimates from animal studies, results in a dioxin cancer potency that is 30 times higher than the 1985 estimate, Farland says. The agency factors in the threefold drop in dioxin exposure since the mid-1980s to conclude that the cancer risk today is 10 times higher.
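
    The arithmetic behind that final step is simple enough to check by hand; a minimal sketch combining the two factors quoted above:

    ```python
    # How EPA's two revisions combine into the "10 times higher" figure.
    # Both inputs are the numbers quoted in the article.
    potency_factor = 30    # new cancer potency relative to the 1985 estimate
    exposure_drop = 3      # threefold fall in dioxin exposure since the mid-1980s

    risk_factor_today = potency_factor / exposure_drop
    print(risk_factor_today)    # 10.0, i.e., cancer risk 10 times higher
    ```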

    Farland acknowledges that this number can be confusing to the public, explaining that this is the highest possible risk for the most exposed individuals, but for most people the risk will likely be lower or even zero. Even so, Farland says the report's new findings that dioxin in soil, water, and sediments may be a major source of exposure could warrant new measures to protect the food supply, for example, by cutting back on feeding lard and fish meal to cattle and pigs.

    Whether such steps are reasonable will depend on whether the report passes muster with skeptical outside scientists. Several who spoke with Science asserted that the new worker studies of cancer effects are inconclusive. Even to those who have closely watched EPA's new analysis, the 10-fold increase “is a lot more than anybody expected,” says Dennis Paustenbach, a risk assessment consultant with Exponent in Menlo Park, California. “It's going to require a lot of discussion before there's widespread acceptance.”

    That scrutiny will come in the form of public comments, a review by an outside science panel in late July, and another review by the SAB in September. Farland is urging scientists to take a close look at the report and the new data before passing judgment: “We'll have to see what they think after they've read the document.”

  4. CANADA

    New Virtual Institutes for Biomed Research

    1. Wayne Kondro*
    1. Wayne Kondro writes from Ottawa.

    OTTAWA—A prominent Canadian cancer researcher has taken on the job of leading a new biomedical research institution that is modeled after the U.S. National Institutes of Health—but which reflects 21st century practices and priorities.

    Last week Alan Bernstein, 52, was named president of the Canadian Institutes of Health Research (CIHR). The new entity, which officially opened its doors on 7 June, replaces the Medical Research Council as the country's primary source of extramural grants for basic biomedical, clinical, population-based, and health systems research. It's been given a $39 million budget increase, to $330 million, for the fiscal year beginning in April, and the promise of $72 million more in 2001-02 (Science, 26 February 1999, p. 1241). But instead of presiding over a leafy campus and a massive infrastructure, Bernstein will be midwife to a national network of a dozen or so “virtual” research institutes, grouped by scientific theme, that will weave together work in each field. He must also decide the proper scope of the CIHR, working in tandem with a 19-member governing council of senior academics and health care officials also appointed last week.

    “This is a great challenge and a great opportunity,” says Bernstein, who this week sat down here with 40 of the country's leading scientists to gather suggestions for the council's first meeting later this month. “It's really a bold and unique vision for funding, organizing, and stimulating health research.” The appointment of Bernstein, who since 1994 has been director of the Samuel Lunenfeld Research Institute at the University of Toronto's Mount Sinai Hospital, is seen by scientists as a sign of the government's commitment to basic biomedical research.

    The structure of CIHR is expected to closely follow the recommendations of an interim council, which released its final report last week. The group strongly suggested forming institutes in eight areas: cell function and cancer; aboriginal and indigenous people's health; immunity and infection; musculoskeletal health and fitness; nutrition, hormones, and metabolic health; cardiovascular and respiratory health; mental health, addiction, and the brain; and health systems: care, healing, and recovery. The council debated but didn't reach a conclusion on whether to create as many as four institutes to handle work in two other areas—the social, environmental, and genetic influences on health; and human development and health throughout life. Although the final roster is up to the new council, Bernstein says that he hopes the debate doesn't steal time from getting CIHR up and running: “We have to operationalize this bold vision, not go back and start from scratch.”

    Each institute will be headed by a scientific director and an independent advisory board that will oversee a pot of money to support networking initiatives, training grants, workshops, and what one official calls “cutting-edge thinking.” Scientists will continue to apply to the CIHR itself, which will operate a centralized review system, but they will be asked to designate the institute with which they wish to affiliate.

    Still unresolved is the management of $105 million worth of health research programs administered by two existing granting councils in the natural and social sciences. Bernstein says that he doesn't favor a hostile takeover of the health components of the two granting councils, although the CIHR has already swallowed Health Canada's $40 million national health R&D program. “I want researchers now served by other agencies to feel at home in the CIHR,” he says. “But I would encourage those communities to adopt CIHR as their first home, because we have the broadest mandate.”

    Bernstein plans to take a practical approach to resolving the issue. “My guideline is: Does this make sense? Is it the best way to organize science and get the best science done for the least amount of bucks?” The answer, he adds, should also help the country meet Prime Minister Jean Chretien's vow, in announcing the CIHR, to make Canada “the place to be in the 21st century.”

  5. ASTRONOMY

    Test Flight Added for Future Space Telescope

    1. Robert Irion

    ROCHESTER, NEW YORK—Last month, a national panel of astronomers picked the proposed Next Generation Space Telescope (NGST) as the field's top priority during the next 10 years (Science, 26 May, p. 1310). Now, it appears that researchers will have to wait nearly the full decade for the Hubble Space Telescope's successor to take wing. NASA officials have decided to test NGST's demanding technology in a small-scale version, described here last week at a meeting of the American Astronomical Society, before forging ahead with the real thing. The technological delays that led to the shakedown mission will push the launch of the full telescope back another year to 2009 at the earliest.

    Astronomers believe that NGST will extend their vision to the era when galaxies first formed and will expose new details of how stars and planets arise in our own galaxy. Its mirror will be about 8 meters across, more than three times as wide as Hubble's and rivaling the largest optical telescopes on Earth. Such a mirror is too big and heavy to launch in one piece, so engineers must devise a way to deploy a segmented mirror in space. Moreover, NGST will orbit around a gravitationally stable point in space about a million kilometers from Earth, beyond the reach of space-shuttle repair missions. Teams from Lockheed Martin and TRW/Ball Aerospace are developing competing plans for the telescope, and NASA will select the winning design by the end of 2001.

    The contractor will then have 3 years to prepare a $200 million prototype called “Nexus,” which will fly to the distant orbit and mimic NGST's technology on a one-third scale. NASA added Nexus to its lineup this spring after setbacks with prototypes of the telescope's systems convinced mission planners that it was too daring to build NGST without testing its folding mirrors, solar heat shield, and other unproven technology in space. “We're not quite ready to pursue the aggressive schedule we had before,” says project scientist John Mather of NASA's Goddard Space Flight Center in Greenbelt, Maryland. The costs for Nexus, Mather says, are part of the technology development budget for NGST and will not increase the telescope's $1.3 billion price tag.

    Nexus will employ three small mirrors that unfold to a diameter of 2.8 meters—wider than Hubble's glass eye, but with less collecting area because the segments won't fill an entire circle. Although Nexus will be a powerful telescope in its own right, it will carry just one simple camera to verify that it can view the heavens sharply. “The goal is not science,” says mission leader Richard Burg of NASA Goddard. “Nexus is an engineering pathfinder for NGST to reduce and eliminate risk.”
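
    A wider aperture with less collecting area sounds paradoxical, but the geometry is easy to illustrate. In the sketch below, the fraction of the 2.8-meter circle actually covered by the three segments is an assumed value; the real fill factor was not reported.

    ```python
    import math

    def circle_area(diameter_m):
        """Area of a filled circular aperture, in square meters."""
        return math.pi * (diameter_m / 2) ** 2

    hubble_area = circle_area(2.4)     # Hubble's 2.4 m primary, treated as filled
    nexus_full = circle_area(2.8)      # a filled 2.8 m circle, for comparison

    fill_factor = 0.5                  # ASSUMED fraction covered by the 3 segments
    nexus_area = nexus_full * fill_factor

    print(f"Hubble: {hubble_area:.1f} m^2")    # about 4.5 m^2
    print(f"Nexus:  {nexus_area:.1f} m^2")     # about 3.1 m^2 under this assumption
    # Wider than Hubble's mirror, yet less light-gathering area.
    ```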

    The mirrors in particular will stretch the ingenuity of opticians. They must be exceedingly lightweight and adjustable so that the segments align precisely after they unfold, and their mechanical systems must operate at a frigid 50°C above absolute zero. Several groups at universities and optical laboratories are working on 10 possible designs. Mirrors based on beryllium, silicon carbide, and thin layers of glass each have shown promise, says optical physicist H. Philip Stahl of NASA's Marshall Space Flight Center in Huntsville, Alabama. Still, the teams have faced cracking, warping, and other hazards of pushing materials to their limits. “Progress has been slower than we hoped,” Mather acknowledges.

    Meanwhile, a previously scheduled test of NGST's protective shade will occur as planned in October 2001. Space shuttle astronauts will unfurl a one-third scale model of the thin shield called ISIS, for “inflatable sunshade in space.” The test will reveal the stability and thermal properties of the shade, which must cool the telescope but not jiggle it. Indeed, NGST will have to point at its distant targets with an accuracy of less than a millionth of an angular degree, making a motionless shield essential.
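
    To put that pointing requirement in the units astronomers usually quote, a one-line conversion suffices:

    ```python
    # Convert "a millionth of an angular degree" into arcseconds.
    accuracy_deg = 1e-6
    accuracy_arcsec = accuracy_deg * 3600     # 3,600 arcseconds per degree

    print(f"{accuracy_arcsec * 1000:.1f} milliarcseconds")    # 3.6 mas
    # Comparable to the apparent size of a small coin seen from about
    # 1000 kilometers away, so even a slight jiggle is intolerable.
    ```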

    Researchers who hope to use NGST think Nexus and the resulting delay are wise. “It's a very good decision,” says astronomer Pierre Bely of the Space Telescope Science Institute in Baltimore, Maryland. “Nexus is insurance to make sure we understand the problems in going from Hubble to NGST.” Outside observers are also watching with keen interest. “They've got real technical challenges,” says Paul Vanden Bout, director of the National Radio Astronomy Observatory in Charlottesville, Virginia. “If they pull all that off, it's a huge step.”

  6. IMMUNOLOGY

    A New Way to Keep Immune Cells in Check

    1. Michael Hagmann

    To avoid being killed by friendly fire from the body's immune system, normal cells carry a white flag of sorts—proteins on their surfaces that mark them as “self.” Until now, researchers have identified only one type of white flag: so-called class I major histocompatibility complex (MHC) proteins—also known as transplantation antigens—that are present in abundance on the surface of most healthy cells. But new findings have broken the MHC proteins' exclusive hold on the self marker business.

    Self identifier. By binding to SIRPα, CD47 on red blood cells (RBCs) can prevent phagocytosis by macrophages. CREDIT: K. SUTLIFF

    The MHC proteins deliver a peaceful “everything is fine” signal to natural killer (NK) cells, a caste of immune warriors that primarily destroys cells that have turned cancerous or have been infected by a virus and, as a result, carry abnormally low amounts of MHC molecules. Now, on page 2051, a team led by Frederik Lindberg at Washington University School of Medicine in St. Louis, Missouri, reports that macrophages, the immune system's scavenger cells, recognize a different inhibitory signal—a protein called CD47.

    Lewis Lanier of the University of California, San Francisco (UCSF), says that the new findings demonstrate that “negative regulation permeates the immune system much more broadly than just NK cells.” Indeed, adds Marco Colonna of the Basel Institute of Immunology in Switzerland, CD47 may only be the tip of the iceberg. “Chances are,” he predicts, “that a lot more self markers will pop up in the future.”

    The new findings also shed light on the role of CD47, a surface protein present on basically every cell type—and long a molecule in search of a function. Lindberg and colleague Eric Brown, now at UCSF, cloned the CD47 gene in the early 1990s and then inactivated or “knocked out” the gene in mice in an effort to pin down its function. But to Lindberg's disappointment, the resulting animals were almost normal. They “didn't really give us any hint as to [the gene's] function,” he recalls.

    But a discovery last year did. Scientists found that CD47 binds to SIRPα, a protein present in high concentrations on many white blood cells. SIRPα's structure suggests that it might be an inhibitory receptor similar to the ones on NK cells that bind MHC proteins, so Lindberg and his colleagues decided to find out if CD47 binding to SIRPα might also lead to immune cell inhibition.

    To test this, the researchers used red blood cells (RBCs), on which CD47, but not class I MHC proteins, normally abounds. When they transfused fluorescently labeled normal RBCs into either their CD47 knockouts or normal mice of the same strain, the cells persisted. But CD47-lacking RBCs from the knockouts were rapidly destroyed in normal mice. “After 1 day there was nothing left,” says Lindberg. That indicated that CD47 prevents RBC destruction.

    Other experiments pointed to macrophages as the purveyors of the destruction. For example, the CD47-deficient RBCs were quickly destroyed when they were transfused into mice that lacked functional B and T cells, indicating that those cells were not involved. In contrast, removing the spleen, the organ where old and faulty RBCs are usually disposed of by macrophages, prevented the CD47-deficient RBCs from being eliminated. Evidence confirming that macrophages use SIRPα to recognize CD47 came when the researchers added an antibody against SIRPα to a mixture of macrophages and RBCs. Now, the “blindfolded” immune cells eliminated even normal CD47-bearing RBCs.

    Taken together, Lindberg says, these results suggest that “CD47 is a safeguard against macrophages going off too easily. If CD47 is present [on a target cell], the macrophage leaves it alone, but if it's absent the macrophage goes: ‘Let's get cracking!’”

    To Colonna, CD47's new job makes perfect sense. “RBCs don't express MHC molecules, so they need something else to mark them as self,” he says. Lindberg adds that his findings might also explain the anemia seen in individuals who fail to express Rh blood markers on their RBCs. These patients also have a drastically reduced CD47 density on their RBC surfaces, and this may make them more prone to elimination by macrophages, speculates Lindberg.

    Still to be determined, however, is whether changes in CD47 concentrations on cells play a role in other pathological conditions. There are hints that they might. For example, ovarian cancer cells express CD47 at a much higher than normal level. “This may signal ‘I'm self, don't kill me,’” Lindberg says.

    Another open question centers on the role of SIRPα in other tissues. Unlike other inhibitory immune receptors, SIRPα is found on brain cells, for instance. “I'm very intrigued by that,” says UCSF's Lanier. “There may be an even broader context in which we need to think about these inhibitory receptors. I guess now the neuroscience people need to get busy.”

  7. ALTERNATIVE MEDICINE

    Herbal Product Linked to Cancer

    1. Liese Greensfelder*
    1. Liese Greensfelder is a writer in Nevada County, California.

    A Chinese herb that damaged the kidneys of dozens of Belgian dieters in the 1990s appears to pack a vicious second punch—cancer and precancerous lesions, according to a report in the 8 June issue of The New England Journal of Medicine. These findings draw one of the strongest links yet between use of an herbal product and cancer and, critics argue, serve as a grim warning that dietary supplements need more regulation.

    The unfortunate subjects of this study are a subset of some 10,000 Belgian dieters, who between 1990 and 1992 took a mixture of Chinese herbs and Western drugs prescribed by weight-loss clinics. After dozens of dieters developed symptoms of kidney failure, investigators discovered that Belgian pharmacists had been using mislabeled Chinese herbs to concoct the diet pills. Instead of Stephania tetrandra, pharmacists had packed the pills with derivatives of the herb Aristolochia fangchi, known to damage kidneys and to cause cancer in animals. At least 70 people experienced complete kidney failure, and some 50 more suffered kidney damage severe enough to require treatment.

    The first urinary tract cancers were found among these patients in 1994. To deter onset of the disease in others, doctors at Erasme Hospital in Brussels counseled patients whose kidneys and ureters had stopped functioning to consider surgical removal of the organs. Thirty-nine people opted for the operation over the past several years. When a team of researchers—coordinated by kidney specialist Joëlle Nortier—inspected the excised tissues, they were startled to discover that cancer had already developed in 18 patients, and precancerous lesions (dysplasia) were present in 19 others. Prescription records confirmed that patients who had taken the highest cumulative doses of Aristolochia were most likely to have cancer. As further evidence that Aristolochia was to blame, the team found evidence in all 39 patients that Aristolochia had bound to DNA, a process that could trigger mutations.

    Belgium banned the import of Aristolochia in 1992. But there's little to prevent a similar herbal disaster in the United States, asserts David Kessler, dean of Yale University School of Medicine and former commissioner of the Food and Drug Administration (FDA)—especially because he was just able to purchase Aristolochia in capsule form, he writes in an accompanying editorial. Unlike food additives and drugs, which are subjected to strict premarket tests for safety and effectiveness, products labeled “dietary supplement” may enter the market untested, thanks to the 1994 Dietary Supplement Health and Education Act. In effect, FDA cannot restrict the use of supplements unless substantial harm has been proven, Kessler says. “You shouldn't have to wait for harm to occur before you do a systematic safety review,” Kessler told Science. “It's time to have a premarket safety system.”

    Others argue that FDA's hands are not tied as tightly as Kessler implies. Varro Tyler, a retired dean of the School of Pharmacy at Purdue University in West Lafayette, Indiana, considers company-sponsored premarket testing impractical—the manufacturers simply can't afford it. Instead, he backs a recommendation by a 1997 presidential commission that called for FDA to convene an expert committee to review the wealth of information that already exists on botanicals and then inform consumers and manufacturers about unsafe preparations. “No company in its right mind” would market preparations deemed unsafe, he says. “That would be signing their own death warrant in terms of legal actions.”

    Last month, the FDA distributed warnings to health professionals and the supplements industry about the dangers of Aristolochia. In a few weeks, the agency plans to block the herb's entry into the United States. The action is long overdue, says Norman Farnsworth, director of the Center for Dietary Supplements Research on Botanicals at the University of Illinois, Chicago. The dangers of Aristolochia are so well known, he says, that Germany banned it in 1981 and the World Health Organization issued a warning on the herb in 1982. If the FDA “had been doing its job,” he says, “they would have banned this stuff 10 to 15 years ago.”

  8. ASTROPHYSICS

    Galaxies, Black Holes Shared Their Youths

    1. Robert Irion

    ROCHESTER, NEW YORK—The origin of massive black holes and the galaxies that surround them is a chicken-and-egg conundrum. In one model of galaxy formation, whopping black holes arose early in the history of the universe. Then, gas spiraling into each hole powered quasars, while great whorls of more distant gas gradually collapsed inward to form the galaxy's stars. An opposing hypothesis holds that galaxies coalesced first, and then black holes slowly grew at their hearts. A popular third model, which astronomer Virginia Trimble of the University of California (UC), Irvine, calls the “potato-salad model,” maintains that galaxies and their black holes matured simultaneously. “Of the three possibilities, this one always seemed the most intuitively obvious,” says Trimble.

    The best census to date of massive black holes justifies that intuition: Researchers reported here last week at a meeting of the American Astronomical Society that giant black holes and their host galaxies appear to be cosmic siblings that grew up at the same time. That conclusion rests on a newly unveiled relationship between the masses of black holes and how tightly gravity binds together the billions of stars around them.

    A team of 15 astronomers used a spectrograph aboard the Hubble Space Telescope to peer at the centers of galaxies within 120 million light-years of the Milky Way. The team found eight new black holes and included in its study six more recently discovered by other scientists, bringing the number of well-identified massive black holes to 33. That's enough for a good statistical analysis, says team member Richard Green, director of Kitt Peak National Observatory in Tucson, Arizona. “Our knowledge about black holes has moved beyond individual detections to a real look at the population,” he says.

    In tandem. Orbital speeds of stars (curved arrows) suggest that black holes and galaxies grow up together. CREDIT: ADAPTED FROM NASA AND K. GEBHARDT/LICK OBSERVATORY

    The spectrograph revealed the rapid orbital motions of stars in the core of each galaxy, providing a firm estimate of the mass of the hidden black hole. Those masses ranged from 3 million to 2 billion times the mass of our sun. Then the team went beyond previous analyses by examining how stars move throughout each galaxy's central “bulge,” a spherical cloud containing billions of suns. Those stars are too distant to feel the gravitational pull of the black hole, says astronomer Karl Gebhardt of UC Santa Cruz. Instead, their orbits through space reveal the mass and compactness of the overall bulge. That's a measure of how tightly the galaxy's parent gas cloud collapsed in on itself in the young universe.

    The team was surprised to find a near-perfect correlation between the orbital speeds of these far-flung stars in the galactic bulges and the masses of the black holes in the middle. The more massive the hole, the faster the stars throughout the bulge moved. “Normally, we don't see such a tight relationship,” Gebhardt says. “It's a slap in the face that something fundamental is going on.”

    That something, says astronomer John Kormendy of the University of Texas, Austin, almost certainly is an evolutionary partnership between galactic bulges and massive black holes. “If black holes are unusually massive whenever galaxies are unusually collapsed, then the black hole masses must be determined by the collapse process,” Kormendy says. “The major events that made the bulges and the black holes were the same events.” An alternative explanation for the partnership between bulges and black holes—that massive black holes arose first and then attracted massive bulges around them—is much less palatable, he notes. Baby black holes of such size would ignite powerful quasars, and their intense radiation pressures would drive apart the cocoons of gas around them rather than drawing them in tightly.

    Other astronomers aren't quite ready to discard alternative scenarios. “The new correlation is impressive,” says Andrew Wilson of the University of Maryland, College Park. “But I think we have a long way to go before we understand the symbiosis between black holes and galactic bulges.” Trimble sounds another caution: “It's difficult to know whether they have studied a really representative sample of galaxies.” If Hubble takes a broader census of black holes in galaxies chosen at random and finds a similar pattern, she says, more astronomers will find the case convincing.

    The study raises another puzzle: Each black hole is about 1/500th as massive as the bulge of stars it inhabits. No one knows why that figure is so consistent from one galaxy to the next, although Kormendy views it as another indication that the growths of galaxies and black holes are intimately linked. Outpourings of energy from quasars may stall torrents of gas from feeding black holes when they reach a particular size in relation to their host galaxies, says astrophysicist David Merritt of Rutgers University in Piscataway, New Jersey. However, no models or computer simulations have yet shown how nature arrives at that magical ratio.
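
    Given such a consistent ratio, a bulge measurement yields a quick black hole mass estimate. A minimal sketch using the 1/500 figure; the bulge masses below are illustrative values, not data from the survey:

    ```python
    # Estimate black hole masses from bulge masses via the ~1/500 ratio
    # reported by the team. The bulge masses here are illustrative only.
    HOLE_TO_BULGE = 1 / 500

    for bulge_msun in (1.5e9, 5e10, 1.0e12):
        hole_msun = bulge_msun * HOLE_TO_BULGE
        print(f"bulge {bulge_msun:.1e} Msun -> black hole {hole_msun:.1e} Msun")
    # The outputs span 3e+06 to 2e+09 solar masses, matching the range
    # of hole masses measured in the survey.
    ```
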

  9. OCEANOGRAPHY

    Missing Mixing Found in the Deep Sea

    1. Richard A. Kerr

    Climate modelers thought they could get away with a simple-minded ocean. In their simulations of how the sea's ponderous flows of water and heat affect climate, including future greenhouse warming, they assumed that something—they were vague as to what—evenly stirred the world ocean from top to bottom around the globe. But oceanographers gauging the tides of the world's seas from a satellite perch have found intense patches of tidally driven mixing deep within the open ocean. Once modelers include these patches, they should see some changes in model predictions of global warming. Climate prediction models “have a long way to go,” says oceanographer Raymond Schmitt of the Woods Hole Oceanographic Institution in Massachusetts, before they match the tidal mixing of the real ocean.

    The study of tides has already come a long way. “Tides are considered an old subject, a ‘solved problem,’ of no interest to most oceanographers,” laments physical oceanographer Carl Wunsch of the Massachusetts Institute of Technology. Early in the 18th century, it became obvious that the moon raises sea level by meters along the shore, although by only a few tens of centimeters in the open ocean. Where the water is shallow enough, the thinking went, tidally induced currents drag on the bottom and stir the shallow seas and waters of the continental shelves. Thus, through the tides, the moon's orbital energy slowly trickles into stirring the shallow ocean.

    But although tidal mixing seemed to be solved, some loose ends remained. It wasn't clear to everyone that the shallow ocean could account for the dissipation of all the tidal energy the moon imparts. And oceanographers had trouble with what they call the conveyor belt, which carries heat from the tropics to the poles. Dense, cold water sinks into the deep sea near the poles, travels to the tropics, and rises to the surface. To do so, it must mix with warmer, more buoyant waters along the way. Oceanographers have discovered one or two places where tidal currents flowing across a rough sea floor greatly enhance mixing (Science, 10 January 1997, p. 160). But they have found only a tenth of the mixing required to lift deep waters.

    Oceanographers needed a global tide gauge, which they found in the TOPEX/POSEIDON satellite. Every 10 days for 7 years, it has measured the height of the tides over the world ocean to an accuracy of about 1 centimeter. In this week's issue of Nature, geophysicists Gary Egbert of Oregon State University in Corvallis and Richard Ray of NASA's Goddard Space Flight Center in Greenbelt, Maryland, report how they used TOPEX/POSEIDON altimeter data to map tidal currents, tidal energy, and finally where tidal energy was being dissipated and mixing the ocean. Shelves and shallow seas such as the Yellow Sea off Asia show up clearly as areas of strong tidal dissipation, but some deep-sea, rough-bottomed areas show considerable mixing as well—the ridge joining the Hawaiian Islands, the Mid-Atlantic Ridge, and ridges of the southwest Pacific, among others. All told, open-ocean tidal mixing seems to account for enough energy dissipation to keep the conveyor belt running.

    “That's supportive of deep tidal mixing,” says Wunsch. “The answer is interesting.” It reaffirms the theoretical inference that “mixing is the driving force” behind the conveyor belt, says Schmitt. If not for tidal mixing in the deep sea, he notes, the ocean would fill to near the top with cold water and the conveyor would shut down. Because deep-sea tidal mixing determines the rate at which the conveyor runs, changes in the way tidal currents interact with the bottom could have changed the way the climate system worked in the past, Wunsch notes. The rearrangement of the continents and sea-floor ridges in plate tectonics may have altered tidal mixing, for example. And the uniform mixing of climate model oceans—in contrast to the actual patchy mixing—could skew model forecasts of ocean behavior and therefore greenhouse warming. Those are far-reaching conclusions to draw from imperceptible tidal bulges on the open sea.

  10. CLINICAL TRIALS

    Harvard's Koski to Lead Human Subjects Office

    1. Eliot Marshall

    As Congress steps up oversight of human clinical trials, the Administration is getting a high-level manager of its own to watch out for the interests of volunteers in U.S.-financed research. As expected, Secretary of Health and Human Services (HHS) Donna Shalala last week named anesthesiologist E. Greg Koski, 50, to run a new HHS Office for Human Research Protections (Science, 26 May, p. 1315).

    One of Koski's first jobs when he takes over in September will be to conclude more than 170 pending investigations into alleged infractions of regulations governing human experimentation. Koski may also be swept into a debate on the adequacy of those regulations. Last week a bill, H.R. 4605, was introduced by Representatives Diana DeGette (D-CO), Henry Waxman (D-CA), and John Mica (R-FL) that would extend HHS's authority over patients in federally funded studies and certain private studies that are not now subject to federal monitoring. Arguing for a “comprehensive reform,” the sponsors propose to bring all human research under a single standard, “independent of setting and funding source.” They also would like to create a “nonprofit entity” to accredit the local Institutional Review Boards that examine and approve clinical trials.

    Koski, an M.D.-Ph.D. associate professor at Harvard Medical School in Boston, has spent 30 years in the Harvard community, most recently as director of human research affairs for Partners HealthCare System Inc., which oversees the Massachusetts General Hospital in Boston and other Harvard-affiliated hospitals. As chief U.S. protector of research subjects, Koski will report directly to the assistant secretary of health, Surgeon General David Satcher. An agency reorganization that created his job at HHS also created a 12-person advisory panel—not yet named—that will give outsiders a chance to drive policy from the back seat.

    Koski could not be reached for comment, but scientific leaders say they welcome him to Washington. Jordan Cohen, president of the Association of American Medical Colleges, said last week that Koski “is highly respected within the academic medical community and brings to the new office a strong track record in the area of human subject protections.” The administration, he added, “is to be applauded for attracting a person of Dr. Koski's caliber.” Cohen also supports H.R. 4605.

    But Vera Hassner Sharav, a leader of Citizens for Responsible Care and Research in New York City, one of the patient advocacy groups that have faulted the government for weak enforcement of regulations, offers a more wary endorsement of HHS's new scheme. Sharav says she is concerned that a boost in the status of the human subjects office, formerly headed by Gary Ellis, does not necessarily confer independence. “The new office,” she says, “should be judged on its actions—on how vigorously and expeditiously it investigates allegations of research violations.”

  11. SCIENCE INTERVIEW

    China's Leader Commits to Basic Research, Global Science

    In an exclusive interview with Science, President Jiang Zemin offers a glimpse of a new China that is encouraging young scientists to use the Internet for their work—and reveals his secret past as a nuclear engineer

    BEIJING—The specially numbered car was waved through the western gate and entered an enormous compound that, not so long ago, didn't appear on city maps for security reasons. But residents have always known where it was, and some have now dubbed it the New Imperial Palace. Nestled against the western wall of the Forbidden City and a short walk from Tiananmen Square, the compound is the home of China's central government. It is a sprawling, scenic, lakeside expanse of small buildings, with ornately carved temple roofs, mahogany furniture, and golden cushions. It was in one of these buildings that the president of China, Jiang Zemin, greeted Science on the afternoon of 17 May for what, in the guise of a friendly conversation, proved to be a unique interview—an exchange that lasted just short of 2 hours.

    Jiang is a man of contrasts as striking as those of the nation he leads. He studied civil engineering in secondary school and power engineering at university. After graduation, Jiang worked as a mechanic on the power equipment of a food processing factory in Shanghai. After the founding of the New China, he was put in charge of the factory. During the Kuomintang's bombing of the Shanghai Power Plant in 1950, Jiang personally started the standby parallel diesel power generating sets to keep the ice cream in the factory from melting, a feat of which he is very proud. Later, he served as director of the Power Plant of the Auto Works in Changchun—the first auto works in China—chief engineer of the Shanghai Electrical Machinery Research Institute, and director of the China Nuclear Power Research Institute. The last will be news to most Chinese, because even today Jiang's curriculum vitae refers to the facility by its code name: the Wuhan Heat and Power Machinery Research Institute.

    All these accomplishments didn't exempt Jiang from attacks during the Cultural Revolution, however. But his rise to power continued after that chaotic period, first as vice chair of the State Administration for Import and Export, then as minister of the Electronics Industry, and next as mayor of Shanghai. At age 73, he demonstrates a broad range of interests—recounting history and citing Western novels from Madame Bovary to Gone With the Wind. He is fond of quoting Chinese poets, Confucius, and the Gettysburg Address, and refers frequently to his “research in the archives.”

    A protégé of the former Chinese leader Deng Xiaoping, Jiang has learned to show a multiplicity of faces. In his meeting with Science, he was alternately tough, charming, charismatic, personally warm, and somewhat rambling, yet clear on what he wanted to say and what he didn't. And although some observers saw him as a colorless bureaucrat and an intellectual lightweight when he succeeded Deng, Jiang makes clear in this interview with Science Editor Ellis Rubinstein that he is a pragmatist and is committed to major structural change. His comments are edited for brevity and include written answers to questions submitted prior to the interview.

    Science: Compared to the leaders of Western nations, China's government leaders are far more likely to be engineers—like yourself—or scientists. What is your view of the importance of science and technology to the national well-being?

    Jiang: People in China attach great importance to science and technology. China has a long history of splendid achievements in science and technology. But starting from the last years of the Ming dynasty, it began to lag behind other countries in terms of science and technology. From Newton's dynamics, to Einstein's theory of relativity, to the latest development of the Internet, science and technology have developed by leaps and bounds. So I often ask myself why China began to lag behind.

    Generally speaking, the reason lies with the feudal system in China. In the Ming dynasty, the Great Wall was renovated and strengthened. The feudal rulers forbade traveling abroad and later imposed restrictions on entry into and exit from China via sea. That closed the door to external exchanges between China and the rest of the world.

    You must have noticed that many Chinese leaders—Li Peng, Zhu Rongji, and myself—used to be electrical engineers. And many other Chinese leaders also have an engineering background. That is because we all wished to build up our country and rejuvenate our nation by relying on science and technology when we were young.

    International cooperation

    Science: Now China appears to be opening up rapidly, particularly in science. What are some of the promises and perils of opening up?

    Jiang: Fifteen years ago I was mayor of Shanghai, and recently I visited Shanghai again. I was surprised to see so many achievements in housing construction, public transport, commodities supply, and information infrastructure. And all this has come as a result of reform and opening up. Therefore, I've always believed that we need to cast aside … bad legacies [such as closing our borders. And yet] we still need to … promote all the fine traditions of the Chinese civilization.

    Recently, I paid a visit to Greece. I had begun to develop a strong interest in Greek civilization back in middle school days. Therefore, during the visit I discussed with people the comparative studies of the Eastern and Western cultures and the theories and principles advanced by Socrates, Plato, Aristotle, Archimedes, and others. My point is that, on the one hand, the Chinese people have every reason to be proud of their ancient tradition of civilization, but, on the other hand, we should not stop learning—not even for a single day—from all the fine traditions of the world.

    Science: What are the practical consequences of this philosophy?

    Jiang: China has signed agreements on scientific cooperation with the governments of 95 countries and established scientific links with more than 150 countries and regions. Chinese scientists have participated in 800 scientific collaboration projects launched by international organizations. As long as we follow the principles of equality, mutual benefit, sharing achievements, and respecting intellectual property rights, there should be no risks involved. On the contrary, international collaborations, exchanges of scientists, and the sharing of resources and information and research instruments will help advance science, promote economic and trade cooperation, and propel economic globalization.

    Confucius once said that whenever there are three people walking together, one of them is bound to be able to teach you something. And Confucius also said that to say what you know and what you don't know is knowledge. Through international cooperation, the Chinese scientific community has learned modern theory and management expertise, upgraded research and development capabilities, improved engineering and product quality, and produced good economic and social results. At the same time, Chinese scientists have made great contributions to modern scientific progress.

    Science: What new initiatives best show China's commitment to international scientific collaboration?

    Jiang: The Chinese government will, for example, fully support the development of worldwide and cross-region cooperation networks of scientific research and high-tech industries, such as setting up Sino-Israeli, Sino-Australian, and China-APEC [Asia-Pacific Economic Cooperation] scientific collaboration funds. Moreover, we encourage Chinese scientists to participate in the Fifth Research Framework of the European Union and in other major international collaborations on a selective basis. Meanwhile, some state-level scientific programs and research centers are open to foreign research institutions and scientists who are welcome to participate in our basic research and high-tech programs.

    China's research capacity

    Science: Can China afford to build its own large research facilities, or can it gain the necessary benefits from international collaborations with other countries?

    Jiang: Yes, we have put the construction of large research facilities high on the agenda to propel scientific advances. In recent years, we have built a number of large research facilities, such as the Beijing Electron-Positron Collider, Lanzhou Heavy Ion Accelerator, Hefei Tokamak Facility, and Qinghua Low-Temperature Nuclear Reactor, all of which have enhanced our research capability and broadened our capacity to probe the unexplored world. Large research facilities that are still under construction include the Diastrophism Monitoring Network [to measure movement of Earth's crust], the Large Area Multitarget Optical Fiber Spectroscopic Telescope, the Cooling Storage Ring of Lanzhou Heavy Ion Accelerator, and the Hefei Superconduction Tokamak Facility. The Chinese government will intensify its efforts for the construction of such facilities in the Tenth Five-Year Plan to improve the country's basic research capabilities. International collaboration is crucial to scientific advances and benefits all the participants. We hope to expand the channels for scientific collaboration with other countries and take an active part in international collaborations with large research facilities.

    Science: What are your feelings about basic versus applied research?

    Jiang: Looking at the history of science and technology development, we know that the outcome of basic research has been tremendous breakthroughs and progress for human society as a whole. Basic research has also promoted progress in applied research; in turn, the continuing advance of applied research inevitably demands further development in basic research.

    I dare say that without quantum theory, there would be no microelectronics technology and, likewise, without relativity theory, there would be no nuclear bombs, nor would there be any nuclear power stations. Sometimes people may not know in what specific areas a breakthrough in basic research may be applied. The scientists who established quantum theory could not have predicted how microelectronics technology would develop. So I have been telling people that our efforts should be rationally divided between basic research, applied research, and the development of technologies.

    Of course, one has to consider the level of economic development and the realities of an individual country. Personally, I hope that an economically strong country like the United States will give more input to basic research.

    Science: What new initiatives best show China's commitment to fostering basic research?

    Jiang: The state has steadily increased funding for basic research. We held another national conference on basic research last March and discussed how to create a more favorable research environment for scientists. We encourage scientists to choose research subjects on their own, and we encourage research institutes to adopt new mechanisms that foster scientific development. The government will continue to increase investment in basic research and encourage the relevant agencies, local governments, enterprises, and the private sector to support basic research in various ways.

    Science: What fields of scientific research do you feel are most likely to provide high payoffs for China's future well-being?

    Jiang: Research results in many new interdisciplinary areas may significantly influence China's future well-being. Personally, I think information science, life science, materials science, and resources and environmental studies will be crucial to China's sustained development in the future.

    Scientific exchanges

    Science: Of all the changes taking place in China, none may be more future-oriented than the new efforts by your administration to bring young people back from the United States and from Europe and to provide them with important positions. Are you personally supporting China's new policies?

    Jiang: It is our policy to ensure [scientists and engineers] the freedom to come and go—to travel abroad and to come back to work here in the motherland. Despite all the different views, we still believe that they should have the freedom to come and go. And we need to further develop the Chinese economy and create better conditions to attract more people back.

    Science: Many in the West think that China's young people are unable to freely engage in Internet discussions with their peers outside China and are limited in their ability to travel freely. Is that true?

    Jiang: I'm sorry that [people would have such a] presumption, [because it is] not reality. In recent years, China's young people have been not only free to hold discussions on the Internet but also given many opportunities to go abroad for study or work. Between 1978 and 1999, nearly 320,000 Chinese students and scholars went abroad to study—more than double the number (about 130,000) who went abroad between 1872 and 1978. Over the past 20 years of reform and opening up, China has received more than 340,000 students and scholars from 160 countries and regions.

    Moreover, the Chinese government has since 1993 implemented the policy of supporting scholars to study abroad, encouraging them to return, and ensuring them the freedom to come and go. Last year, we issued a regulation on the management of intermediary agencies for self-funded students who intend to study abroad. All this has made it easier for both government-funded and self-supported students to study abroad.

    As to the Internet, its development has nowadays afforded us easier access to a wealth of information throughout the world. According to a survey released by the China National Network Information Center, by the end of last year there were 8.9 million netizens in China, most of them aged between 24 and 35. I'd like to point out that the added value of information is reflected by the fact that it is open to all and shared by all. So I hope all young people, both Chinese and foreign, and all scientists and scholars around the world will make the best use of the Internet and other means of communication.

    Science: Are you satisfied with current efforts by the Chinese Academy of Sciences and the government to lure back Chinese scientists working abroad? If not, what more can the government do?

    Jiang: Generally speaking, I'm satisfied. But a great deal of work remains to be done. Competition in scientific research is competition for talent. Many developing countries, China included, have seen brain drain to varying degrees. Since 1978, about 110,000 of the 320,000 Chinese who have gone abroad to study have come back and made contributions to the country. Of course, for various reasons, quite a number of them have decided—at least for the moment—not to come back, which is understandable. The Chinese government is taking measures to attract them back, including introducing more favorable policies and more flexible mechanisms and creating better working and living conditions for them. Meanwhile, all sorts of technology- or business-development parks set up by local governments at all levels have served as incubators for scientific and technological development and for turning research results into products. I believe more and more people will come back as our economy develops and research conditions improve.

    Educational reform and science literacy

    Science: Can China become a world leader in research if its education system continues to school young people in the accumulation of facts and the copying of wise people of earlier eras? If not, what must be done to encourage creativity and innovation?

    Jiang: I myself grew up under China's traditional education system, and we must see both sides of that system, which contains many good things. Basic education in China has produced a great number of world-renowned scientists and engineers who have made important contributions to world civilization and progress. I think schooling young people in the accumulation of facts should be encouraged. Of course, it is even more important to encourage creativity based on predecessors' achievements. We are reforming the educational system by promoting all-round development, combining education with scientific research, and cultivating creative and innovative people. I have full confidence in the ongoing educational reform.

    Science: How concerned are you about the level of scientific literacy among China's citizens? What initiatives, if any, do you support to improve science education in grade school?

    Jiang: Fundamentally, future competition hinges on talent, or on the overall quality of the people. To improve our people's overall quality and cultivate talent is a long-term systems engineering challenge. As a Chinese saying goes, it takes a decade to grow a tree and a century to cultivate people. I think China, as a developing country, should keep improving the scientific literacy of all its citizens. On the one hand, efforts should be redoubled to popularize science, encourage people to learn and love science, carry forward the spirit of scientific research and innovation, and advocate scientific methods. On the other hand, the 9-year compulsory education system should be promoted in a comprehensive manner, reform of curricula should be deepened, and efforts should be made to set up extracurricular activity bases and science museums to enhance students' creative capabilities and promote all-round development.

    The dark side of science and technology

    Science: As was mentioned earlier, scientific breakthroughs often bring their own concerns. In the West, many people worry about genetically modified foods, stem cell research and cloning, genetic testing, the potential for telecommunications and nanotechnology advances to erode an individual's right to privacy, and so forth. What is your view?

    Jiang: We are also very much concerned about these issues. The prevention of gene-based discrimination, the protection of privacy, the right to information and justice—these are all issues of concern to us. I think it is important to uphold the principle of freedom of science. But advances in science must serve, not harm, humankind. The Chinese government is now mulling over new rules and regulations to guide, promote, regulate, and guarantee a healthy development of science. I believe biotechnology—especially gene research—will bring good to humanity. And I also believe that telecommunications and nanotechnology advances will have profound, positive impacts on the future of our society.

    However, in my conversations with President Clinton and with your former presidents Carter and Bush, I shared my concerns with them: How can we protect our young people from the negative impact of the Internet? The media have been developing very rapidly. I have often told people in the media, quite candidly, that we are open to opinions and suggestions from people from all walks of life, but one thing must be ensured: that facts should not be distorted. And I think this should also apply to the Internet. Otherwise people will wonder how to tell truth from distortion on the Internet.

    Science: Some in China feel that one of the negatives from the West is the threat that foreign companies pose to China's intellectual property, including its genetic resources. Are you concerned about this and, if so, what is China doing to combat that threat?

    Jiang: Intellectual property is a very important issue. I have always stressed that we should respect the intellectual property rights of others and know how to protect our own. I believe a “win-win” outcome can be achieved by collaborating with the West according to the principles of equality, mutual respect, and mutual benefit. As far as genetic resources are concerned, China issued in June 1998 the Provisional Rules on the Management of Human Genetic Resources, designed to promote international cooperation and exchanges under these principles [Science, 18 September 1998, p. 1779]. The Chinese government encourages collaborations between Chinese scientists and their foreign counterparts in the field. What it discourages is nothing more than the collection of samples by individuals or companies for commercial purposes in the name of scientific research. Since the provisional rules took effect, collaborations between Chinese research institutions and Harvard University, NIH [the U.S. National Institutes of Health], and some European research institutions have been going on smoothly.

    Jiang's world view

    Science: Finally, Mr. President, you have studied the West carefully, analyzing strong and weak points. Perhaps you could share some of your views.

    Jiang: Yes, I have made in-depth comparative studies of the United States and European countries. The U.S. is a rather young country with a relatively short history, whereas Europe has a much longer history. So what is the reason for the very rapid development of the U.S.?

    You know that as members of the Chinese Communist Party, we highly respect Marx and Engels. I have read some archive materials and learned that Engels visited the U.S. from August to September of 1888. He realized that the U.S. was a rather open-minded and creative country. Many talented people from all over the world have migrated to the United States, and Americans are always ready to learn anything good from other peoples. As a result, the U.S. is able to draw on the strong points of people throughout the world.

    Some people believe that Europeans tend to love traditional things more. Hoping to see a united Europe, European countries have introduced the euro, expecting it to counterbalance the U.S. dollar to some extent. Today, however, the euro faces devaluation. Last year, I visited Switzerland, Italy, Austria, France, and many other European countries, and this question has always been on my mind. Europe has a long history. In spite of the bourgeois revolution in Britain some 300 years ago and the French revolution over 200 years ago, a legacy of feudalism persists in Europe in various forms.

    So with my background in science and technology, I have come to this conclusion: The world we live in today is a colorful and diverse one. One cannot expect a single, universally applicable political model. In my university days, we ran tests with alternating-current generators; we used different methods, and yet we all got similar results. I have often cited this example to illustrate this point frankly in my discussions and exchanges of views with leaders of many countries.

    One thing I rather envy is that Clinton, Chirac, Blair, and Schroeder are all younger men. They are very capable and also eloquent. Each leader has a different domain: science and technology is my field, while President Clinton is a legal expert. Discussions and exchanges of views thus afford leaders opportunities to learn from each other's strong points and to make up for each other's deficiencies. For this reason, I am a strong advocate of personal contacts between leaders of different countries. Of course, we may also exchange information by telephone and over the Internet.

  12. BIOPHYSICS

    What's Shakin' in the Ear?

    1. Adrian Cho

    Auditory researchers agree that hair cells are the ear's miniature amplifiers. They just don't agree about how the curious devices work

    Jimi Hendrix may have deafened many a fan with feedback, but they would have struggled to hear him in the first place without the feedback that powers the ear. The cochlea, the coiled heart of the mammalian ear, not only vibrates in response to sound, but it also pumps energy into its own vibrations to beef them up. Such “active feedback” gives the mammalian ear its exquisite sensitivity and its ability to distinguish pitch. Take it away and the ear becomes an almost useless hole in the head. But no one is sure just how the inner ear pumps up the volume.

    Hear here?

    Outer hair cells might pump like pistons, or trapdoors in stereocilia might snap shut, to amplify vibrations in the ear.

    ILLUSTRATION: K.SUTLIFF AND C.CAIN

    “The notion that the cochlea has a biological amplifier, and that this is essential to normal hearing, has been an imponderable that has excited people for 20 years,” says Paul Fuchs, a neuroscientist at The Johns Hopkins University School of Medicine in Baltimore. Auditory researchers have known for decades that peculiar cells known as hair cells turn vibrations into electrical signals, and recent advances raise the prospect of countering hearing loss by replacing dead or damaged hair cells. But the cells also appear to amplify the vibrations, and researchers have two distinct ideas about how they do it. Last month, both rival models got a boost from new findings.

    Roughly speaking, it's a question of pistons versus trapdoors. Inside a mammal's ear, a long, narrow membrane called the basilar membrane vibrates in response to sounds. Low tones elicit ripples closer to one end of the membrane, while high tones raise a ruckus nearer the other. The membrane is carpeted by hair cells, and some of these, known as outer hair cells, contract and elongate when zapped with electricity, a behavior known as somatic electromotility. Some researchers believe this pistonlike pumping amplifies the motion of the membrane. The piston picture got a big boost last month. In the 11 May issue of Nature, neurobiologists Peter Dallos, Jing Zheng, and their team at Northwestern University in Evanston, Illinois, reported that they had isolated the protein, which they dubbed prestin, that gives the outer hair cells their unique ability to contract. Score one for electromotility.

    Every hair cell, however, also wears a crown of stiff fibers called stereocilia, which tip to one side as vibrations in the basilar membrane push them up against the overlying tectorial membrane. In response, trapdoorlike ion channels open in the stereocilia and let in potassium and calcium ions. This is the mechanism that converts vibrations into the chemical signals that fire nerves. Some researchers think it also amplifies the vibrations. The calcium ions bind to the channels, snapping them shut, and some believe this pulls the stereocilia in the opposite direction, causing them to push the tectorial membrane and amplify the motion of the basilar membrane. Recently, neuroscientist Jim Hudspeth, physicist Marcelo Magnasco, and their colleagues at The Rockefeller University in New York City developed a mathematical model of such a trapdoor amplifier. In the 29 May issue of Physical Review Letters, they argue that a single property of the model can explain some puzzling characteristics of hearing, such as why the ear registers soft tones and pitches more effectively than loud ones.

    The wrangling over how the ear amplifies sound goes back more than 50 years. At that time, scientists agreed that the cochlea could only respond passively to incoming sounds, much as the strings of a piano will ring if you shout into the guts of the instrument. But in 1948, a 28-year-old maverick astrophysicist named Tommy Gold, then at Cambridge University, pointed out that fluid in the cochlea would damp out vibrations unless the organ somehow amplifies them. Gold's idea fell on deaf ears, so to speak, because it contradicted the best data available at the time. Experiments with cochleas from cadavers led the Hungarian-born physiologist Georg von Békésy to conclude that any amplification must take place between the basilar membrane and the nervous system, not in the cochlea itself. Von Békésy's research won him the Nobel Prize in medicine in 1961.

    In the 1970s, however, new experiments showed that the ear was livelier than von Békésy had reckoned. In 1971, William Rhode, a physiologist at the University of Wisconsin, Madison, discovered that in live tissue samples from squirrel monkeys, the basilar membrane boosted small vibrations more than it could have done without feedback. Other studies gave further evidence of biological amplifiers. In von Békésy's dead cochleas, it seemed, the amplifiers had simply been unplugged.

    But if amplifiers existed, where were they? A possible answer emerged in 1985, when neuroscientist Bill Brownell, then at the University of Geneva in Switzerland, discovered the outer hair cells' unique ability to convert electrical energy into motion. “Nature doesn't devise a completely new mechanism for no reason,” Dallos says. “So clearly outer hair cell motility does something.” The discovery of prestin by Dallos's team gives a boost to the idea that that something is amplification: When the researchers genetically engineered human kidney cells to produce the protein, they found that the cells gained the ability to contract and lengthen in response to electrical signals, just as hair cells do.

    Even so, proponents of the motility amplifier still haven't explained how prestin makes cells contract or precisely how the pumping cells account for the ear's prodigious ability to distinguish between tones. “When people make models of how you should use the electromotility, they include some sort of extra [frequency] filter,” says Mario Ruggero, a neuroscientist at Northwestern. “I find that somewhat dissatisfying.” Researchers working with nonmammalian vertebrates have also begun to question the piston model. The hair cells of birds, amphibians, and reptiles cannot pump, they point out. Yet these animals hear nearly as well as mammals do, albeit at lower frequencies. Seeking an amplifier common to all sharp-eared animals, these researchers point to the stereocilia.

    The stereocilia, proponents argue, can explain the ear's fine tuning and other quirks of hearing with a single mechanism. Over the past 2 years, Hudspeth and colleagues have developed a mathematical model in which the stereocilia tune the ear so that it is poised between two stable states, one quiet, the other ringing like a public-address system with the volume turned up too high. That on-the-brink point is called a Hopf bifurcation. In their most recent paper, Hudspeth and his team report that it puts the ear in a nonlinear dynamical regime—one in which the output is not simply proportional to the input. That nonlinear state explains why the ear amplifies softer sounds more intensely than loud ones, and why it is better at discerning their pitch. It also accounts for the third “combination tone” people sometimes hear when two tones are played at once.
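
    In the Hopf picture, that compressive nonlinearity falls out of the mathematics directly. As a minimal sketch—using the generic normal form of a Hopf oscillator, not the Rockefeller team's full model—let the complex number z stand for the oscillation of a hair bundle driven by a sound of strength F and frequency ω:

    \[ \frac{dz}{dt} = (\mu + i\omega_0)\,z - |z|^2 z + F e^{i\omega t} \]

    Here μ measures the distance from the bifurcation and ω0 is the bundle's natural frequency. Tuned exactly to the critical point (μ = 0) and driven at resonance (ω = ω0), the steady response obeys |z|³ = F, so the amplitude grows as the cube root of the stimulus and the gain |z|/F scales as F^(−2/3). Faint sounds are thus amplified far more, relatively, than loud ones—just the compressive behavior described above.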

    Proponents of the stereocilia model, however, lack a key piece of their puzzle: In spite of overwhelming circumstantial evidence, no one has ever cloned the ion channel at the heart of the model or proved that it works as researchers claim. “We truly don't know that the stereocilia mechanism exists in the outer hair cells,” Ruggero says.

    The biggest challenge for either theory is to explain how mammals can hear at extremely high frequencies when other animals can't. If trapdoor amplification were the only mechanism at work, then you might expect all creatures with stereocilia to have similar hearing ranges. Yet bats can perceive pitches up to 100 kilohertz, 10 times higher than stereocilia-bearing nonmammals can hear.

    Because only mammals have pumping outer hair cells, it might seem obvious that electromotility accounts for the mammalian ear's startling tonal range. But that idea seems to run up against basic physics. In experiments with cell cultures, Dallos's team has shown that prestin enables cells to change shape within microseconds, fast enough to amplify vibrations at 100 kilohertz. For the protein to react that quickly, however, the voltage difference between the inside and the outside of the cell must change by roughly a millivolt within microseconds. To make that happen, a hefty charge must flow onto and off of the cell. Yet such a charge can't shuttle back and forth fast enough to keep pace, Fuchs of Johns Hopkins says.
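
    Fuchs's objection can be put in rough numbers. As a back-of-the-envelope sketch—with textbook values assumed, not figures from the experiments described here—treat the cell membrane as an RC circuit, whose voltage can follow only signals slower than a corner frequency set by its time constant:

    \[ \tau = R_m C_m, \qquad f_c = \frac{1}{2\pi\tau} \]

    With a membrane time constant in the ordinary range of 0.1 to 1 millisecond, the corner frequency falls between roughly 160 hertz and 1.6 kilohertz—orders of magnitude below 100 kilohertz. Unless outer hair cells somehow sidestep that low-pass filter, millivolt voltage swings within microseconds look out of reach.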

    The debate over hair cells may turn on the ear of a mouse. By knocking out the gene for prestin, researchers could turn off the pistons in the mouse's outer hair cells. Such an experiment is the next logical step, all agree. If the ion channel of the stereocilia powers the mammalian ear, the knockout mouse should hear nearly normally. “The problem is going to be the other way around,” Hudspeth says. “If the mouse doesn't have normal hearing, what does that mean?” Knocking out prestin may somehow interfere with the stereocilia, Hudspeth says, so a nearly deaf mouse won't settle the issue. In which case, you'll likely hear more about it, as long as your hair cells hold out.

  13. MATERIALS SCIENCE

    New Tigers in the Fuel Cell Tank

    1. Robert F. Service

    After decades of incremental advances, a spurt of findings suggests that fuel cells that run on good old fossil fuels are almost ready for prime time

    It's no wonder that miniature power plants called fuel cells are a perennial favorite in the quest for cleaner energy: They generate electricity from fossil fuels without burning them and spewing pollutants. But the technology's promise has always seemed just beyond reach. For one thing, most versions of fuel cells work best on pure hydrogen gas—a fuel, notorious for its role in the Hindenburg zeppelin's fiery demise, that's tricky to store and unwieldy to transport. And a leading alternative design—fuel cells that run on readily available fossil fuels—has lagged because these are prone to choking on their own waste.

    At last, however, researchers have made critical strides in developing commercially viable fuel cells that extract electricity from natural gas, ethane, and other fossil fuels. Conventional ceramic cells, known as solid oxide fuel cells (SOFCs), work this magic by converting, or reforming, the hydrocarbons to hydrogen inside the cells. That demands ultrahigh temperatures, which in turn require expensive heat-resistant materials. But scientists have found a way to bypass this costly reforming process: a new generation of SOFCs, including one featured on page 2031, that convert hydrocarbons directly into electricity. And even the standard reforming SOFCs are on a roll. A recent demonstration of a system large enough to light up more than 200 homes showed that it is the most efficient large-scale electrical generator ever designed.

    “I think we've turned the corner,” says Mark Williams, who oversees fuel cell research at the National Energy Technology Laboratory in Morgantown, West Virginia. Versions of ceramic fuel cells, experts hope, will power everything from individual homes to municipal electrical grids. The market for the devices, says Subhash Singhal, who heads fuel cell research at the Pacific Northwest National Laboratory in Richland, Washington, could reach billions of dollars over the next 10 to 15 years. Says Kevin Kendall, a solid oxide fuel cell expert at Birmingham University in the United Kingdom: “Suddenly things are happening that weren't possible 10 years ago.”

    That's rapid progress indeed for a technology now entering its third century of development. Today's hydrogen-powered fuel cells operate on much the same principles as the first cell invented in 1839 by Sir William Grove, a Welsh judge. They're configured like a battery, with a negatively charged electrode, or anode, and a positively charged cathode separated by a membrane that allows only certain ions to pass through. When hydrogen gas is infused into the space surrounding the anode, a catalyst splits the molecules into protons and electrons. The liberated electrons flow into the anode and out of the cell as electric current that can run attached devices or be fed into electrical grids, before completing the circuit by returning to the cathode. The protons, meanwhile, are impelled through the membrane to the cathode, where they combine with oxygen from the air and electrons from the external circuit to form water and heat.
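
    In chemical shorthand—written here for the acidic, proton-conducting membranes just described—the two half-reactions are:

    \[ \text{Anode: } \mathrm{H_2 \rightarrow 2H^+ + 2e^-} \qquad\qquad \text{Cathode: } \mathrm{\tfrac{1}{2}O_2 + 2H^+ + 2e^- \rightarrow H_2O} \]

    The net reaction is simply hydrogen plus oxygen yielding water; the cell's current is the electrons taking the long way around, from anode to cathode through the external circuit.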

    Solid oxide cells are like hydrogen cells in reverse. In SOFCs, oxygen grabs electrons streaming into the cell via the cathode, creating negatively charged oxygen ions. These ions migrate across a ceramic membrane, typically a substance such as yttria-stabilized zirconia (YSZ) that conducts oxygen ions well. At the anode, the oxygen ions react with a variety of hydrocarbons to produce electricity, water, and carbon dioxide.
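
    Written the same way—with hydrogen as the simplest illustrative fuel—the solid oxide cell's chemistry runs in mirror image:

    \[ \text{Cathode: } \mathrm{\tfrac{1}{2}O_2 + 2e^- \rightarrow O^{2-}} \qquad\qquad \text{Anode: } \mathrm{H_2 + O^{2-} \rightarrow H_2O + 2e^-} \]

    A hydrocarbon fuel simply consumes more oxygen ions per molecule; complete oxidation of methane, for instance, would run \( \mathrm{CH_4 + 4O^{2-} \rightarrow CO_2 + 2H_2O + 8e^-} \).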

    The problem is that besides ripping apart hydrocarbon chains, SOFC nickel anodes often weld together carbon atoms in the hydrocarbon shards instead of allowing them to bind with oxygen to form CO2. These sooty fragments tend to stick to the anode. “That more or less destroys your fuel cell,” says Scott Barnett, an SOFC expert at Northwestern University in Evanston, Illinois.

    Unless the hydrocarbon feedstock is reformed first, the carbon deposits will accumulate at temperatures—around 1000°C—typically needed to jiggle the oxygen ions enough to force them through the ceramic membrane to the anode. To make it easier for oxygen ions to get to the anode, Barnett and his colleagues used an atomic spray-painting technique to grow YSZ membranes just 5 millionths of a meter thick, much thinner than the standard 150-micrometer membranes. Oxygen ions could slip through this ultrathin membrane at temperatures closer to 600°C. But nickel anodes are dormant at these temperatures, so Barnett's group had to be doubly innovative: They also developed a nickel-spiked cerium-oxide anode that works at lower temperatures than nickel alone.

    The lower operating temperature has two payoffs: It reduces carbon crud buildup and cuts down heat stress on the apparatus itself. That means engineers should be able to build fuel cell components from steel rather than expensive heat-resistant alloys, says Singhal. And that, in turn, should lower the cost of the devices.

    Stacking the deck.

    Using layers of fuel cell elements, planar setups should generate more power per unit space than other designs.

    Other designs are breaking ground, too. In the 16 March issue of Nature, Raymond Gorte and his colleagues at the University of Pennsylvania in Philadelphia describe a different way of reducing carbon buildup. Instead of a nickel-containing anode, they developed one from copper laced with either cerium or samarium oxide that doesn't promote the formation of carbon-carbon bonds. They also slimmed down their YSZ membrane—to about 60 micrometers—to enable the cell to run at around 700°C. By going to an even thinner YSZ membrane like that used by the Northwestern group, Gorte says, the researchers should boost their cell's performance even further.

    The most novel approach so far comes from a team in Japan. On page 2031, Takashi Hibino of the National Industrial Research Institute of Nagoya and his colleagues at Nagoya University describe a unique fuel cell design in which the hydrocarbons and air are pumped into a single chamber, where they surround the electrodes and electrolyte membrane, which is a single wafer made primarily of cerium dioxide. One side of the membrane is dabbed with nickel and serves as the anode, while the other side, the cathode, is a ceramic composite of samarium, strontium, cobalt, and oxygen. The cathode passes extra electrons to the oxygen to form oxygen ions, which migrate through the membrane to the anode. There the oxygen ions react with carbon monoxide and hydrogen—the two molecules produced when gaseous hydrocarbons are broken down by the anode—to form CO2, water, and electricity.
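
    One plausible way to write out the sequence the team describes—taking methane as an assumed example of the gaseous hydrocarbons—is partial oxidation at the anode followed by electrochemical oxidation of the fragments:

    \[ \mathrm{CH_4 + \tfrac{1}{2}O_2 \rightarrow CO + 2H_2}, \qquad \mathrm{H_2 + O^{2-} \rightarrow H_2O + 2e^-}, \qquad \mathrm{CO + O^{2-} \rightarrow CO_2 + 2e^-} \]

    The first step is ordinary catalysis by the nickel; only the last two push electrons into the external circuit.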

    Hibino's team found that their fuel cell works well at around 500°C. Besides deterring carbon buildup on the anode, the low operating temperature plays to the membrane's strength: at such temperatures, samarium-doped cerium oxide is a far better oxygen-ion conductor than standard YSZ. Singhal warns, however, that it may take years of engineering tinkering to scale up this and other cutting-edge SOFC designs for industrial use.

    Further along in the pipeline are the fuel-reforming SOFCs. The 900-pound gorilla in this arena is a fuel cell, developed by Siemens Westinghouse, that reforms natural gas into hydrogen. After 3 decades of engineering improvements, Westinghouse has unveiled a new design that uses hot, pressurized exhaust gases from a network of fuel cells to drive a microturbine generator, which produces electricity on top of that already generated by the cells themselves. In a test of their 220-kilowatt cogeneration system this spring, the company reported converting nearly 60% of the energy in natural gas to electricity—an efficiency higher than that achieved by any power plant ever built, according to Williams. And that, he says, “is a remarkable achievement.” Westinghouse intends to build a larger version—packing around 1 megawatt—that could replace conventional power plants. And because the cogeneration cell emits only a fraction of the sulfur, nitrogen oxides, and CO2 that a conventional coal- or gas-fired plant produces for the same amount of energy, says Williams, it would be more benign to the environment.

    As promising as the cogeneration cell sounds, the design may be the first and last of its kind. That's because other companies are hot on the trail of a potentially more promising approach: “planar” SOFCs made up of stacks of fuel cells, each consisting of thin electrode slabs with electrolyte membranes in between. Such stacking should generate as much as 10 times the power put out by an equivalently sized SOFC of the Westinghouse design, says Williams. Proponents of the planar technology argue that this could drop the price of a kilowatt of generating capacity from $1000—Westinghouse's target for its cogeneration—to $400, in the ballpark of power plants that run on natural gas.

    At the moment, planar SOFCs are small-scale demos putting out tens of kilowatts. That may not last long, as the U.S. government is stepping up its backing of this and other SOFC technology. Earlier this month, the Department of Energy launched a new program—the Solid State Energy Conversion Alliance—to grease the wheels for commercializing such fuel cells. The aim of the $35-million-a-year program, says Singhal, is to get companies to blend recent advances in SOFC materials and design with low-cost manufacturing techniques honed in the semiconductor industry and elsewhere. If successful, he says, within a decade fuel cells that run on natural gas and other abundant fossil fuels should be on the market. Until there's an infrastructure for storing and distributing hydrogen gas, that should make SOFCs the biggest game in town.

  14. AMERICAN SOCIETY FOR MICROBIOLOGY

    Microbes Display Their Versatility at ASM Meeting

    1. Evelyn Strauss

    LOS ANGELES—About 12,000 scientists gathered here from 21 to 25 May for the 100th annual meeting of the American Society for Microbiology (ASM). This year's lineup boasted presentations on a wide array of topics—everything from the body's defenses against microbial pathogens to bacterial involvement in geological processes.

    Triggering Prion Formation in Yeast

    Although tainted beef can cause mad cow disease and inherited mutations can lead to similar forms of human brain degeneration, neither factor triggers most cases of these conditions, known as transmissible spongiform encephalopathies (TSEs). The diseases, named for the spongy appearance of the patients' brains, are apparently caused by the abnormal deposition of infectious proteins, or prions. Most often, though, prions arise spontaneously in people who haven't been exposed to contaminated meat and don't carry an inherited mutation in the suspect gene. Researchers now have a new clue about what spurs the formation of these abnormal proteins—at least in yeast, which can also develop prionlike deposits.

    At the meeting, Reed Wickner, a yeast geneticist at the National Institute of Diabetes and Digestive and Kidney Diseases in Bethesda, Maryland, reported that a yeast protein called Mks1p is required for the generation of these misbehaving proteins. (The results also appear in the 6 June issue of the Proceedings of the National Academy of Sciences.) It's unlikely that the same protein sparks mammalian prion deposition, says Byron Caughey, a TSE biochemist at Rocky Mountain Laboratories in Hamilton, Montana. But, he adds, the new result “encourages us to look for a similar sort of player that might be related to TSE disease.” Finding such a player might eventually lead to new therapies for inhibiting prion diseases.

    Although nonprion proteins have long been thought to affect prion biology, only very recently have researchers started to pin them down, and only in yeast. Wickner and postdoc Herman Edskes have been studying a yeast protein called Ure2p. In its normal form, Ure2p is involved in nitrogen metabolism. But like mammalian prion proteins, Ure2p can clump into insoluble fibers that prevent it from performing its normal activities, and the researchers exploited this phenomenon to measure prion formation: Cells in which Ure2p has converted to the prion form can import a compound called ureidosuccinate (USA), because normal Ure2p blocks production of the import machinery.

    Getting started.

    Under the influence of the protein Mks1p, Ure2p forms insoluble prion deposits (bright green spots). Ras pathway activity can block Mks1p function, however.

    CREDIT: R. WICKNER AND H. EDSKES

    Searching for a trigger that initiates Ure2p prion formation, Wickner and Edskes decided to test Mks1p, a protein known to control Ure2p activity. Using the USA test, they found that Ure2p prion formation is undetectable in cells lacking Mks1p. But the incidence of prion-forming cells rose above normal when the researchers genetically engineered yeast to make slightly more than the normal amount of Mks1p. Together, these results show that the protein is required for the spontaneous generation of the Ure2p prion, says Wickner.

    “This gene is the first one required for forming the prion seed,” says Susan Liebman, who studies yeast prions at the University of Illinois, Chicago. It may not be the last, however. Liebman's team has evidence for an activity that is required for generating another yeast prion, but has not yet identified the gene responsible for it.

    Once the first Ure2p prion forms, however, Mks1p is apparently not required for spreading prions to unrelated cells and to offspring. Edskes and Wickner transferred cytoplasm from cells that were carrying the Ure2p prion into yeast strains that either could or could not make Mks1p. After multiple generations—by which time any Mks1p from the original donor strain would be diluted to nothing—cells lacking Mks1p were still producing prions. The result indicates that Mks1p is not needed for prion propagation.

    These experiments suggest that prion initiation and propagation in yeast are controlled by different genes. “The same distinction might apply to mammalian systems, but it hasn't been possible to approach the question experimentally,” says David Harris, a cell biologist at Washington University School of Medicine in St. Louis. Exactly how Mks1p fosters Ure2p prion formation remains unclear. But the discovery also provides an intriguing link to one of the cell's major growth control pathways, named the Ras pathway after one of its prominent components.

    Ras activity leads to the inactivation of Mks1p, and Wickner and Edskes showed that a form of Ras that's stuck on “on” decreased the incidence of prion formation more than 750-fold in yeast. This result suggests, says Wickner, that “the general control systems of the cell are impinging on the prion system.” If so, Liebman says, “there is going to be a complicated control of prion generation.”

    No one knows if something similar occurs in humans, but if it does, Liebman adds, “maybe it will be possible to engineer those controls in our favor, once we understand what they are.”

    Pumping Iron?

    Few cells can outdo the macrophage in sheer destructive potential. These immune cells literally eat microbial invaders for breakfast, first engulfing and encasing them in membranous intracellular sacs called phagosomes and then unleashing a host of weapons that tear the microbes apart. Most microbes are goners if a macrophage engulfs them, but some—including those that cause leprosy and leishmaniasis—have developed strategies for surviving in the phagosome's noxious environment. For example, they prevent the phagosome from picking up enzymes and other harmful chemicals that would otherwise destroy them. In an evolutionary arms race, however, the macrophage has evolved countermeasures to keep even these microbes in check. Now, new work reveals the mechanism of one of the weapons the cells deploy in this war of attrition.

    Decades ago, researchers identified a mouse gene called Nramp1 that confers resistance to some phagosome-dwelling microbes, and although they isolated it several years ago, they couldn't pin down its exact function. At the ASM meeting, Philippe Gros, a mouse geneticist at McGill University in Montreal, described results suggesting that the Nramp1 protein contributes to the demise of the microbes by pumping metal ions such as manganese out of the phagosome, depriving the microbes of essential nutrients. “It's been recognized for a long time that certain genetic loci have a major role in nonspecific immunity, but their actions have been pretty mysterious,” says Ferric Fang, a molecular microbiologist at the University of Colorado Health Sciences Center in Denver. “This is a direct demonstration of the molecular mechanism by which a fundamental system of innate immunity is working.”

    Researchers had suspected that Nramp1 might pump metal ions out of the phagosome. It's in the right location, dwelling in the membrane that surrounds the phagosome. And Nramp1's amino acid sequence resembles that of other proteins known to transport metals such as manganese and iron. But they had trouble proving what Nramp1 does, partly because they had no way to measure the flow of molecules across the phagosome membrane. Gros and his colleagues, working with Sergio Grinstein at the University of Toronto, solved this problem.

    First, the researchers allowed macrophages from normal and Nramp1-deficient mice to engulf tiny beads coated with Fura-6, a chemical that fluoresces orange except when manganese ions are present. After introducing the metal into the cells, they found that the fluorescence declined much more quickly in the Nramp1-deficient phagosomes than in normal ones, suggesting that the protein either keeps manganese from entering the phagosomes or removes it from them.

    To distinguish between these possibilities, the researchers loaded the two types of macrophages with Fura-6-coated beads whose fluorescence had already been blocked by manganese, and then measured how long it took for fluorescence to reappear. The phagosomes of cells containing normal Nramp1 glowed more quickly and to a greater extent than did those of Nramp1-deficient cells. Together, the experiments showed that Nramp1 ejects manganese from the phagosome. “It's never been demonstrated so convincingly that a host mechanism for getting rid of pathogens is nutritional deprivation in the phagosome,” says Samuel Miller, a microbiologist at the University of Washington, Seattle. “This work shows that pumping ions out of the phagosome is important.”

    Manganese may not be the whole story, however. Gros suspects that Nramp1 may also transport iron, which, like manganese, is essential for microbial growth. Indeed, several bacteria, including Mycobacterium tuberculosis, carry genes that resemble Nramp1, and experts propose that the two cousin molecules might even fight it out for essential cations in the phagosome.

    In addition to identifying Nramp1's natural substrate or substrates, researchers now want to determine whether Nramp1 plays a role in nonspecific immunity in humans, as it does in mice. Some results hint that it might, as several studies on populations in Africa and Vietnam have shown an association between the region of the genome that carries the Nramp1 gene and susceptibility to leprosy and tuberculosis.

    If Nramp1 does in fact contribute to host resistance to the human diseases, it might be possible to design drugs that combat the infections by helping Nramp1 keep metal ions out of the phagosome. After all, even good fighters can use better weapons.

    Wringing Nutrition From Rocks

    Microbes can squeeze just about anything from a stone—an ability that helps us as well as them. At the meeting, Jill Banfield, a mineralogist at the University of Wisconsin, Madison, presented evidence that microbes live on phosphate-containing crystals on rock surfaces, apparently by dissolving the mineral to obtain phosphate. The work, which reflects a growing interplay of geology and microbiology, might help explain how the tiny rock inhabitants get their phosphate, an essential nutrient for life. It could also lead to new insights about how soil fertility is established and maintained.

    In the pits.

    This micrograph shows fungi growing on aluminum phosphate in a rock fragment from soil.

    CREDIT: J. BANFIELD AND A. TAUNTON

    Because phosphate is locked up in insoluble minerals, its availability often limits the growth of living things in soil. And although scientists know that microbes play a role in releasing it and other essential chemicals, the extent of their involvement has been unclear. “Most geologists have focused on inorganic processes until recently,” says Susan Brantley, a geochemist at Pennsylvania State University, University Park. “There's a huge deficit in our knowledge of [microbial] roles in element cycling.”

    To help fill that gap, Banfield and graduate student Anne Taunton made scanning electron micrographs of rock fragments from samples in the top 2 meters of the soil, where microbes are most plentiful. The researchers found that microbes are not evenly distributed. Instead, they cling to tiny spots of phosphate-containing minerals called lanthanide phosphates and don't seem to colonize other parts of rock fragments.

    Banfield and Taunton also noted that the crystals remain intact at lower depths of soil, where microbes are sparse, but as one approaches the surface, microbe-occupied pits appear in the rock fragments as the phosphate is apparently dissolved away. Eventually the microbes disappear, too, leaving only empty holes. “The hypothesis is that microbes are inhabiting the pits and removing phosphate,” Banfield says.

    Banfield suggests that they dissolve the phosphate by releasing chemicals such as oxalate and carbonate. She and Taunton found that microbes cultured from soil samples release these chemicals, and they showed that adding oxalate and carbonate to lanthanide phosphate in solution increases the phosphate's solubility. “One of the really important take-home messages is the effect of these low-molecular-weight organic molecules that microbes make. They increase the rate of dissolution of these mineral phases,” says Jim Fredrickson, a soil microbiologist at the Pacific Northwest National Laboratory in Richland, Washington.

    Banfield and her colleagues are now trying to identify the organisms present in the soil. But while the extent and mechanism of microbial involvement in geological processes remains unclear, researchers are pleased with the growing interdisciplinary nature of the field. “What I find exciting is that a card-carrying geologist is in charge of this work,” says Kenneth Nealson, an environmental microbiologist at the California Institute of Technology in Pasadena. “The field has never moved at the rate it should move, because it's never been populated by people who have a background in both areas. Now it's picking up.”

  15. A North Atlantic Climate Pacemaker for the Centuries

    1. Richard A. Kerr

    Old trees and supercomputers are revealing a slow, multidecadal climate pulse that beats in the Atlantic Ocean and reaches around the globe

    Wiggles are the bane of climate researchers, confusing records of every sort. They're jumbled one atop another on time scales ranging from year-to-year to eon-to-eon. But they can also be a salvation, providing clues to how a single, repeating climate oscillation may be linked to an equally rhythmic cause. Now researchers are picking through the climatic records of recent centuries to track down and explain a squiggle—the first of its kind to emerge—that they hope may clarify variations in the past century's climate and sharpen our ability to recognize greenhouse warming.

    The climate swings coming into view don't officially have a name yet, but Atlantic Multidecadal Oscillation, or AMO, might do: oscillation because the climate swings one way and then the other, multidecadal because a full swing takes roughly 60 years, and Atlantic because the changes are most evident in and around the North Atlantic. Most recently, thermometers picked up a swing early in the 20th century from abnormally cold to unusually warm and back. Before that, trees around the Atlantic recorded similar swings in the climate-induced variation of tree ring width that go back several hundred years.

    “There's no doubt something is happening in the North Atlantic” during the past 150 years that we've been measuring temperature with instruments, says climatologist Christopher Folland of the Hadley Center for Climate Prediction in Bracknell, United Kingdom. “It's very interesting, very important.” Some researchers feel that the AMO might even shed light on the recent rise in global temperatures. “It is possible the enhanced warming in the North Atlantic recently is a superposition of a natural mode plus an anthropogenic mode,” says statistical climatologist Michael Mann of the University of Virginia in Charlottesville. Some researchers, especially climate modelers, suspect that oscillations in the heat-carrying currents of the North Atlantic are to blame for this natural mode.

    Although the AMO is a new label, what it describes was noticed by climatology's pioneers. Jacob Bjerknes, an originator of the modern concept of El Niño, observed in 1964 that a slow warming of the surface of the North Atlantic in the 1910s and '20s could well have been driven by a surge of warm water up the Gulf Stream. This Atlantic warming accompanied a global warming that by the 1940s had produced the highest global temperatures to that point in the records. It was so warm that statistical techniques used in the 1990s to detect the “fingerprint” of greenhouse warming in climate records also flag the 1940s as a period of greenhouse warming, according to work by Gabriele Hegerl of Texas A&M University in College Station and her colleagues. The problem with that analysis is that no one believes enough greenhouse gases had reached the atmosphere by then to cause much of a human-induced warming. That inconsistency has led greenhouse contrarians to complain that any recent warming could just as well be natural rather than anthropogenic.

    A warm 1940s gave way to a decades-long cooling that set in over the Atlantic as well as the globe. It started talk of the next ice age, or at least the irrelevance of the growing load of greenhouse gases. But Wallace Broecker, a marine geochemist at the Lamont-Doherty Earth Observatory in Palisades, New York, disputed that interpretation and suspected the cooling was just a phase. His 1975 paper in Science pointed out that coring of the Greenland ice cap had retrieved a record of two climate oscillations, of 80 and 180 years. In the 1970s, these natural climate variations would have counteracted greenhouse warming fueled by fossil fuel burning, Broecker reasoned, but not for long. “We may be in for a climatic surprise,” he warned. Indeed, the North Atlantic soon began warming, the global cooling reversed itself, and temperatures set new record highs in the '80s and '90s.

    The existence of as many as two full climatic swings in the last 150 years seems increasingly clear. In a paper soon to be published in Climate Dynamics, climate modeler Thomas Delworth of the National Oceanic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, New Jersey, and Mann find “overwhelming evidence for a significant multidecadal variation in the climate system during the past 100 to 150 years, centered in the North Atlantic.”

    This climate variability with a duration of 50 to 70 years—more or less equivalent to the 80-year oscillation seen in Greenland ice—can have some noticeable effects. Winds blowing over a warmer North Atlantic warm the United Kingdom, the rest of Europe, and northern Asia. Delworth and GFDL modeler Thomas Knutson reported recently (Science, 24 March, pp. 2126, 2225) that, in one out of five runs of a climate model that simulates the AMO, a North Atlantic-centered warming bore a marked resemblance to the warming of the 1920s and '30s in timing, amplitude, and geographical distribution. “I think [the AMO] played a role in the 1920s-30s warming,” says Delworth. On the flip side, meteorologist William Gray of Colorado State University in Fort Collins has linked a colder North Atlantic to the dearth of hurricanes in the '70s and '80s as well as to the drought in the Sahel of northern Africa. He even sees the temperature of the North Atlantic influencing the frequency of El Niños.

    Although the North Atlantic may be switching between warm and cold, one or two cycles does not an oscillation make. Meteorological standards call for a half-dozen or more. To go back before the widespread use of thermometers, climatologists have turned to so-called proxy records—the width of a tree ring can reflect the temperature during a growing season; a layer of snow-turned-ice in the middle of the Greenland glacier may record temperature in its oxygen isotopic composition; and a coral growing layer by layer responds to temperature as well. In the past several years, Mann and his colleagues have combined a number of such proxy records into a single record that shows temperature variations around the North Atlantic of several tenths of a degree, with a roughly 70-year oscillation. For comparison, the total global warming from 1860 to the present has been about 0.6°C.

    Despite the considerable uncertainties inherent in proxy records, “the pattern is significant enough to be clearly detectable,” says Mann. “I think there is something going on that is important,” agrees modeler John Marshall of the Massachusetts Institute of Technology. And climatologist Yochanan Kushnir of Lamont-Doherty says that, despite all the reservations, climate is oscillating at time scales of a half-century and beyond in a way distinctly different from El Niño or decadal oscillations.

    Just what is happening, however, is a matter of ongoing discussion. The proposed AMO has some relation to the North Atlantic Oscillation (NAO) (Science, 7 February 1997, p. 754), in which a swinging seesaw of atmospheric pressure with “seats” over Iceland and Lisbon skews climate over and downwind of the North Atlantic. Some meteorologists lump the NAO into a hemisphere-girdling phenomenon called the Arctic Oscillation (Science, 9 April 1999, p. 241), but it is still a flibbertigibbet of an oscillation, fluctuating month to month and year to year with little sign of a preference for an oscillation period of 50 years or longer. Such long cycles, most researchers assume, must be paced by the ocean, where massive reservoirs of heat and ponderously slow currents might provide the required slowly ticking clock.

    To sort out the ocean's role in multidecadal climate change, researchers turn to climate models. In their forthcoming Climate Dynamics paper, Delworth and Mann compare the behavior of a GFDL climate model and the real world as recorded instrumentally and in a 330-year proxy record. “We see a multidecadal mode of variability in the GFDL model,” Mann says, “that looks quite like the pattern of multidecadal variability in the proxy record.” Both involve Atlantic-wide temperature oscillations rather than the geographically more complex variations of the NAO. The proxies give a period of about 70 years, while the model suggests 50 to 60 years. The difference is negligible, says Delworth, given the approximations used to create any model.

    Whereas long-term climate records are limited to Earth's surface, sophisticated climate models use basic physics to build an ocean interior that can be probed for signs of what makes an oscillation tick. In the case of the GFDL model, the AMO seen at the surface reflects a “clock” within the ocean that's “wound” by the atmosphere's NAO. The NAO's seesawing atmospheric pressure alternately cranks up and weakens the cold winds that blow out of the west across the Labrador Sea. The harder they blow, the more heat they extract from surface waters, the denser those waters become, and the easier it becomes for them to sink into the deep sea, drawing more warm water from the south through the Gulf Stream. Thus, in the model, the NAO has a hand on the control valve of the North Atlantic's so-called thermohaline circulation (THC), in which warm water flows north, cools, sinks, and heads back south through the deep sea.

    The model's NAO may be able to turn the THC valve, but it is with a most unsteady hand. The NAO oscillates week to week as much as it does year to year or decade to decade, and does so unpredictably. But then, the real and model oceans pay no mind to most of the NAO's jittering. Being slow to change, the model's North Atlantic prefers to respond only to the NAO's longest, multidecadal swings, says Delworth, and then at a pace set by its own ponderous internal works. In particular, the added warmth delivered by an accelerated THC eventually slows other currents that carry particularly salty, and therefore denser, water northward into the regions where sinking occurs. With less salt to encourage sinking, the THC slows, heat transport slows, and the North Atlantic cools. Eventually, cooling will progress far enough to reverse the oscillation by encouraging more salt transport that will enhance sinking and the THC. The THC's inherently sluggish response to the atmosphere's urgings sets the multidecadal pace of the model's AMO.
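
    A toy model—an assumed caricature of delayed negative feedback, not anything drawn from the GFDL simulation—shows how a sluggish ocean can impose a multidecadal beat on noisy forcing. Let T be the North Atlantic temperature anomaly, ξ the atmospheric noise, and τ the lag before the salt-transport feedback bites:

    \[ \frac{dT}{dt} = -\alpha\, T(t - \tau) + \xi(t) \]

    Delayed feedback of this kind begins to oscillate once ατ exceeds π/2, and it does so with a period of about 4τ. If salt anomalies take on the order of 15 years to reach the sinking regions, the preferred period lands near 60 years, whatever the jitter in ξ.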

    “The atmosphere is noisy, and the noise drives the ocean beneath it,” says modeler Andrew Weaver of the University of Victoria in British Columbia, but only at the more regular pace favored by the ocean. He has also recently found noise acting as a driver in a model, run with Marika Holland of the National Center for Atmospheric Research in Boulder, Colorado, to look at the effect of Arctic sea ice coming into the North Atlantic. “Our work is very similar” to Delworth and Mann's, Weaver says, in that the stream of sea ice out of the north—a major source of fresh, less dense water to the North Atlantic—responds to random variations in the overlying winds. Increased winds in the right direction drive more fresh water into the THC and slow it.

    Although some model oceans may be taking their multidecadal cue from the random jostlings of the atmosphere, other models interact with the atmosphere in more of a give-and-take process. Running a global climate model much like Delworth's GFDL model, Axel Timmermann of the Royal Dutch Meteorological Institute in De Bilt and Mojib Latif of the Max Planck Institute for Meteorology in Hamburg found a two-way interaction between ocean and atmosphere that gives rise to a 35-year oscillation centered on the North Atlantic. In their model, surface waters are warmed by an unusually strong THC. The warmth changes salinity not by altering currents but by strengthening the NAO. While this reinforces the warmth, it also gradually reduces the evaporation that keeps surface waters salty, so salinity drops. Eventually, the declining salinity slows the THC and cools the North Atlantic, which in turn eventually restores higher salinity and accelerates the THC, completing one oscillation. The model even produces an “atmospheric bridge” from the North Atlantic to the North Pacific that entrains the North Pacific in the THC-related 35-year oscillation.
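
    Stripped to its essentials, the give-and-take Timmermann and Latif describe is a two-variable feedback loop, caricatured below as a pair of coupled linear equations. This is a sketch only; the coefficients are invented to yield a period of roughly 35 years and are not taken from their model.

```python
import numpy as np

a = 0.18     # per year: rate at which warmth (via the NAO) erodes salinity
b = 0.18     # per year: rate at which salt anomalies spin the THC up or down
damp = 0.01  # weak damping, so the cycle winds down only slowly

dt = 0.1
n = int(200 / dt)            # simulate 200 years
T = np.zeros(n)              # North Atlantic surface warmth
S = np.zeros(n)              # northward salt transport anomaly
T[0] = 1.0                   # start from a warm North Atlantic

for i in range(1, n):
    dT = b * S[i - 1] - damp * T[i - 1]   # salty -> strong THC -> warm
    dS = -a * T[i - 1] - damp * S[i - 1]  # warm -> less evaporation -> fresher
    T[i] = T[i - 1] + dt * dT
    S[i] = S[i - 1] + dt * dS

# The loop is a harmonic oscillator with period 2*pi/sqrt(a*b), about 35 years.
ups = np.where((T[:-1] < 0) & (T[1:] >= 0))[0]
if len(ups) >= 2:
    print(f"oscillation period: {np.diff(ups).mean() * dt:.1f} years")
```

    Unlike the noise-driven case, this caricature oscillates on its own; the atmosphere is part of the clockwork rather than just the winder.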

    Whatever the role of the atmosphere, the temperature oscillations of the 20th century and the model results have engendered “a strong suspicion that the thermohaline circulation is to blame,” says Folland of the Hadley Center. The modeling “is a good step up,” adds Timmermann, “but the models must be more mature to say that the thermohaline circulation is involved.” A number of models are producing a multidecadal oscillation—within the general range of 35 to 70 years—but obvious differences from model to model, such as the role of the atmosphere, give pause, says Timmermann.

    Still, “we believe more firmly than before that this is real,” says Mann of the AMO. “The evidence for this sort of 50- to 70-year oscillation is accumulating in the instrumental observations, proxy climate records, and the climate models.” If that is correct, the pace of warming could pick up in the next few decades as a naturally warming North Atlantic combines with a stronger greenhouse warming effect. But it may take a lot more old trees and supercomputing time to calculate how much greenhouse warming will remain the next time the Atlantic's multidecadal oscillation swings to the cool side.

  16. The Sun Again Intrudes on Earth's Decadal Climate Change

    1. Richard A. Kerr

    Most climatologists have learned to be skeptical about apparent links between the sun's variability and Earth's climate. Again and again, researchers have uncovered plausible correlations, but the evidence has usually crumbled under closer scrutiny. And nobody has come up with a convincing mechanism to explain how tiny changes on the sun might alter climate on Earth. But suspicious associations between sun and climate keep cropping up. Now, two such correlations—a 22-year climate cycle recorded in glacial sediments and the tracing of an 11-year cycle from the stratosphere into the lower atmosphere—may be robust enough to give the sun-climate link a touch more respectability.

    “There's more and more evidence of something pretty distinct at this time scale,” says statistical climatologist Michael Mann of the University of Virginia in Charlottesville, a discoverer of the ice age cycle. “I've always been a skeptic” of sun-climate connections, he says, “and I remain a little skeptical, but we can't dismiss it as a statistical anomaly.” Paleoceanographer Theodore Moore of the University of Michigan, Ann Arbor, confirmed the 22-year climate cycle in the same glacial record, and he too remains on the skeptical side. “It's hard for me to say where [that cycle] is coming from,” he says, but “it's a fertile area for research.”

    Geologists Tammy M. Rittenour and Julie Brigham-Grette of the University of Massachusetts, Amherst, and Mann found their 22-year cycle, along with shorter cycles, buried in the layered bottom sediments of now-vanished New England lakes. About 15,000 years ago, sediment-laden waters poured into the lakes from melting glaciers. The warmer a summer's weather, the more ice melted and the more sediment washed into the lakes, so the thickness of each annual layer of sediment reflects the temperature during that melt season. Rittenour and her colleagues analyzed 4000 years' worth of layer thicknesses and found statistically significant periodicities falling between cycle lengths of 3 and 5 years. In their 12 May Science paper (p. 1039), they attributed those climate fluctuations to the long-range influence of an ancient El Niño. Superimposed on these fluctuations, the researchers identified a 22-year cycle of varying layer thickness.
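
    The kind of signal extraction involved can be illustrated with a generic periodogram. This is standard Fourier analysis, not necessarily the authors' exact method, and the synthetic “varve” series below, with invented amplitudes, merely stands in for the real layer-thickness record.

```python
import numpy as np

rng = np.random.default_rng(1)
years = 4000
t = np.arange(years)
thickness = (1.0                                    # mean layer thickness
             + 0.30 * np.sin(2 * np.pi * t / 4.2)   # ENSO-band wobble
             + 0.15 * np.sin(2 * np.pi * t / 22.2)  # weak 22-year cycle
             + 0.50 * rng.standard_normal(years))   # everything else

freqs = np.fft.rfftfreq(years, d=1.0)               # cycles per year
power = np.abs(np.fft.rfft(thickness - thickness.mean()))**2

# Report the strongest spectral peaks; spectral leakage can split one line
# across adjacent bins, so keep local maxima only.
peaks = [k for k in np.argsort(power)[::-1]
         if 0 < k < len(power) - 1
         and power[k] >= power[k - 1] and power[k] >= power[k + 1]][:3]
for k in sorted(peaks):
    print(f"period {1 / freqs[k]:6.1f} yr   power {power[k]:.0f}")
```

    With 4000 annual layers to work with, even a modest 22-year wobble stands well above the noise floor alongside the stronger 3- to 5-year band.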

    The 22-year oscillation (actually 22.2 ± 0.2 years), which the researchers described in the same paper, attracted little attention, as it was relegated to a few lines of text and a figure label. But 22 is a significant number to scientists looking for sun-climate links. It's twice the 11-year period at which sunspot abundance and, much less dramatically, solar brightness vary, and it equals the length of the cycle in which the sun flips its magnetic poles back and forth. When an 11- or 22-year periodicity shows up in climate records, suspicion falls on the sun, although it's never been clear exactly how a feeble change in solar brightness or the flipping of the sun's magnetic field would trigger measurable climate change.

    “It was a little awkward when we found” the 22-year period, says Mann. “We weren't looking for it. Our immediate guess was that this is from chance sampling variations, that it's a fluke, but we tested its robustness. The thing just holds up. It's a real feature. It's not a dominant signal, but it's always there.” Moore doesn't doubt it's there, but he would be very hesitant to say that sunspots are behind any 11- or 22-year climatic periodicity. “I'm cured of that,” he says. “We should be very open-minded about what that [periodicity] means.” Another, perhaps more palatable, possibility, he says, is an oscillation inherent in the oceans that could swing climate to and fro, much as one ocean oscillation appears to do on longer, multidecadal time scales of 40 to 80 years (see main text).
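
    One standard way to test whether such a peak “just holds up” against chance sampling is a Monte Carlo comparison with red noise. The sketch below is hedged: Mann's group may well have used a different test, and the “data” here are synthetic. Surrogate AR(1) series fitted to the record's year-to-year persistence show how often noise alone would match the observed 22-year power.

```python
import numpy as np

def power_near(x, period, halfwidth=2.0):
    """Summed periodogram power within +/- halfwidth years of `period`."""
    freqs = np.fft.rfftfreq(len(x), d=1.0)
    spec = np.abs(np.fft.rfft(x - x.mean()))**2
    band = (freqs >= 1 / (period + halfwidth)) & (freqs <= 1 / (period - halfwidth))
    return spec[band].sum()

rng = np.random.default_rng(2)
n = 4000
phi = 0.4                       # assumed year-to-year persistence of the record
data = np.empty(n)
data[0] = 0.0
for i in range(1, n):           # red-noise background...
    data[i] = phi * data[i - 1] + rng.standard_normal()
data += 0.25 * np.sin(2 * np.pi * np.arange(n) / 22.2)  # ...plus a weak cycle

obs = power_near(data, 22.2)

# Fit an AR(1) to the record, then count how often surrogates with the same
# persistence and variance, but no 22-year cycle, beat the observed power.
r1 = np.corrcoef(data[:-1], data[1:])[0, 1]
sd = data.std() * np.sqrt(1 - r1**2)
beats, trials = 0, 200
for _ in range(trials):
    s = np.empty(n)
    s[0] = 0.0
    for i in range(1, n):
        s[i] = r1 * s[i - 1] + sd * rng.standard_normal()
    beats += power_near(s, 22.2) >= obs
print(f"fraction of red-noise surrogates beating the data: {beats / trials:.3f}")
```

    A small fraction means the peak is unlikely to be a fluke of persistence alone; it says nothing, of course, about whether the sun is the cause.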

    The link between sun and climate would be strengthened if, rather than just pointing out solarlike climate periodicities, researchers could demonstrate that their 11- or 22-year climate variations were in step with the variations on the sun. Last year, climate modeler Drew Shindell of NASA's Goddard Institute for Space Studies in New York City and his colleagues showed how, in their model at least, feeble variations of solar output over the 11-year sunspot cycle could gain leverage in the stratosphere and even propagate temperature changes down to the surface (Science, 9 April 1999, pp. 234 and 305). Meteorologists Karin Labitzke of the Free University of Berlin and Harry van Loon of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, had shown that something was letting stratospheric temperatures vary in time with the sunspot cycle, but they hadn't been able to trace solar cycle effects into the underlying troposphere, where weather and climate reside (Science, 4 August 1995, p. 633).

    Now, van Loon and Dennis Shea of NCAR have crunched the latest temperature data, which extend back to 1958, and found an 11-year variation of several tenths of a degree in the Northern Hemisphere troposphere that was in step with sunspots over four solar cycles. The effect decreases toward the surface when it is averaged around a latitude band spanning the hemisphere, van Loon notes, but he has yet to comb the data for possible effects varying from place to place along latitude bands. The study of sun-climate effects “is still a topic that's much alive,” he says.
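
    At its simplest, checking that a temperature record varies “in step” with the sunspot cycle amounts to correlating it against a solar index. The sketch below uses an idealized sine-wave index and invented amplitudes; it is not van Loon and Shea's analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1958, 2001)                      # roughly four solar cycles
solar = np.sin(2 * np.pi * (years - 1958) / 11.0)  # idealized sunspot index

# Fake hemispheric temperature anomalies: a few tenths of a degree of
# solar-synchronized signal buried in weather noise (amplitudes assumed).
temp = 0.2 * solar + 0.3 * rng.standard_normal(len(years))

r = np.corrcoef(solar, temp)[0, 1]
# Rough two-sigma chance level for ~43 independent annual values.
print(f"correlation r = {r:.2f}  (chance level ~ {2 / np.sqrt(len(years)):.2f})")
```

    With only four cycles of data, the chance level is sobering, which is one reason sun-climate claims have crumbled so often in the past.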
