News this Week

Science  23 Aug 2002:
Vol. 297, Issue 5585, pp. 1252


    Gene Mutation May Boost Risk of Heart Arrhythmias

    1. Jean Marx

    The rare event often sheds light on the commonplace, and human disease is no exception. Over the past few years, researchers have identified several mutant genes that cause rare and potentially fatal heart arrhythmias. Those discoveries have helped clarify both normal heart cell function and how it can be disrupted, with life-threatening consequences. Now, Mark Keating of Children's Hospital and Harvard Medical School in Boston and his colleagues, including Igor Splawski, also at Children's, have taken a new step. On page 1333, they offer tantalizing evidence that a different variation in one of those genes, dubbed SCN5A, might increase the risk of heart rhythm disturbances in members of the population at large, not just in those few people with the hereditary arrhythmias.

    The variant gene, found primarily in persons of African descent, isn't likely to cause problems on its own. “I'm not saying that people who carry this variant should lose sleep over it,” Keating says. But numerous medications, including some antihistamines and drugs used to treat high blood pressure, have also been linked to an increased risk of cardiac arrhythmias, and persons carrying the variant might be more likely to fall victim to that untoward drug side effect.

    Indeed, people vary widely in their responses to both the benefits and harmful side effects of therapeutic drugs, and researchers think that genetic variations such as the one in SCN5A might be at the root of those differences. The vast majority of those variations have yet to be identified, however. That's why the Keating group's discovery is “important and significant,” says arrhythmia researcher Arthur Moss of the University of Rochester Medical Center in New York state. It contributes to the fledgling science of pharmacogenetics, which seeks to understand how genes affect drug responses. Eventually, researchers hope that, by screening patients for such genetic variations, they will be able to select or even design more effective and less dangerous drug therapies for them.

    A subtle difference.

    The SCN5A protein winds through the membrane of heart muscle cells, forming a channel that opens to let sodium ions flow into the cells. A single amino acid change (red dot) in this large protein may make people more susceptible to heart arrhythmias.


    Keating's interest in SCN5A dates back to 1995, when his group and that of Paul Bennett and Alfred George at Vanderbilt University Medical Center showed that mutations in the SCN5A gene cause “long QT syndrome,” a rare hereditary heart rhythm disturbance that was so named because of a characteristic change in patients' electrocardiograms. Keating then set out to find whether variations in SCN5A might also influence the risk of arrhythmias in the general population.

    He and his colleagues examined the gene in people hospitalized with heart arrhythmias. Eventually they found a particular change—a single-base substitution that changes just one amino acid in the SCN5A protein—in an African-American woman. Her heart problems could not be linked to any of the mutations already known to cause cardiac arrhythmias, raising the possibility that this SCN5A variation might somehow contribute to the irregularity.

    Subsequent screening of the general population showed that the variant is widespread among blacks. The researchers found it in about 19% of 468 West Africans and Caribbeans and in 13% of 205 African Americans. In contrast, the researchers did not find it in 511 Caucasians or 578 Asians, and it turned up in only one of the 123 Hispanics they studied. Moreover, Keating and his colleagues found that the variant is far more common in African Americans being treated for arrhythmias than in healthy controls, suggesting that it does increase risk.

    To explore how the SCN5A variant might increase susceptibility to arrhythmias, the Keating team, with that of Robert Kass of Columbia University's College of Physicians and Surgeons, introduced the gene into human cells. Research had already established that the SCN5A protein forms a sodium channel that opens, when appropriately stimulated, to let sodium ions flow into heart muscle cells, thus providing the trigger for the cells to contract. The team found that a small percentage of the variant channels reopen at a time when they are supposed to be closed, a change that could make the heart more prone to developing arrhythmias.

    Keating notes, however, that the differences weren't large and that the variant alone isn't likely to cause problems for the individuals carrying it. But, he suggests, the variant might increase an individual's susceptibility to the arrhythmias triggered directly or indirectly by certain medications, including the diuretics used to lower high blood pressure, a condition that tends to be more common in blacks than in whites. If that theory is confirmed, Keating says, then it would be relatively easy to devise a test to identify the carriers, who could then avoid taking risky medications.

    Keating cautions, however, that “this paper shouldn't be interpreted as showing that African Americans are at increased risk of arrhythmias.” He just happened to find the variant in that population. Other groups might carry different variations, in either SCN5A or one of the other genes that can cause arrhythmias, that could affect their risk.


    Comet Craft in Pieces, Astronomers Fear

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    With the apparent destruction of the $159 million Contour spacecraft, scientists may have lost their best shot at understanding how comets evolve. The NASA payload is thought to have broken into at least two pieces last week while accelerating to leave Earth orbit.

    “We aren't sure that the spacecraft is completely gone,” says Contour mission director Robert Farquhar of the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland, “but this news is not very encouraging.”

    Contour (short for Comet Nucleus Tour) was launched into Earth orbit 3 July (Science, 5 July, p. 44). On 15 August, ground controllers ordered the spacecraft to fire its onboard solid-propellant rocket motor. The rocket burn should have put Contour on a trajectory to comets Encke and Schwassmann-Wachmann 3, one very old and one relatively new. Astronomers had hoped that high-resolution close-ups of the nuclei of the two comets would shed light on the diversity of these small frozen remnants of the solar system's birth.

    The rocket maneuver took place 225 kilometers above the Indian Ocean, out of sight of NASA's Deep Space Network antennas. But some 45 minutes later, when Contour's signal should have been picked up again, the screens at mission control stayed black. At first, mission operators were optimistic that radio contact with the spacecraft would be restored. When radar and optical telescopes showed no sign of Contour in Earth orbit, scientists assumed that the rocket burn had occurred on Thursday as expected. “We had a lot of hope,” Farquhar says.


    Circled blips (bottom) might be fragments of NASA comet probe.


    But on Friday, astronomers at the University of Arizona in Tucson detected two objects close to Contour's predicted path. Using the 1.8-meter Spacewatch telescope at Kitt Peak, which normally hunts for near-Earth asteroids, they spotted the faint objects some 460,000 kilometers from Earth. The objects were 460 kilometers apart, suggesting that the two pieces are moving away from each other at a relative velocity of just over 20 kilometers per hour. That information might help investigators gauge the force of the explosion that blew them apart, information that could narrow down what went wrong.
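    The quoted separation speed follows from simple arithmetic: distance between the fragments divided by the time since they parted. As a back-of-the-envelope check (the roughly 22-hour gap between Thursday's burn and Friday's sighting is an assumption for illustration, not a figure given in the article):

```python
# Back-of-the-envelope check of the reported fragment separation speed.
# The 460 km separation is from the article; the ~22-hour interval
# between the rocket burn and the Spacewatch detection is an assumption.
separation_km = 460.0
elapsed_hours = 22.0

speed_km_per_h = separation_km / elapsed_hours
print(round(speed_km_per_h, 1))  # prints 20.9 -- "just over 20 kilometers per hour"
```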

    “The loss of Contour would be a basic setback for the near future of cometary science,” says Gerhard Schwehm of the European Space Research and Technology Centre in Noordwijk, the Netherlands, the project scientist for the upcoming European Rosetta mission to comet Wirtanen. By studying both the old comet Encke and the “fresh” comet Schwassmann-Wachmann 3, whose nucleus split into three parts in 1995, Contour could have shed light on the evolution of these icy objects—something other comet probes are not expected to do. But an even bigger impact, Schwehm says, might be a loss of confidence in the “faster, cheaper, better” approach of NASA's Discovery program, of which Contour is the sixth mission. Earlier Discovery missions also encountered technical mishaps, although none this serious.

    The cause of the breakup might never be learned unless mission operators improbably succeed in reestablishing radio contact with Contour. The rocket burn maneuver “was not considered to be very risky,” Farquhar says. “I was more worried about the launch.” As for taking another shot at the comets, Farquhar says that any proposed replacement would have to rejoin the queue: “By then, it's unclear how this mission would fit in the general scheme” of cometary science.


    Medals Honor Work on Linkages and Proof

    1. Charles Seife

    Two bridge builders have won mathematics' highest honor. The 2002 Fields Medals—presented this week at the opening ceremonies of the International Congress of Mathematicians in Beijing—went to Laurent Lafforgue of the Institute for Advanced Scientific Study in Bures-sur-Yvette, France, and to Vladimir Voevodsky of the Institute for Advanced Study in Princeton, New Jersey. Madhu Sudan, an information theorist at the Massachusetts Institute of Technology (MIT), received the Rolf Nevanlinna Prize, an analogous award for work in computer science.

    The Fields and Rolf Nevanlinna awards are presented together every 4 years at the congress, traditionally to researchers under 40 years of age. Both Lafforgue and Voevodsky worked on questions that linked seemingly dissimilar subdisciplines of mathematics.

    Lafforgue, age 35, was honored for his work on the Langlands Program, an ambitious mathematical quest begun in 1967 by Robert Langlands, then a young professor at Princeton University. Langlands conjectured that two different-looking mathematical beasts—automorphic forms and Galois representations—were intimately connected. Broadly speaking, automorphic forms are mathematical objects that can be distorted in certain ways and still retain their original properties. Galois representations, on the other hand, reveal the relations between solutions of equations.

    These two abstract ideas live in very different sections of the mathematical zoo, yet both are related to some of the deepest problems in mathematics. When Princeton mathematician Andrew Wiles proved Fermat's Last Theorem in 1994, for example, he relied on a proof that the Langlands Conjecture was true under very specific conditions. “A very strange thing about the Langlands Program is that it is so beautiful, that it is so seductive, that what it sets forth is so simple, and that it puts together so many different phenomena,” says Lafforgue. “Since the formulation by Langlands, everybody is absolutely convinced that the set of conjectures is true.”


    Fields medalists Voevodsky (top) and Lafforgue took laurels for finding hidden connections in mathematics; Nevanlinna winner Sudan (bottom), for wedding probability to proof.


    In 1999, Lafforgue won mathematical acclaim by proving the Langlands Conjecture for a very broad class of objects known as function fields (Science, 4 February 2000, p. 792). “I knew that it was an important result. For the following 2 years, many mathematicians around me thought that it would receive the Fields, but I preferred not to think about that,” says Lafforgue. “Today, of course, I feel deeply honored and happy to obtain so much recognition for my work.”

    The other new Fields medalist, 36-year-old Vladimir Voevodsky, toiled at the intersection of two other mathematical subjects: topology, which studies shapes in space, and algebra, which studies the symmetries and relations of abstract mathematical operations. Mathematicians have had remarkable success in trying to teach the two fields to speak the same language, but some areas still can't communicate, even though they seem to have similar structures. In 1970, mathematician John Milnor of the State University of New York, Stony Brook, conjectured that two such uncommunicative realms—ways of describing properties of different kinds of surfaces known as Galois cohomology and K-theory—were in fact related. The Milnor Conjecture remained the biggest problem in that area of mathematics until 1996, when Voevodsky created new mathematical tools that enabled him to solve the conjecture.

    The Rolf Nevanlinna Prize honors MIT's Madhu Sudan, age 35, for his work on the very concept of mathematical proof. A proof is a series of logical statements, each linked to the next according to strict rules of inference. If the statements are correct and the links obey the rules, the proof is valid; otherwise it is flawed.

    Sudan added shades of gray to this black-and-white dichotomy by showing that, in theory, a mathematician could figure out the probability that a new proof is correct. He knew that, in a sense, valid proofs are points floating in an abstract space that describes all logical statements. “Distance” makes sense in the abstract space of logical statements, just as it does in familiar geometric space.

    Sudan was among the first to realize that the concept of logical distance could be used to measure how far from truth a putative proof might be. “You can show whether any proof is completely correct, [whether it] can be formulated into one that is completely correct, or whether it is so distant from correctness that it is unfixable,” Sudan says. And although the idea isn't going to lead to automatic proof-checkers, it has helped Sudan make inroads against the most important question in computer science, the P = NP problem (Science, 26 May 2000, p. 1328).


    Cool Cats Lose Out in the Mane Event

    1. Jay Withgott*
    1. Jay Withgott is a science writer based in San Francisco.

    The image of a roaring male African lion with full flowing mane is for many people the very icon of wild nature. But precisely why lions have manes has never been nailed down. On page 1339, researchers provide evidence for the often-cited assumption that the mane is a signal advertising the animal's condition, which females use to choose mates and males use to assess rivals.

    In an apparent evolutionary tradeoff, however, manes also impose a cost on males by increasing their heat load. “This is one of the few cases of a sexually selected trait where a physiological cost has been demonstrated,” says evolutionary biologist John Endler of the University of California, Santa Barbara.

    Animal behaviorist Craig Packer of the University of Minnesota, Twin Cities, has studied lions at Serengeti National Park in Tanzania for 24 years, continuing the program that pioneering zoologist George Schaller began in the 1960s. Over the years, his group has built up a huge database on individual animals.

    When Peyton West arrived as a graduate student to work with Packer, she used this information to ask why lions have manes. Are they akin to a peacock's tail: a useless ornament favored by sexual selection? Or does the mane protect the neck and head during fights, another frequent speculation? “There's been this archaic notion of protection and also this inchoate sense that it's probably sexual selection,” says Packer.

    Hello, handsome.

    Female lions prefer males with dark manes, but such decoration comes at a cost.


    West and Packer matched photographs of individual lions to records of their age and condition since 1964. Lions with longer and darker manes, they found, were more mature, had higher testosterone levels, lacked injuries, and were better nourished.

    To determine experimentally whether manes advertised health and status, West and Packer set out life-sized models of male lions with manes of different colors and lengths and observed nearby lions' behavior. Males avoided models with dark, long manes, preferring the company of light- and short-maned cats—apparently to sidestep conflict with macho alpha males, the researchers reason. In a second experiment, West and Packer broadcast female lion calls and found that dark-maned males led the way toward the new females. Evidently, dark manes tell other males to back off.

    Females reacted to the experimental models differently. They sidled up to the dark-maned males, confirming a preference detected in observational data. The females' preference makes sense, the researchers point out, if dark manes reflect maturity and good physical condition. A male in top condition can better defend its females and cubs from attack by other teams of males.

    But great manes come at a cost: Imagine wrapping a long woolen scarf around your neck on a hot summer day—or, more accurately, five or six scarves. Using an infrared camera, the researchers could plainly see that manes, especially long and dark ones, were a major source of heat, a liability on the hot African savanna. Indeed, the long-term data revealed that manes grew shorter and lighter colored with warmer seasonal temperatures. This pattern holds across large geographic regions, too; lion subspecies in cooler climes such as Morocco and South Africa's Cape region have extensive manes, whereas many in Kenya's scorching-hot Tsavo National Park lack manes altogether. All this suggests that manes impose a significant physiological cost on the animal.

    Thus, say the researchers, any male that can put up with this cost and still look good must be a real stud. Sexual selection theory would term the mane an honest indicator of good quality.

    Behavioral ecologist Göran Spong of Uppsala University in Sweden, for one, is not convinced, asserting that the study still doesn't resolve whether the mane is a “badge” for signaling or a “shield” for protection in fights. While granting that the work is “impressive,” Spong says that no single line of evidence is conclusive: “Whether [to] believe the weighed sum of all their arguments, I think, is more a matter of taste than objective deduction.”

    But Endler praises the study, particularly for its attention to the costs of temperature. “Most studies only speculate about costs or regard predation risk as a cost,” he notes. “So now we have predation, locomotion, and thermoregulatory costs as known factors balancing advantage to sexually selected traits. It's getting more interesting each time a new factor is explored.”


    Labs Spared as Climate Change Gets Top Billing

    1. Gretchen Vogel

    BERLIN—The record-breaking floods that overwhelmed villages and centuries-old historical monuments in central Europe last week spared most science institutes in the region. They also raised the profile of climate change researchers, who were bombarded with questions about whether the floods were proof of global warming.

    A decision by the newly opened Max Planck Institute of Molecular Cell Biology and Genetics in Dresden to seal its foundation “like a watertight bathtub” kept the low-lying institute dry and fully operational throughout the catastrophe, says director Kai Simons. “We're fine,” Simons says, despite the fact that the Elbe River topped its previous record height, set in 1845, by half a meter. Its riverside neighbor, the university clinics at Technical University Dresden, wasn't so lucky, evacuating patients through waterlogged streets. Upstream in Prague, the city's main science institutes and museums also escaped severe damage.

    Information flow.

    Floods in Dresden and elsewhere triggered a fresh debate over global warming.


    Television images of historic palaces under water and homes washed off their foundations triggered new interest in climatology throughout Germany. The events, coming just a month before national elections, also gave politicians an opportunity to debate the merits of environmental taxes and research on renewable energy sources. “Until now, many people thought that global climate change was happening elsewhere and would not affect our weather patterns here in Europe. This has now dramatically changed,” says climate researcher Mojib Latif, spokesperson for the Max Planck Institute for Meteorology in Hamburg.

    Latif, who fielded 30 calls a day for several days, has argued that extreme weather events will become more common as human-triggered carbon dioxide levels rise and global temperatures increase. Christian Schönwiese of the University of Frankfurt am Main, who also fielded dozens of press calls, is more cautious. The unusual weather behind this flood is not necessarily linked to human-made increases in carbon dioxide levels, he explained. However, he told the Berliner Zeitung that “I can put up with a few misinterpretations” of the details if the floods leave behind greater public awareness about the potential dangers of global warming.


    Environmental Impact Seen as Biggest Risk

    1. Erik Stokstad

    The biggest risk of developing genetically modified (GM) animals is that they might alter the environment, according to a new report from the National Research Council. The NRC panel also questioned the wisdom of having the Food and Drug Administration (FDA) be one of three federal agencies that are regulating the environmental impact of this emerging technology.

    Last year, FDA asked NRC for a list of science-based concerns to consider when reviewing products of GM animals. The report identifies three main categories of potential risk. It places environmental hazards at the top of the list, followed by threats to human health from xenotransplantation (the placement of organs from GM animals into humans) and from the consumption of GM foods. By separating major and minor risks, “we hope we can help this technology be applied as safely as possible,” says John Vandenbergh, a behavioral endocrinologist at North Carolina State University in Raleigh and chair of the NRC committee.

    Wild card?

    Mobile GM animals, such as sterile pink bollworms, might harm ecosystems in unknown ways.


    FDA officials sought advice because they are evaluating several GM animals, including salmon. Some of these animals are intended for the dinner table; FDA is regulating them because it considers the proteins expressed by their foreign genes to be new animal drugs. No transgenic animals have yet been approved for human consumption.

    What most alarmed the committee was the prospect of GM animals entering the environment. “We don't know much about what those animals would do if released,” Vandenbergh says. He points to fleet animals such as fish or insects that might compete with native populations or interbreed easily with wild relatives, introducing new genes.

    The ability of transplanted organs to spread disease to humans is another concern. Pigs carry about 50 retroviruses in their genome, which could become pathogenic and contagious in a human host. The panel also worried that people might accidentally eat animals engineered to produce potentially harmful industrial compounds in their milk, or eat GM products containing a substance that could produce an allergic reaction.

    Although the NRC committee wasn't asked to comment on regulatory policy, it did question FDA's authority to evaluate environmental effects. The Federal Food, Drug, and Cosmetic Act, which covers the “health of man or animal,” is not an environmental law and might not cover impacts on ecosystems, the committee says. The panel worried that FDA does not have relevant in-house expertise and that its mandate might not hold up if outside groups challenge future regulations.

    FDA declined to comment until the report was publicly released, which occurred as Science went to press. But Sanford Miller, a food safety expert at the Center for Food and Nutrition Policy in Alexandria, Virginia, predicts that the report “is going to get the FDA thinking much harder about what priorities they're going to put their money into—or realize they can't do everything.”


    DOE Cites Competition in Killing PubSCIENCE

    1. Charles Seife

    A free 3-year-old government information service and Web site for the physical sciences has lost out to commercial publishers in a battle for eyeballs. On 7 August the Department of Energy (DOE) announced that it was pulling the plug on PubSCIENCE, which provided access to bibliographic records in the physical sciences, because it overlapped with similar projects by private publishers.

    DOE created PubSCIENCE in 1999 as part of an effort to disseminate and improve access to scientific information (Science, 6 August 1999, p. 811). But Walter Warnick, director of DOE's Office of Scientific and Technical Information, says it quickly became superfluous. “We think that portion of our mission is adequately filled by Infotrieve and Scirus,” two privately run, free-to-search databases owned by the Los Angeles-based Infotrieve corporation and Amsterdam-based Elsevier Science.

    Pulling the Pub.

    A federal Web site for the physical sciences is about to go dark.

    PubSCIENCE was modeled after PubMed, the National Institutes of Health's popular online collection of journal citations and abstracts. Although publishers such as Elsevier Science, the American Physical Society, and the American Association for the Advancement of Science (publisher of Science) cooperated with DOE, PubSCIENCE never gathered the momentum that its medical counterpart did. “I think one of the advantages PubMed always had over PubSCIENCE was that it was always very comprehensive in the disciplines that it covers,” says Warnick. “PubSCIENCE was never as comprehensive.”

    The diversity of the physical sciences also made it hard for PubSCIENCE to win a toehold in a very competitive market. “With the whole dot-com industry, it's not so easy to establish something that attracts a lot of traffic,” says Frank Vrancken Peeters, the managing director for Elsevier's ScienceDirect service, which includes Scirus.

    Peeters speculates that the difference between physicists and medical researchers might have contributed to PubSCIENCE's demise. “The physical sciences are much more fragmented, more niche,” he says. Monica Bradford, Science's executive editor, says that PubSCIENCE “wasn't around long enough to establish itself.” Although she says she's “personally disappointed” that it failed, “I don't think it will be missed.”


    U.S. Visa Crackdown Disrupts Meetings

    1. Richard Stone

    The U.S. State Department has begun performing extra security checks on visa applications from scientists and technologists from around the world, delaying visa decisions by weeks. The delays have led to the cancellation or rescheduling of several recent meetings by U.S. organizations. Most heavily affected so far have been scientists from the former Soviet Union (FSU) and China who are involved in research on weapons or other areas deemed sensitive to national security.

    The visa crackdown is a late response to last fall's terrorist attacks, a State Department official told Science. Organizations first began noticing the delays in late spring, but it wasn't until last month that most U.S. nonproliferation experts learned that any foreign scientist could be subjected to such checks. The delays stem from more frequent interviews with visa applicants in U.S. embassies in their home countries as well as new measures, such as stiff vetting by the FBI and other intelligence agencies. “This is to ensure that the wrong people aren't slipping through the cracks,” says a State Department official.

    Ironically, the new security measures could have a negative impact, some say, by making it more difficult to interact with foreign scientists whose skills the U.S. government hopes to divert from weapons programs to civilian activities. “A big part of our engagement program is getting these scientists over here to the West” to work on nonproliferation programs, says the State Department official. To maintain regular contact with such researchers, he says, “we'll have to go to them more often than having them come to us.”

    The changes have dealt a sharp blow to plans by the U.S. Civilian Research and Development Foundation (CRDF) in Arlington, Virginia, to bring together select groups from the FSU and the United States to discuss how to protect civilian populations from terrorist acts. One recent workshop to discuss detection of toxins and pathogens using bioluminescent alarm signals was pushed back from the beginning to the end of August after five FSU scientists failed to obtain U.S. visas. With no resolution in sight and despite the fact that “State and Embassy Moscow are doing everything they can to help,” says CRDF senior vice president Charles T. Owens, “we may have to resort to holding the symposium by video link.”

    So far, the consequences have been relatively minor: canceled plane tickets, lost hotel reservation fees, and the like. But some observers fear that could change for the worse if the proposed U.S. Department of Homeland Security assumes responsibility for screening visa applicants. “We don't expect an upsurge in denials,” says the State Department official, but visa processing time could be lengthened from weeks to months.

    Routine scientific exchanges appear to be less affected by the additional scrutiny. The U.S. National Science Foundation (NSF), for example, reports that none of the collaborations involving foreign scientists that it supports appears to have been affected. However, one NSF official notes that the Chinese Embassy has cited changes in visa processing practices as a reason for delays in the movement of Chinese scholars to the United States. A Department of Energy (DOE) official says that the agency was told a few weeks ago that the State Department's consular division is “tightening up” further on Chinese scientists—“although restrictions were already pretty tight,” he says.

    For CRDF, the first headache came in early July, when two key Russian scientists failed to obtain visas in time for a workshop on portable ion-trap mass spectrometers for the detection of chemical and biological warfare agents. “This workshop gave us an initial clue that there was a problem,” says CRDF president Gerson Sher. In the weeks since, five more of 11 planned antiterrorism workshops, including meetings on anthrax assays and detection of explosives in baggage, have been disrupted—at a “not inconsequential” cost of $35,000 to CRDF, says Owens. DOE's National Nuclear Security Administration has had to postpone several meetings, too. “We just have to plan earlier,” says Barry Gale, director of DOE's Office of International Science and Technology Cooperation.

    Visa policies are likely to be discussed next month at a seminar in Washington, D.C., for science attachés from around the world, sponsored by the American Association for the Advancement of Science (publisher of Science). Although attendees hope to hear a clarification of the Bush Administration's policy from a White House official, no one is predicting a return to a time when obtaining a visa for a foreign scientist was routine.


    New Alzheimer's Treatments That May Ease the Mind

    1. Laura Helmuth

    As researchers come to better understand their foe, they're devising more means of attacking it. The 4000 attendees at last month's Alzheimer's disease conference heard about progress on several fronts.

    STOCKHOLM—Children argue about what's scarier: ghosts or monsters? Fire or sharks? Adults aren't above idly comparing their own fears. Which would be worse: cancer or heart disease? A car crash or AIDS? But a strong case can be made that the scariest thing about growing old is the risk of mind-robbing Alzheimer's disease. Its prevalence increases each decade until, by some estimates, people in their 90s stand a 50-50 chance of having developed the disease. As the average life-span increases in most parts of the world, “the more important treatment of this disease becomes,” says Jan Carlstedt-Duke of the Karolinska Institute in Huddinge, Sweden.

    Alzheimer's researchers gathered here* last month with a sense of urgency and optimism about possible treatments—and perhaps preventions—for the disease. Alzheimer's is a wily adversary. Its end product—the widespread death of brain neurons—has been linked to several different insults, including inflammation, oxidative injury, and the deposition of abnormal clumps of a small protein called β amyloid. This complexity makes the disease a challenge to investigate, says Lennart Mucke of the Gladstone Institute of Neurological Disease in San Francisco, “but it implies a diversity of therapeutic opportunities.”

    Researchers have responded to this diversity, and at the meeting they reported on progress or failures with several different approaches. Some strategies are aimed at β amyloid, seeking either to block its production or to clear the abnormal buildup of the peptide from the brain. Others soothe inflammation, absorb oxidative damage, or try to bolster the function of flagging neurons. Although a few of these therapies are already in use or in clinical trials, even more are at a “proof of concept” stage. “What we're seeing now is lots of first steps,” says Colin Masters of the University of Melbourne in Australia.

    Secretase inhibitors

    One of the defining features of Alzheimer's disease is the accumulation of so-called senile plaques at the ends of degenerating brain neurons. The active ingredient in these plaques, and the main player in Alzheimer's damage, is thought to be β amyloid, making it a favorite target for new therapies (Science, 19 July, p. 353).

    β amyloid is produced by two enzymes that clip it from a longer protein called, straightforwardly enough, β-amyloid precursor protein (APP), the function of which is currently unknown. Researchers believe that blocking one or both of these enzymes, dubbed β- and γ-secretase, will decrease the amount of β amyloid in the brain and thus either prevent Alzheimer's disease or slow its progression.

    The race to find inhibitors of the two secretases has been intense. William Thies of the Alzheimer's Association in Chicago estimates that about 100 such compounds have tested positive in mice engineered to develop Alzheimer-like β-amyloid deposits. Results are often hard to come by, however, because much of the work is proprietary and unpublished. But word has it that, for still unknown reasons, most of the effective compounds work on γ-secretase.

    At the meeting, Patrick May of Eli Lilly and Co. presented data on a γ-secretase inhibitor that, at low doses, dramatically reduces the amount of β amyloid in mice without inducing common side effects, such as interfering with an important cell signaling pathway called Notch. The company won't confirm rumors that the compound is in clinical trials. Stephen Freedman of Elan Pharmaceuticals in South San Francisco presented similar data for another γ-secretase inhibitor.

    Time lapse.

    Treatments aim to arrest the shrinking of brains with Alzheimer's disease. Brain areas that are smaller after 2 years with the disease appear in green; the fluid-filled ventricles (red) have expanded.


    In addition, a few teams are reporting some success with β-secretase inhibitors. For instance, Wan-Pin Chang of the Oklahoma Medical Research Foundation in Oklahoma City described several that abolish the production of β amyloid in engineered mice.

    Metal chelation

    Although β amyloid might be a prime instigator of Alzheimer's plaque formation, it doesn't work alone. Its partners in crime include metal ions such as zinc and copper, both of which become more concentrated in the brain with advanced age. In the early 1990s, Ashley Bush of Harvard University discovered that these metals induce β-amyloid aggregation; without them, senile plaques in brain tissue samples dissolve. What's more, the combination of β amyloid with metal ions produces huge amounts of hydrogen peroxide in the brain, causing oxidative damage.

    Masters and Bush have a strategy for removing metal ions from brains of patients with Alzheimer's disease (Science, 17 November 2000, p. 1273). At the meeting, Masters reported the results of a small phase II clinical trial suggesting that their approach might slow progression of the disease. The researchers exploit an antibiotic called clioquinol that chelates (chemically binds) metal ions. Clioquinol was withdrawn in 1970 after it was linked to a deadly vitamin B-12 deficiency in patients in Japan, but the drug never lost its approval from the U.S. Food and Drug Administration. For their trial in Alzheimer's patients, Bush says, the researchers gave participants vitamin B-12 supplements to avoid the potential side effect.

    The trial enrolled 36 patients with moderate Alzheimer's disease; half took clioquinol and half, a placebo. The researchers assessed the patients' mental abilities at the beginning and end of the 9-month trial. Those on placebo seemed to degenerate more rapidly than those taking the drug. Although the difference only approached statistical significance, the results were sufficiently encouraging that the team is now designing a larger treatment trial that will enroll a few hundred patients, says Robert Cherny of the University of Melbourne.


    Other researchers have enlisted the immune system to help clear β amyloid from the brain, although few potential treatments have aroused as much hope and dismay as this one. Pioneered by Dale Schenk of Elan, the idea was to use β amyloid itself to train the immune system to seek out and destroy β-amyloid deposits. Early animal studies showed that vaccination with the peptide prevented buildup of β amyloid in young mice genetically engineered to make plaques and allowed older mice to rid themselves of most of their plaques. Phase I clinical trials in the United States and the United Kingdom suggested that the vaccine was safe (Science, 21 July 2000, p. 375), and a phase II trial was begun with 375 patients in the United States and Europe.

    Christoph Hock of the University of Zürich, Switzerland, reported at the meeting that patients vaccinated with β amyloid produced antibodies that bound to plaques and other forms of β amyloid in tissue taken from other Alzheimer's patients at autopsy. However, the trial had to be halted last year after about 5% of the patients came down with meningoencephalitis, a potentially deadly inflammation of the brain and surrounding membrane. The news was “quite disappointing” and came as a “huge surprise,” said Sangram Sisodia of the University of Chicago at the time.

    Still, many researchers aren't ready to write off the approach. The original vaccine primed the immune system to fight the full β-amyloid peptide, but Elan's Peter Seubert reported that it might be possible to design a safer vaccine using a snippet of the peptide. Mouse studies have shown, he says, that the fragment of β amyloid that induces the immune system to clear out the peptide is different from the fragment that provokes the more dangerous T cell immune response that probably underlies the patients' brain inflammation. It should be possible to design a vaccine that stimulates the beneficial immune response but not the other type, he predicts. Seubert says Elan is also experimenting with mass-produced antibodies that can be injected directly into the body, avoiding the need to stimulate the patient's immune system.


    Cholesterol is another major player in Alzheimer's disease pathology, researchers have come to realize in the last few years (Science, 19 October 2001, p. 508). People with high cholesterol levels are more likely to develop the disease, and epidemiological studies suggest that lowering cholesterol levels, particularly by taking drugs from the statin family, reduces one's risk.

    Source of trouble.

    Two secretase enzymes cut β amyloid free from the longer protein APP, which is located in the cell membrane.


    Cholesterol apparently contributes to Alzheimer's development by fostering β-amyloid production. Indeed, an autopsy study presented here by Miguel Pappolla of the University of South Alabama in Mobile indicated that every 10% increase in blood cholesterol levels doubles the risk of having β-amyloid deposits in the brain.

    At the meeting, Konrad Beyreuther of Heidelberg University in Germany presented results from the first clinical trial of statins in people with mild Alzheimer's disease. In a double-blind 26-week study, 17 patients received a placebo and 20 took the highest allowed dose of the drug. Researchers monitored patients' scores on a standard cognitive test and found some evidence that the statin had helped. Although the difference wasn't significant, those taking placebos had a bigger drop in their standardized test score than those on the statin.

    Bolstered by epidemiological research and this preliminary study, Leon Thal of the University of California, San Diego, and colleagues are launching a larger clinical trial. It will enroll 400 Alzheimer's disease patients starting in September and compare the progress of the disease in those taking statins versus those taking placebos.

    Neurotransmitter targets

    The only drugs currently approved in the United States for treating Alzheimer's disease are cholinesterase inhibitors. They don't attack the disease but help the brain compensate for the loss of neurons that communicate via the neurotransmitter acetylcholine. The treatment, which prevents an enzyme from breaking down acetylcholine, appears to slow progression of the disease, although the improvement isn't dramatic.

    In Germany, however, the standard prescription for Alzheimer's disease is a drug called memantine that works on a different neurotransmitter system. It blocks the action of the neurotransmitter glutamate, which is overproduced in the brains of people with Alzheimer's and other diseases and can overexcite neurons to death.

    Memantine is now in phase III clinical trials in the United States. At the conference, Barry Reisberg of New York University Medical Center reported the results of one trial, which included 252 people with advanced Alzheimer's disease and lasted 52 weeks. It showed that patients who received memantine maintained more mental abilities and were less impaired on a survey of “activities of daily living” than patients who took placebos. The benefits, like those with the cholinesterase inhibitors, were modest. But as Ezio Giacobini of the University of Geneva in Switzerland pointed out, there's no reason the two types of drugs can't be given together or in combination with other treatments yet under development.

    An ounce of prevention …

    Alzheimer's disease usually develops slowly, taking many years to get to the point where symptoms become noticeable. Therefore, many experts think that it might be more promising to try to prevent the disease. Because most cases occur in the elderly, delaying the average onset time of Alzheimer's disease by as little as 5 years would halve the disease's prevalence, says Steven DeKosky of the University of Pittsburgh.
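    DeKosky's arithmetic can be unpacked with a back-of-the-envelope model. If age-specific prevalence roughly doubles every 5 years of age (a standard epidemiological rule of thumb, consistent with the 50-50 odds by the 90s cited earlier; the 5-year doubling time is an assumption here, not a figure from the article), then:

    ```latex
    % Assumption: age-specific prevalence P(a) doubles every ~5 years of age.
    \[
      P(a) = P_0 \, 2^{(a - a_0)/5}
    \]
    % A uniform 5-year delay in onset shifts this curve to the right:
    \[
      P_{\text{delayed}}(a) = P(a - 5) = P_0 \, 2^{(a - 5 - a_0)/5} = \tfrac{1}{2}\, P(a)
    \]
    ```

    Because overall prevalence is dominated by the oldest age groups, halving the age-specific curve at every age roughly halves total prevalence, which is DeKosky's point.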

    Researchers are now testing a variety of interventions that have turned up in epidemiological studies as preventing or slowing the onset of Alzheimer's disease. One such idea is to give anti-inflammatory drugs called NSAIDs to people with a family history of Alzheimer's disease (see following story).

    In addition, several long-term observational studies, including some presented at the meeting, suggest that people who get a lot of antioxidants, such as vitamin C or E, in their diet or as supplements are less likely to develop Alzheimer's disease. And some evidence suggests that Alzheimer's disease patients decline more slowly if given vitamin E. DeKosky and colleagues have completed recruitment for a prevention trial to test whether ginkgo biloba extract, which has antioxidant properties, can slow Alzheimer's disease. They expect early results in a year and a half.

    Other evidence points to the possibility of using the hormone estrogen to stave off Alzheimer's disease. But that approach became more problematic this summer when leaders of the Women's Health Initiative announced that they were suspending a long-term, placebo-controlled trial of hormone replacement therapy (HRT) due to an unacceptable risk for breast cancer, heart attack, and stroke (Science, 19 July, p. 325). Even so, the National Institutes of Health (NIH) in Bethesda, Maryland, is sponsoring an Alzheimer's disease prevention trial using HRT, and Marcelle Morrison-Bogorad of the National Institute on Aging says there are no plans to suspend recruitment. She points out that animal and cell studies suggest that estrogen helps neurons make connections and interferes with the production of β amyloid.

    The NIH study should reveal whether HRT holds off Alzheimer's disease. But at the meeting, Norman Relkin of Weill Medical College of Cornell University in New York City reported that estrogen supplements given to women who already have Alzheimer's disease didn't help their symptoms and caused an unacceptable number of side effects.

    Several researchers at the meeting remarked on the change in atmosphere since the conference first convened in Las Vegas in 1988. “Fourteen years ago, we couldn't talk about drugs,” says Agneta Nordberg of the Karolinska Institute. Prevention and treatment trials will take years to give clear results, but Melbourne's Masters points out that the field is keeping “a lot of irons in the fire.” Perhaps in another 14 years, Alzheimer's disease patients will raise a toast to researchers with a cocktail of drugs that slows down or stops the disease in its tracks.

    • *The 8th International Conference on Alzheimer's Disease and Related Disorders, Stockholm, 20–25 July.


    Protecting the Brain While Killing Pain?

    1. Laura Helmuth

    Epidemiological studies link use of certain analgesics to a decreased risk of Alzheimer's disease. But the link has yet to be tested experimentally, and researchers are fiercely debating just which drugs to test

    STOCKHOLM—Even to skeptics of epidemiological studies, the data look pretty impressive. More than 20 reports over the past decade have indicated that taking certain painkillers for many years reduces the risk of developing Alzheimer's disease, the dreaded brain disease that robs people of the ability to think. “The observational data are remarkably consistent,” says John Breitner of the University of Washington, Seattle.

    Still, most Alzheimer's disease researchers are reluctant to recommend popping the painkillers, known as NSAIDs (for nonsteroidal anti-inflammatory drugs), routinely, the way some people take calcium supplements to ward off osteoporosis. Because NSAIDs can have serious side effects, such as potentially fatal gastrointestinal bleeding, these researchers are waiting for controlled clinical trials to show that the protective effect is real, robust, and worth the risk. One large-scale clinical prevention trial designed to answer just that question is now getting under way. But at the International Conference on Alzheimer's Disease and Related Disorders held here last month, many participants expressed concern that the trial is testing the wrong drugs.

    The debate arises in part from disagreements over how NSAIDs might protect against Alzheimer's disease. An early and reasonable hypothesis was that NSAIDs soothe what Patrick McGeer of the University of British Columbia in Vancouver calls the “raging inflammation” seen in the brains of people with Alzheimer's disease. Immune cells encircle the abnormal plaques that are one of the defining pathological features of the disease. Subduing this response, the theory goes, prevents hyperactive immune cells from targeting nearby neurons and destroying them.

    Recently, others have begun to question this model, pointing out that NSAIDs have several additional powers that might instead underlie their ability to fight Alzheimer's disease. For example, they can protect against the oxygen radicals also thought to contribute to the brain damage. And since late last year researchers have been buzzing about another new NSAID trick, discovered in cultured cells: Some of the drugs dampen production of the most toxic form of a peptide called β amyloid whose deposition in the brain is thought to seed plaque formation. Chillingly, though, other NSAIDs—including two now being tested for possible protection against Alzheimer's disease—encourage production of this particularly toxic type of β amyloid.

    Raging inflammation.

    A marker for immune cells called microglia shows extensive inflammation in the brain of an Alzheimer's disease patient.

    CREDIT: CAGNIN ET AL., THE LANCET 358, 461 (2001)

    The finding sent epidemiologists scrambling to reanalyze their data for any differences in the effects of various NSAIDs. Conference reports from two massive population studies gave conflicting results: One suggested that any NSAID can protect against Alzheimer's disease; the other suggested that only those NSAIDs that inhibit the more virulent β amyloid are protective. But animal studies reported here confirmed that NSAIDs have the same influences on β amyloid in vivo as they do in vitro, further raising the possibility that some NSAIDs might conceivably exacerbate Alzheimer's disease.

    Aches and gains

    Interest in using NSAIDs to prevent Alzheimer's disease dates to 1990, when physician and neuroscientist McGeer reported a curious coincidence. His team discovered that arthritis patients, who take regular doses of NSAIDs to control their pain, have an unusually low risk of Alzheimer's disease. Since then, this apparent benefit of NSAIDs has shown up in several other epidemiological studies, including some on twins.

    But such epidemiological studies have some inherent problems that leave room for uncertainty about their results. For instance, getting solid information on drug use is tricky, particularly in retrospective surveys in which people have to remember what drugs they once took and in what doses. People with Alzheimer's disease by definition have poor memories, and their relatives might not have kept close track of such things. One large study avoided such problems by gathering the participants' pharmaceutical records for 8 years from a citywide computer system in Rotterdam, the Netherlands. It came to the same conclusion as McGeer.

    Starting in 1990, Bruno Stricker of Erasmus Medical Center in Rotterdam and colleagues followed more than 7000 people over age 55 who did not show signs of dementia at the time the study began. The pharmacy records told them which types of NSAIDs and other drugs people picked up and how often they went back for refills, and the researchers tested the participants for various types of dementia several times during the 1990s. Compared to people who didn't take NSAIDs, the team reported last year in The New England Journal of Medicine, those who took the drugs for at least 2 years had an 80% reduction in their risk of developing Alzheimer's disease.

    Dramatic though such findings might be, Stricker points out, “observational data are never the final proof.” Alzheimer's disease develops slowly, and perhaps, Stricker suggests, people with incipient Alzheimer's disease don't experience pain in the same way as those without the brain degeneration, or they might be unable to express their discomfort and seek help. If so, they might have taken fewer NSAIDs than their healthy peers, but the NSAIDs themselves wouldn't have had any influence over who came down with the disease.

    To get a better fix on NSAIDs' presumed protective effects, both the drug companies and the National Institute on Aging (NIA) in Bethesda, Maryland, have conducted placebo-controlled, blind studies of people already diagnosed with early-stage Alzheimer's disease. They all failed to show a benefit, says Steven Ferris of the Silberstein Aging and Dementia Research Center at New York University. Paul Aisen of Georgetown University in Washington, D.C., described the results of the largest such study, including 351 patients with mild Alzheimer's disease. In one year, he reported, patients taking either the NSAID rofecoxib (sold as Vioxx) or naproxen sodium (sold as Aleve) declined just as rapidly as those on placebo.

    Breitner, for one, wasn't surprised. The epidemiological data show “little benefit [of NSAIDs] once you have dementia,” he says. “Primary protection trials are the only ones with a chance of benefit.”

    Breitner is now leading an NIA-funded prevention trial that aims to determine whether NSAIDs do indeed protect against Alzheimer's disease. His team has recruited about 1000 of the planned 2625 participants at sites throughout the country. Volunteers commit to taking a mystery pill—either placebo, naproxen, or celecoxib (sold as Celebrex)—twice a day for 5 to 7 years.

    Drug choice questioned

    Naproxen was chosen for the prevention trial partly for convenience. The drug company that makes it was already providing unmarked pills for the earlier NIA treatment trial and offered to continue the service. As Marcelle Morrison-Bogorad of NIA says, “the National Institutes of Health is impartial [about which drugs to test], but we try to save money.”

    Some people have been second-guessing this choice, however. Last year, Edward Koo of the University of California, San Diego, and colleagues reported that naproxen, rofecoxib, and celecoxib are among the NSAIDs that spur neurons in lab cultures to produce the more dangerous version of β amyloid, which contains 42 amino acids rather than the standard 40. Other NSAIDs, Koo's team found, reduce β amyloid 42 (Aβ42) production in favor of a relatively harmless form, Aβ38. These include ibuprofen (sold as Advil or Motrin), sulindac, and indomethacin.

    In research presented at the meeting, Koo, Todd Golde of the Mayo Clinic in Jacksonville, Florida, and colleagues provided evidence for a possible mechanism by which NSAIDs might influence which β-amyloid molecule is produced. The peptide is clipped from a larger protein by a poorly understood enzyme called γ-secretase, and the researchers found hints that NSAIDs might alter how that enzyme functions by interacting with one of its components.

    Koo admits that the experiments indicating that NSAIDs influence β-amyloid production have used enormous doses of the drugs, and he says it's too soon to tell whether this mechanism might underlie a protective effect of some NSAIDs: “We're only looking at an Aβ42 effect. Whether that explains all the epidemiology, we don't know.”

    The epidemiology itself provides conflicting evidence about whether reduction of Aβ42 is key to the NSAIDs' protective effects. After hearing about Koo and colleagues' work, Stricker reexamined the data from the Rotterdam study. Although the effect wasn't significant, he says that the protective effect of NSAIDs was apparent only for the drugs shown to lower Aβ42. Breitner, however, came to a different conclusion. In addition to coordinating the prospective trial, he helps run a longitudinal study of people in Cache County, Utah, whose residents are among the longest-lived in the United States. His reanalysis suggests that both naproxen and aspirin, neither of which reduces Aβ42, do protect against Alzheimer's.

    Given the current uncertainties about the effects of naproxen and celecoxib, some argue that it's not worth the risk to use them in an expensive prevention trial, particularly if other drugs stand a better chance of protecting volunteers. If some drugs are known to decrease Aβ42, “shouldn't those be the ones [to test]?” asks David Morgan of the University of South Florida, Tampa. “Is it justifiable to continue if you're not sure you have the best one?”

    The problem might be especially acute for naproxen, one of the older NSAIDs that eases pain by blocking two related enzymes, Cox-1 and Cox-2. As a result, it runs a high risk of causing gastrointestinal (GI) bleeding and perforated ulcers. In contrast, celecoxib, a specific Cox-2 inhibitor, is easier on the GI tract. Breitner also points out that there's another reason for testing celecoxib, even though it doesn't have as long a protective history as the older drugs. Cox-2 is overactive in the brains of people with Alzheimer's disease, and he says it's worth seeing whether the relatively safe selective Cox-2 inhibitor will stall the disease as well as the older, nonselective NSAIDs.

    In any event, NIA has no plans to change the study, Morrison-Bogorad says: “There's lots of talk that these might not be the right drugs, but some believe it's as good a combination as any.” Recruitment has been slower than anticipated, but barring unacceptable side effects or other circumstances that would shut down the trial, Breitner expects results by 2009.


    Unconventional Detective Bears Down on a Killer

    1. Jennifer Couzin

    A veteran of the bioweapons treaty wars has taken on a leading role in pressing the FBI to find out who mailed the anthrax letters

    Representatives from nongovernmental organizations were supposed to sit quietly in the gallery as the delegates to a 4-week conference last summer in Geneva debated how to strengthen the Biological and Toxin Weapons Convention (BTWC) protocol. But that rule didn't stop molecular biologist Barbara Hatch Rosenberg from plopping herself down in a seat on the main floor. “I just walked in; nobody said anything,” explains Rosenberg, who serves as chair of the Federation of American Scientists' (FAS's) working group on biological weapons verification. Members of the U.S. delegation were unhappy with the ad hoc seating arrangement, however, and forced her to move back to the gallery.

    Rosenberg's supporters and detractors already knew she was a hard-nosed and vocal activist who's unmovable once she takes a stand. “Barbara obviously makes no bones about her views,” says Stephen Morse, an epidemiologist at Columbia University in New York City and a longtime friend. A government scientist who's battled Rosenberg for years puts a sharper edge on his description of her: “What she brings [to discussions] is an attitude.”

    That attitude has helped Rosenberg become one of the most visible critics of the FBI's investigation into the anthrax mailings last fall that killed five people and sickened at least 17 others. Less predictably, she also has become the leading nongovernment authority on who might have committed the crimes. Coming soon after the 11 September terrorist attacks, the mailings heightened the country's sense of vulnerability from abroad. But within weeks, Rosenberg asserted in a very public setting that the attacker was an American—specifically, a scientist with access to a federal lab that studies biological agents. The FBI's actions in the case have since converged with that profile, in particular, shining a spotlight on Steven Hatfill, a microbiologist who earlier this month vehemently proclaimed his innocence and accused the government and the media of ruining his life (Science, 16 August, p. 1109).

    How did a 70-something academic—she's an environmental science professor at the State University of New York (SUNY), Purchase—and bioweapons expert come to take on such a prominent role in this manhunt? Rosenberg professes surprise at the attention she's received, saying simply, “From what I knew the FBI knew, I knew they should be farther along” in their investigation. “That's why I began making statements.”

    Her motivation, she says, is to deter future assaults by helping solve the first deadly bioweapons attack within the United States. But her profile of the attacker also jibes with other stances she has taken. They include support for a protocol to strengthen the BTWC by advocating inspections to assess bioweapons production—a protocol from which the United States recently walked away (Science, 24 August 2001, p. 1415). She also opposes building more bioweapons labs.

    Profiling the attacker. The anthrax letters that struck down and disrupted lives in New York, Florida, New Jersey, Connecticut, and Washington, D.C., last fall embodied the fears of Rosenberg and many other bioweapons experts, who had long warned that the country was ill prepared to handle such an attack. Her 2 decades of work in bioweapons control have given Rosenberg deep ties in the community; almost immediately following the attacks, she began receiving unsolicited tips from U.S. scientists whose connections with federal programs prevented them from speaking publicly.

    By early November, Rosenberg says that certain clues, including signs that the anthrax strain had come from the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) at Fort Detrick, Maryland, convinced her that the perpetrator was an American. She went public with those thoughts on 21 November at a BTWC meeting in Geneva, asserting that New York City “has just been attacked, first by foreign terrorists, then by an American using a weaponized biological agent.”

    Authoritative voice.

    Rosenberg's theories about who's behind the anthrax attacks have received much attention.


    Rosenberg declines to explain why she chose that venue. But her voice rises in anger when she recalls how U.S. officials refused to join with other delegations at the November meeting. “[The U.S. was] accusing everyone else of having bioweapons, when the attack was coming from our program. … I felt that it was necessary to point out.”

    Rosenberg came to believe that the scientist-perpetrator didn't intend to kill—after all, she says, the attacker warned that the letters contained anthrax or that the recipients should take penicillin—but rather nursed a grievance against the government for unfairly neglecting the U.S. bioweapons program. Since November, her theories have been widely disseminated over the Internet and in the media.

    Earlier this year, Rosenberg wrote that the FBI had a suspect in mind but was reluctant to pursue him because “the suspect knows too much and must be controlled forever from the moment of arrest.” She has since grown more circumspect about a possible conspiracy, saying, “I can only speculate as to why” the FBI hadn't been more aggressive.

    That view still doesn't sit well with some scientists, although few are willing to criticize her in public. “My feeling is that if there is such a conspiracy, the FBI is not a part of it,” says Steven Block, a biophysicist at Stanford University who has advised the U.S. government on bioweapons. Some scientists also felt that it wasn't a coincidence that Rosenberg's profile of the attacker fit one person. “She just seems to be too anxious to pin this on [Hatfill],” says Peter Jahrling, a senior USAMRIID researcher, who says Rosenberg's comments about the case led him to decide early on that she had Hatfill in mind. Rosenberg maintains that she never named Hatfill or anyone else in comments to the FBI or in her statements.

    Rosenberg zealously preserves the anonymity of her sources, saying only that they are government scientists and other insiders. Those within and outside government labs agree that her sources seem knowledgeable. Jahrling, however, suggests that Rosenberg doesn't have many friends in the government's biodefense labs because she opposes their planned expansion. Any expansion, she has argued, just adds to the pool of scientists with the means to pull off another bioweapons attack.

    Both admirers and detractors agree that she has pushed the FBI forward. “Without question, she's influenced this investigation,” says Block, who also strongly suspects that the culprit, if not a U.S. citizen himself, has ties to the U.S. bioweapons program. Privately, scientists who support Rosenberg praise her for taking on what they call a thankless job.

    Rosenberg, whom the conservative Weekly Standard ridiculed as “the Miss Marple of SUNY/Purchase” in a recent article, maintains that the importance of finding out who sent the anthrax-tainted letters demanded her involvement and that her celebrity is purely accidental. Mark Wheelis, a microbiologist at the University of California, Davis, and a member of the FAS working group that Rosenberg runs, agrees that she generally shuns the limelight. But her determination, he notes, serves her well here: “The toughness is not part of her normal manner; it's a reserve she can draw on when it's called for.”

    And what if she's wrong? Rosenberg concedes that interrogating Hatfill might not help the FBI crack the case. But she quickly reverts to character. Even if that's true, she says, “the broad principles and the things I've said, I stand behind.”


    Drought Exposes Cracks in India's Monsoon Model

    1. Pallava Bagla

    The annual summer monsoon rains are vital to India's economy. But a drought this summer suggests that a homegrown prediction model might be all wet

    NEW DELHI—India's first serious drought in 15 years is doing more than parching the soil and threatening the country's food supply. It has also stirred up a debate over the robustness of a homegrown climate forecasting model that badly missed predicting this summer's sharp decline in life-sustaining rains over much of the country.

    The summertime monsoon across India is among the toughest climatic phenomena to understand and predict because of the complex atmospheric conditions in the tropics. But its importance to the economy (the June-to-September monsoon typically delivers more than 80% of the country's 88 cm of annual precipitation) led scientists at the India Meteorological Department (IMD) in the 1980s to create a statistical model that could anticipate the timing and extent of the monsoon. The model incorporates data from 16 meteorological events believed to affect the monsoon, including winter snow cover in the Eurasian mountains and springtime atmospheric pressure over Argentina. Factoring in all these elements, IMD predicted in May that India would experience a normal monsoon.

    But this summer has been anything but normal. Although the first monsoon rains arrived on schedule in early June, they soon petered out. By early August the cumulative rainfall for the country stood at 30% below average, with a particularly heavy impact on the country's grain belt in western India. That's an enormous variation for a weather system in which a 10% variation from the long-term average rainfall gets classified as an extreme event. By that definition, the last drought occurred in 1987, with the most recent serious monsoon-caused floods coming in 1994. At the same time, this month has also seen deadly flooding in the northeast.

    Scientists don't really know why this year's forecast was off by so much. The model also failed badly in predicting the 1994 floods. “Monsoon prediction is at best a statistical gamble whose reliability is marginally above the throwing of the dice,” says Dev Raj Sikka, the man who co-invented the model and who is now chair of the Indian Climate Research Program. Because the model has never really been operationally applied to a drought year, he adds, its “robustness to predict extreme events is questionable.” But S. R. Kalsi, deputy director at IMD in New Delhi, believes that an off year is not surprising. “You cannot order the environment to behave itself,” he says, adding that dry spells “are a normal part of the rich diversity in behavior of the monsoon.”

    Some scientists believe that a string of normal monsoons has made the model seem more reliable than it really is. In a paper due out in the December issue of the Journal of Climate, climatologists Tim DelSole and Jagadish Shukla of George Mason University in Fairfax, Virginia, write that “1989-2000 happens to be a rare period in which predictions based on the climatology of the prior 25 years are unusually good. This reflects the fact that the monsoon rainfall has been near normal every year during this period. Consequently, any forecast model that predicts near-normal rainfall during this period will have a relatively small mean square error.”

    Coming up dry.

    The continuing drought in western India highlights flaws in the country's monsoon prediction model.


    This year's atypical monsoon highlighted the model's shortcomings. It marks the seventh consecutive year in which the gap between predicted and observed rainfall is larger than the model's margin of error (see graph). Curiously, however, none of the predictions has been wrong according to IMD's metric, in which a forecast is correct if both the actual rainfall and the prediction fall within 10 percentage points of the long-term average. Many scientists also are unhappy that they cannot judge the model's performance independently because IMD doesn't make public the values of the 16 predictors or disclose details of how the forecast is developed.
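That scoring rule is simple enough to state exactly. A minimal sketch (the function name and the percent-of-normal convention are illustrative; IMD discloses neither its code nor its predictor values):

```python
def forecast_correct(predicted, observed, tolerance=10.0):
    """IMD's verification rule as described above: a forecast counts as
    correct when both the predicted and the observed rainfall, each
    expressed as a percentage of the long-term average (100 = normal),
    fall within `tolerance` percentage points of that average."""
    return abs(predicted - 100.0) <= tolerance and abs(observed - 100.0) <= tolerance
```

By this rule, a prediction of 99% of normal paired with observed rainfall of 91% of normal is "correct" even though the 8-point gap can exceed the model's margin of error, while a summer at roughly 70% of normal makes any near-normal forecast wrong.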

    Shukla, head of the independent Center for Ocean-Land-Atmosphere Studies in Calverton, Maryland, and a frequent visitor to India, speculates that “artificial skill” might play a role. “We are somewhat baffled why the regression model used by India has not produced a forecast in the past 13 years that is very different from normal,” he says. The final product is vetted by the prime minister's office because of the monsoon's tremendous impact on the Indian economy, which has prompted whispers that the forecast could be manipulated for political reasons. But Kalsi vehemently denies that speculation.

    The IMD model isn't the only one that has trouble foretelling complex phenomena, says Muthuvel Chelliah of the U.S. Climate Prediction Center in Camp Springs, Maryland. “The farther ahead in time one wants to forecast the weather in a particular place, the more one needs to know, at the present time, about a larger and larger region surrounding that place,” he explains. No country, including the United States, has the capability to make monsoon predictions accurately, Chelliah adds.

    Knowing which data to collect is a major challenge. The more parameters, say scientists, the more likely they will cancel each other out. “IMD uses too many parameters in their statistical model,” says Chelliah, and Shukla says that “no more than three predictors should be used,” although he admits that nobody really knows which ones are most appropriate.

    The IMD model also depends heavily on data gathered over land. But most scientists believe that general circulation models, which couple constantly changing ocean and atmospheric conditions with terrestrial events, are essential for understanding large-scale phenomena such as the monsoon. “Prediction of the monsoon will remain a challenging task unless atmospheric data over the oceanic regions surrounding the Indian subcontinent is collected and analyzed,” says Shukla.

    More extensive field observations might help. Such observations began with the Monsoon Trough Boundary Layer Experiment in 1990 and continued with the Bay of Bengal Monsoon Experiment in 1999 and the Arabian Sea Monsoon Experiment in 2002. “The analysis of data from such experiments will ultimately lead to an improvement in the long-range forecast of the monsoon,” says Jayaraman Srinivasan, chair of the Indian Institute of Science's Center for Atmospheric and Oceanic Sciences in Bangalore.

    There's also the need to put what is now being collected to better use. “It is almost a national tragedy that the INSAT [India's national weather satellite] data, collected at such huge cost, is underutilized,” says Shukla. Officials defend their restrictive policies on the grounds of national security and the need to assist scientists who lack the computational facilities to handle large amounts of raw data.

    Despite their disappointment that this year's prediction was wide of the mark, Indian officials say they don't know where else to turn. “I would be willing to prostrate myself in front” of anyone who can do better than IMD has done, says Valangiman Subramanian Ramamurthy, secretary of the Department of Science and Technology, which oversees IMD. Even better models won't help farmers, adds India's minister of agriculture, Ajit Singh: “All the supercomputers and satellites now in use are no substitute for the rain god.”


    Surviving the Long Nights: How Plants Keep Their Cool

    1. Kathryn Brown

    Researchers are exploring plants' hidden strategies for surviving the cold months—such as the Douglas fir that remains green but stops photosynthesis

    When winter creeps in, casting a mottled sky and raw wind, most of Earth's residents take cover. But plants are stuck outside. With nowhere to turn, the plant kingdom has developed its own strategies for surviving—and even using—the cruel cold season. At the recent American Society of Plant Biologists (ASPB) meeting in Denver, scientists reported new research showing just how complex those strategies can be.

    “What happens when plants acclimate to winter?” asked University of Colorado (UC), Boulder, ecophysiologist Barbara Demmig-Adams at the meeting. Her answer: It depends on the plant. Demmig-Adams described a new study in which she, fellow UC Boulder ecophysiologist and husband William Adams, and their colleagues compared winter survival strategies among more than a dozen plant species growing in Colorado's Flatiron Mountains.

    The changes induced by cold ranged from slight to severe. During the study, for instance, the researchers documented a montane Douglas fir and a weed, Malva, growing side by side and sharing the same frigid days—but reacting in very different ways. Despite being an evergreen, or a plant that keeps sunlight-absorbing green chlorophyll year-round, the Douglas fir actually shut down photosynthesis during winter to stop growing. At the same time, the fir upregulated carotenoid pigments—such as zeaxanthin and lutein—to help shed any absorbed sunlight as heat. By contrast, the scrappy Malva kept right on growing through winter, using every above-freezing chance to photosynthesize at full blast.

    The fir's radical strategy is an adaptation to lower temperatures, which impede normal metabolic processes. “Shutting down seems to be the way to go to preserve green leaves in the most extreme winter conditions,” remarks William Adams. Whereas short-lived plants such as Malva and winter cereals can survive at intermediate elevations during winter, only conifers succeed at higher altitudes. By revving up a photoprotective system, the trees avoid the damaging oxygen radicals that would otherwise build up during winter.

    In fact, entire evergreen forests appear to wait out winter. This hunkering down is reflected in the rate of CO2 uptake and release, shown in a study of subalpine forest in the Rockies, published this spring in Global Change Biology by UC Boulder ecologist Russell Monson and his colleagues.

    Under the weather.

    With nowhere to go, plants have evolved creative ways to survive winter.


    Over 2 years, Monson's group found that CO2 uptake by the forest plummeted as winter set in. “Even on days when it was quite warm, with the temperature approaching 15° to 20°C, the forest stayed locked down,” Monson says. But as soon as spring hit in late April or early May, the forest jumped to life almost overnight, becoming a huge carbon sponge. “I think this is the true advantage of being an evergreen: not photosynthesizing year-round, as some researchers have assumed, but instead being able to ramp up quickly in spring, to obtain a lot of seasonal carbon and grow,” Monson says.

    Biennial plants even welcome a winter break, speakers at the ASPB meeting said. Biennials—including many crops, garden favorites, and weeds—bloom only in their second season, after exposure to prolonged cold. Getting this reproductive timing right is critical so that flowers can attract pollinators and best disperse seeds.

    In one talk, Richard Amasino, a biochemist at the University of Wisconsin, Madison, described the emerging molecular mechanics behind “vernalization”: a biennial plant's use of the cold season as a time-out, a transition from growing leaves and shoots to preparing for a burst of spring flowering. “These plants have evolved a way to measure winter and wait until it's been cold enough, long enough, to signify spring,” Amasino says.

    In the past few years, Amasino; Caroline Dean, associate research director at the John Innes Centre in Norwich, U.K.; and others have been unraveling the biochemistry that causes Arabidopsis to overwinter and delay flowering. The researchers have found that as cold sets in, a gene dubbed FRIGIDA promotes the buildup of flowering locus C (FLC), a repressor that blocks genes for flowering. After a period of cold, the plant's FLC levels drop, allowing flowers to emerge when temperatures warm.

    By creating Arabidopsis mutants that flower off schedule, the team has found a handful of additional vernalization genes. Dean's lab last month reported finding the gene VRN1, which helps shut down FLC so spring flowers can bloom (Science, 12 July, p. 243).

    Dean says that Arabidopsis likely bears a key set of floral genes that, when activated, switch on flowering. Vernalization proteins represent just one biochemical pathway that can activate those flowering genes, she says. Other pathways, lined with proteins that sense the day's length (photoperiod), for instance, or developmental changes can also trigger those same genes. “My view is these sets of pathways have somewhat overlapping functions to reinforce each other,” Dean says. “So the plant says, ‘I've had longer days and enough cold, so I'm doubly sure it's O.K.—it's spring, it's time to flower.’”

    The emerging research could help plant breeders improve yields of biennial crops such as alfalfa and sugar beets, adds Jan Zeevaart, a botanist at Michigan State University in East Lansing. With molecular tools in hand, the science should blossom.


    A Fresh Take on Disorder, Or Disorderly Science?

    1. Adrian Cho*
    1. Adrian Cho is a freelance writer in Grosse Pointe Park, Michigan.

    For nearly 80 years the definition of entropy has been literally etched in stone. A few physicists want to carve a new one, but others say the idea is cracked

    Near the middle of Vienna's sprawling Central Cemetery stands the imposing tomb of Ludwig Boltzmann, the 19th century Austrian physicist who first connected the motions of atoms and molecules to temperature, pressure, and other properties of macroscopic objects. Carved into the gravestone, a single short equation serves as the great man's epitaph: S = k ln W. No less important than Einstein's E = mc², the equation provides the mathematical definition of entropy, a measure of disorder that every physical system strives to maximize. The equation serves as the cornerstone of “statistical mechanics,” and it has helped scientists decipher phenomena ranging from the various states of matter to the behavior of black holes to the chemistry of life.

    But roll over, Boltzmann. A maverick physicist has proposed a new definition of entropy, and his idea has split the small and already contentious community of statistical physicists like a cue ball opening a game of pool. Supporters say the new definition extends the reach of statistical mechanics to important new classes of problems. Skeptics counter that the new theory amounts to little more than fiddling with a fudge factor.

    The new definition gives insight into the myriad physical systems that verge on a kind of not-quite-random unpredictability called “chaos,” says Constantino Tsallis of the Brazilian Center for Research in Physics in Rio de Janeiro. Tsallis proposed the definition in 1988, and since then researchers have applied it to subjects from the locomotion of microorganisms to the collisions of subatomic particles, and from the motions of stars to the swings in stock prices. The new definition appears to account for subtleties in the data exceedingly well, Tsallis says. It also probes a gap in Boltzmann's reasoning that Einstein spotted nearly a century ago.

    But many physicists remain highly skeptical. So-called Tsallis entropy simply adds another mathematical parameter that physicists can twiddle to make their formulae better match the data, says Itamar Procaccia of the Weizmann Institute of Science in Rehovot, Israel. “It's just mindless curve-fitting,” he says. Joel Lebowitz of Rutgers University in Piscataway, New Jersey, says that researchers crank out papers on the new entropy by the dozen (Tsallis lists nearly 1000 of them on his Web page), but that most contain few physical insights. “The ratio of papers to ideas has gone to infinity,” he says.

    Several well-respected physicists, however, say that the skeptics have closed their minds to a potentially fruitful innovation. “It's ridiculous to reject this out of hand,” says E. G. D. Cohen of Rockefeller University in New York City. Michel Baranger of the Massachusetts Institute of Technology (MIT) in Cambridge says that behind the skepticism lurk more personal misgivings about Tsallis, who traverses the globe stumping for his idea. “He spends an enormous amount of time making sure his work gets recognition,” Baranger admits, but that doesn't mean his idea isn't a good one.

    Into the mix.

    Proponents hope a new entropy will help physicists untangle tortuous subjects such as turbulence.


    Counting the ways. Anyone who has ever touched a hot burner should have an intuitive feel for the concept of entropy. As heat flows from the metal of the burner into the flesh of a finger, it jiggles the atoms and molecules in the digit, knocking them out of their usual, painless order and leaving them in excruciating disarray. The amount of disorder determines the entropy of the fingertip.

    In the 1870s, when most physicists still doubted the very existence of atoms and molecules, Boltzmann provided the essential mathematical link between the positions and velocities of the tiny particles and macroscopic quantities such as heat and temperature. Boltzmann realized that the positions and velocities of the atoms or molecules within an object could be rearranged in many different ways without changing the object's macroscopic properties. The entropy of the object, he reasoned, simply equals a constant, k in modern notation, times the logarithm of the number of equivalent microscopic arrangements—a gargantuan number denoted W. With that definition Boltzmann bridged the conceptual chasm between the macroscopic and microscopic realms.

    Or rather, he vaulted across it. At a key point in his analysis, Boltzmann simply assumed that the molecules shift from one microscopic configuration to the next in such a way that every possible arrangement is equally likely. But that isn't necessarily true, as Einstein noted in 1910. How the system moves from one configuration to the next depends on the precise interactions between the molecules, and the details of these “dynamics” might make some configurations more likely than others, Einstein observed. If that's the case, Cohen says, the equation for entropy might take a different form: “Tsallis entropy is the first example in classical statistical mechanics that there is something to Einstein's idea.”

    Tsallis allows the probabilities of different configurations of particles to vary only in certain ways. Each configuration can be thought of as a single point in a vast abstract “phase space,” typically with six times as many dimensions as there are particles in the system (three for position, three for velocity). As the particles change configuration, the system traces out a complicated path in this space.

    Boltzmann essentially assumed that the system would wander so that it spent the same amount of time in each equally sized region of phase space. In contrast, Tsallis assumes that the system follows a path that has the shape of a fractal, a curious mathematical object that can have, for example, 2.381 dimensions and that looks essentially the same no matter how much it is magnified (see figure). The fractal limits the ways the system can get from one patch of phase space to another much as an airline's routes might limit the ways a traveler can get from New Orleans to Chicago, Tsallis says. “The two airports aren't connected,” he says, “so you can't go from one to the other without going through Houston.”

    Don't go there.

    Where normal systems wander all over “phase space” (left), Tsallis's systems stick to patchy fractals.


    To account for such fractal paths, Tsallis changed the mathematical form of the definition of entropy and introduced a new parameter, q (see box). The new definition encompasses the old one, Tsallis says, as the two formulae are identical when q equals 1. But when q differs from 1, the new entropy behaves in important new ways. For example, the entropy of an entire system no longer equals the sum of the entropies of its various parts. Systems that behave this way are called nonextensive, Tsallis says, and many systems on the verge of chaos display this property.
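That nonextensive behavior is easy to check numerically. A sketch with k = 1 (the function name is mine; only the formula follows Tsallis):

```python
import math

def tsallis_entropy(probs, q):
    """S_q = (1 - sum(p_i**q)) / (q - 1), with k = 1.
    As q -> 1 this approaches the Boltzmann-Gibbs form -sum(p_i * ln p_i)."""
    if abs(q - 1.0) < 1e-9:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

# For two independent subsystems A and B the entropies do not simply add;
# instead S_q(AB) = S_q(A) + S_q(B) + (1 - q) * S_q(A) * S_q(B).
a, b, q = [0.5, 0.5], [0.25, 0.75], 2.0
joint = [pa * pb for pa in a for pb in b]
sa, sb = tsallis_entropy(a, q), tsallis_entropy(b, q)
assert abs(tsallis_entropy(joint, q) - (sa + sb + (1 - q) * sa * sb)) < 1e-12
```

At q = 1 the cross term vanishes and the familiar additivity of Boltzmann entropy is recovered.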

    Doing the Math. Tsallis entropy involves a power law: For an isolated system, W is raised to the power 1 - q. But when the parameter q goes to 1, the Tsallis entropy equals the logarithmic Boltzmann entropy.

    S_q = k(W^(1-q) - 1)/(1 - q)
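Written out, the q → 1 limit claimed in the box follows from a standard expansion (using W^(1-q) = e^((1-q) ln W)):

```latex
\lim_{q \to 1} S_q
  = k \lim_{q \to 1} \frac{W^{\,1-q} - 1}{1 - q}
  = k \lim_{q \to 1} \frac{e^{(1-q)\ln W} - 1}{1 - q}
  = k \ln W ,
```

recovering the S = k ln W carved on Boltzmann's gravestone.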

    Although the equation on Boltzmann's grave captures the essence of his insight into entropy, he never wrote it down himself. It was German physicist Max Planck who, in 1900, first put it into the form that became Boltzmann's epitaph.

    Tsallis stresses that conventional entropy still applies whenever an object or system is in thermodynamic equilibrium, a placid state in which it has a well-defined uniform temperature. The new entropy comes into play, Tsallis says, primarily when a system is far from equilibrium, either because of some peculiarity of its dynamics or because an outside force continually perturbs it. But such systems are hardly rare special cases, says MIT's Baranger. “Actually, most of the systems in the universe are not in thermal equilibrium,” he says, so Tsallis's work might open many new avenues of research.

    A theory of q. Both proponents and skeptics agree that it's not enough to extract the value of q from the data; the case for the new entropy rests on determining what q means and how to predict its value. Christian Beck of the University of London might have taken a key first step in that direction last year with his analysis of turbulence. Beck studied data accumulated by Harry Swinney and Gregory Lewis, of the University of Texas, Austin, who had produced turbulent flows by placing a liquid in the space between two cylinders and then spinning the inner cylinder. Swinney and Lewis compared the velocity of the flow at two different positions around the cylinder.

    Beck showed that the Tsallis approach nicely accounted for the observed variations in velocity—something that Boltzmann's entropy can't do. More important, Beck produced an equation that connected the value of q to temperature variations from place to place in the roiling liquid—the first time anyone had derived q from details of a system's interactions.

    The work doesn't quite clinch the case for Tsallis entropy, Swinney says, because no one has proved that the temperature varies in just the way Beck presumed. Although measuring the temperature distribution won't be easy, Swinney says, “that's a hypothesis that can be tested.”

    Ultimately, nature will reveal whether Tsallis entropy belongs among the established concepts of statistical mechanics or on the scrap heap of bright, but failed, ideas. And if Boltzmann's fate is any guide, even affirmation might come only slowly and cruelly. Boltzmann's work met with hostility during his lifetime, and the physicist hanged himself in 1906—just a few years before his ideas were vindicated. Not until 1933 did authorities move his body to a place of honor and erect the headstone that memorializes his great insight. In the study of entropy, it seems, acceptance comes only at the end of a long and disorderly path.