# News this Week

Science  30 Mar 2001:
Vol. 291, Issue 5513, pp. 2526
1. BIOCHEMISTRY

# Ribosome's Inner Workings Come Into Sharper View

1. Elizabeth Pennisi

In the latest in a series of stunning advances, a team of structural biologists has unveiled the most comprehensive view yet of one of the cell's most critical components: the ribosome.

While DNA carries instructions for building proteins, ribosomes actually do the work. They produce proteins by stitching together amino acids carried in by transfer RNA (tRNA), according to instructions transmitted from DNA in the nucleus by messenger RNA. Biologists have long wondered just how this factory churns out the thousands of different proteins necessary for life.

Now, as reported online this week in Science (www.sciencexpress.org), a team led by Harry Noller at the University of California, Santa Cruz (UCSC), presents a molecular view of a complete bacterial ribosome, describing its structure down to 5.5 angstrom resolution. Although that resolution is not high enough to discern the positions of individual atoms in this giant complex of proteins and RNA—which means that more work needs to be done to verify the new analysis—it “represents a huge step forward,” says Peter Moore, a Yale biochemist.

Over the past few years, Moore and others have obtained progressively more detailed structures of the two major components, called subunits, of the ribosome. Moore, with Yale's Thomas Steitz and their colleagues, recently published the highest resolution structure yet of the large subunit (Science, 11 August 2000, p. 905). But Noller's team is the first to provide a detailed view of the entire molecule. (The group produced a blurrier image, at roughly 7.8 angstrom resolution, in 1999.) Thanks to this new work, researchers can now see “how the two subunits are interconnected and what kind of environment they [provide] the tRNA,” says Joachim Frank, a cryoelectron microscopist at the Wadsworth Center in Albany, New York. “This is exactly the kind of information that is entirely missing from the previous atomic structures.”

To pull off this feat, says Noller collaborator Jamie Cate, who is now at the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts, “we had to try out a bunch of things” without really knowing in advance what it would take to get a higher resolution image.

Cate's colleagues Marat Yusupov and Gulnara Yusupova, now at CNRS's Biology and Structural Genomics Institute in Strasbourg, France, were the chemistry gurus who worked out the conditions for building ribosome crystals good enough to analyze. One secret seemed to be adding two or three tRNAs to the crystallizing ribosome. When these tRNAs moved into their docking sites on the ribosome, they likely stabilized the structure, allowing for a sharper image, Cate suggests. An added bonus: The three-tRNA complex also captured the ribosome when protein production is in full swing—offering new insights into how proteins are made.

Those insights are sorely needed. Despite years of research, biologists still have an incomplete picture of protein production, says Cate. They know, in general, that incoming tRNAs, each bearing a specific amino acid, shuttle through a series of three docking sites along the interface of the ribosome's two subunits. As the tRNAs move across these sites, they release their cargo and make way for the addition of the next amino acid in the growing protein chain.

Last August, Moore and his colleagues demonstrated that the ribosome's RNA—not its 54 proteins, as expected—catalyzed the linking of the amino acids. The new UCSC data, refined and improved upon by the incorporation of existing structural and biochemical results, are now revealing some specifics of this shuttling, called translocation.

“First of all, we could see the distances the tRNAs have to move during translocation, which are considerable—20 to 50 angstroms,” says Noller. For a molecule, that's quite a leap, he adds. Moreover, they can see that to make these moves, the tRNAs have to wriggle past quite a few obstacles, namely, contacts with the surrounding ribosome that stabilize the docked tRNA.

Because the structure clearly reveals placement of the tRNAs in the three docking sites, says Wolfgang Wintermeyer of the University of Witten/Herdecke in Germany, it also reveals the extent of contact—where the tRNA and the ribosome “touch.” Until now, these contacts had only been hinted at in biochemical studies and in the Noller group's fuzzier ribosome structure.

The Noller group observed, for example, that a prominent kink in tRNA—dubbed “the elbow”—somehow reaches out to ribosomal proteins at each of the three docking sites as it moves into them. The team also saw that some ribosomal proteins have spidery “tails” that hang down into these sites and interact with the backbone of the tRNA, helping keep it in place momentarily. As a result, “we know pretty much what is binding the tRNA to the ribosome,” Noller explains, because the work shows that ribosomal proteins, and not just ribosomal RNA, are involved.

As the researchers hoped, the new structure brings into clearer focus parts of the large subunit that were blurred in the structure Moore's team produced nearly 8 months ago. Analysis of the interactions of the large subunit with the small subunit indicates that the two subunits have to move to allow tRNA to traverse the ribosome during translocation, says Wintermeyer. Thus, translocation likely also involves the dissolution of the molecular bridges between the subunits.

In all, the UCSC structure reveals about 30 of these molecular connections, quite a few more than the six first discerned by Frank's cryoelectron microscopy studies in the mid-1990s. Frank thinks that some of these bridges—the exact chemical nature of which is still unclear—help keep the two subunits in register with one another. Others—likely those in contact or close to the tRNAs—communicate the status of protein production, and the rest participate either passively or actively in the movements of the subunits themselves as they ratchet, possibly making room for tRNA movement.

Like Frank, Moore sees these bridges as key: “The making and breaking of these bridges are almost certainly part of the protein synthesis process.” Therefore, a logical next step, which several teams will likely pursue, would be to make mutations that alter these bridges in specific ways to observe the effects of those changes on protein production.

But some biologists, especially Noller, Cate, and their colleagues, will not be diverted from their quest to make even better crystals to get ever closer to a view of the atoms behind the ribosome's many parts. Says Moore: “You never run out of your desire to go after ever higher resolution.”

2. RESEARCH REACTORS

# German Neutron Source Faces New Demands

1. Robert Koenig

BERN, SWITZERLAND—Plans to open a long-awaited neutron source this fall in Garching, near Munich, were thrown into confusion last week after the German cabinet called for a change in the research reactor's fuel source to avert a potential proliferation threat. It also said the State of Bavaria, rather than the federal government, should pick up the tab for building a storage facility, which might cost as much as the reactor itself, to dispose of spent fuel elements. Unless a compromise is reached, those new demands could delay the start-up of the $500 million FRM-II reactor and make its operations more expensive.

In a 21 March statement, the cabinet said the nearly completed reactor, designed to use highly enriched uranium (HEU) fuel, should shift to medium enriched uranium (MEU) fuel within 5 years. That would bring it in line with an international effort to phase out HEU-fueled research reactors, mainly because of fears that terrorists could divert HEU for nuclear bombmaking. The cabinet's negotiating position, developed with input from both the research and environment ministries, sets up delicate talks between federal officials and those in the Bavarian government, which has resisted a rapid conversion to MEU fuel and has insisted that nuclear-waste storage is a federal responsibility. Berlin, however, holds a trump card: FRM-II must receive a final permit from the environment ministry before it goes on line.

The political sensitivity of using HEU fuel is not new. In the mid-1990s, the U.S. government pushed for the Technical University of Munich, which will house the FRM-II, to redesign the reactor to use less-enriched uranium (Science, 4 August 1995, p. 628). Bavarian officials refused, and the dispute died down until a coalition of Social Democrats and Greens took power late in 1998 and appointed an expert committee to examine the fuel question.

In June 1999, the panel suggested that conversion would be a good thing, but that it would be costly and time consuming to alter the FRM-II's design. Some experts favored postponing conversion, which would cost as much as $55 million, until a new generation of high-density MEU fuel (based on a uranium-molybdenum alloy now used in some Russian reactors) is developed, probably by 2008 (Science, 25 June 1999, p. 2065).

Bavaria's science minister, Hans Zehetmair, started talks this week with federal research minister Edelgard Bulmahn. Zehetmair told Science that the cabinet's proposed deadline for the reactor's conversion to MEU, 1 January 2006, is untenable. “You can't yet set an exact date because scientists are still trying to improve the MEU fuel,” he says. And Bavaria opposes building a separate nuclear-waste storage facility for the FRM-II, Zehetmair says, because “the law makes it clear that this is a federal responsibility.”

Caught in the middle of the dispute are scores of physicists, materials scientists, and structural biologists who have labored for years on instruments for the FRM-II's beam lines. “They need a clear message about its future,” says Winfried Petry, a Technical University physicist who heads the FRM-II's scientific board. “Some of these instruments are unique, and others are the best of their kind worldwide.”

The two dozen instruments for the beam lines (see chart) include the Munich Accelerator for Fission Fragments (MAFF), a machine designed by University of Munich physicist Dieter Habs that would smash neutron-rich nuclei into heavy elements to forge long-lived superheavy elements with atomic numbers up to 126. “We'll be extremely disappointed if [a political contretemps] causes a delay” in the FRM-II start-up, Habs says. And the longer the reactor is in limbo, the more uncomfortable the situation will grow for researchers who have built instruments especially for it. One such device is a double-focusing three-axis spectrometer, designed by Peter Link of the University of Göttingen's Institute for Physical Chemistry. “You can't move it anywhere else without significant changes,” he says.

Petry and Zehetmair say they are not opposed to using MEU eventually, after the high-density fuel recipe is perfected and tested to ensure that the loss in neutron quality would be minimal. But even the next-generation MEU would require changes in the reactor's moderator tank, and using any less-enriched fuel would require boosting the core size and the reactor's power.

#### A SAMPLING OF FRM-II INSTRUMENTATION

• RESEDA neutron resonance “spin echo” spectrometer

• High-resolution “time-of-flight” spectrometer with cold neutrons

• Crystal time-of-flight spectrometer

• Small-angle scattering diffractometer

• REFSANS diffractometer for reflectometry and small-angle scattering

• Instruments for long-wavelength neutrons

• BSM back-scattering spectrometer

• PANDA three-axis spectrometer for cold neutrons

• PUMA double-focusing three-axis spectrometer with thermal neutrons

• Ultracold neutron source

• Instrument for fundamental physics with cold neutrons

• Radiography/tomography with cold neutrons

• MAFF fission fragment accelerator

• HEIDI single-crystal diffractometer with hot neutrons

SOURCE AND CREDIT: FRM-II, GARCHING

Both sides are hopeful that a deal can be worked out. “I still think we can get this reactor running within 6 months,” says Zehetmair. Federal officials agree that it is feasible for the final operating permit to be issued before midyear, if the Bavarians compromise. Meanwhile, scientists are at the starting blocks, waiting for the gun. “Once the final permit is granted, the first fuel element could be installed in about 6 weeks,” says Petry. “The fuel elements are ready and waiting in France.”

3. CONSERVATION

# No Easy Answers for Biodiversity in Africa

1. Gretchen Vogel

Wilderness areas, those vast regions untouched by humans, hold great allure. But in terms of conservation, focusing on only pristine, uninhabited spaces would leave many species vulnerable to extinction, according to a new analysis of human population and biodiversity in sub-Saharan Africa. On page 2616 of this issue, researchers report that some of the most densely populated regions on the subcontinent also contain the greatest biodiversity. “You can't do conservation and development in very different places,” says Andrew Balmford, one of the study's authors. “If your goal is to preserve most of Africa's biodiversity, you're going to have to grapple with the challenges of preserving biodiversity where there are quite a lot of people.”

The analysis does not surprise most conservationists, who for years have been talking about global “hot spots,” areas rich in varied or rare species and also under exceptional pressure from human populations. But the current study provides a more detailed look, says ecologist Gustavo da Fonseca of Conservation International in Washington, D.C. “The fact that these hot spots are emerging even at this finer scale is really surprising,” he says. “We were never sure if we could find hot spots within our global hot spots.”

Balmford, a zoologist at the University of Cambridge, zoologist Carsten Rahbek of the University of Copenhagen, and their colleagues mined a comprehensive database at the Zoological Museum in Copenhagen describing vertebrate populations across sub-Saharan Africa. The team analyzed human census data and data on 1921 bird species, 940 mammal species, 406 snake species, and 618 amphibian species in geographical squares approximately 100 kilometers on a side.

Areas rich in species also tend to contain more people, the team found. To test whether the correlation might be explained by sampling bias—a possible tendency for species lists to be more comprehensive in easily accessible regions close to human population centers—the team compared the correlations separately for different animal groups. If a sampling bias was causing the correlation, says Sir Robert May, a zoologist at the University of Oxford, one would expect the effect to be stronger for less studied groups, for which data are sparse. But in fact, the correlation was stronger for better studied birds and mammals and weaker for relatively uncataloged amphibians.
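
The team's bias check rests on a simple statistical argument, which can be sketched with simulated data (the numbers and the detection model below are invented for illustration, not taken from the study):

```python
import math
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)

# Simulated 100-km squares: human density and true species richness
# are genuinely correlated (as the study found).
squares = []
for _ in range(500):
    people = random.lognormvariate(0, 1)  # relative human density
    richness = 20 + 10 * math.log(1 + people) + random.gauss(0, 3)
    squares.append((people, richness))

# Well-studied group: every species present is recorded.
recorded_full = [r for _, r in squares]

# Poorly studied group: detection rises with accessibility (proxied
# here by human density), mimicking a sampling bias.
recorded_biased = [
    r * min(1.0, 0.3 + 0.2 * math.log(1 + p)) for p, r in squares
]

pops = [p for p, _ in squares]
r_full = pearson(pops, recorded_full)
r_biased = pearson(pops, recorded_biased)
print(f"well-sampled group: r = {r_full:.2f}")
print(f"biased sampling:    r = {r_biased:.2f}")
```

If sampling bias were driving the pattern, the apparent correlation should be strongest where recording depends most on accessibility, that is, in the least studied groups; the study saw the opposite, which argues against bias as the explanation.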

The pattern is probably not unique to Africa, says Balmford. In North America, for example, “some of the highest conservation priorities have the highest real estate values,” most notably along the East and West coasts. Smaller studies in South America show a similar pattern as well, says ecologist Stuart Pimm of Columbia University in New York City.

The team found no easy answers when it analyzed which 100-km squares would need some kind of protection to preserve nearly all known species in the database. A strategy that started in regions with minimal human populations still identified a set of squares that contain an estimated 116 million people—almost a quarter of the population of sub-Saharan Africa.
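
The kind of complementarity analysis described above can be sketched as a simple covering procedure over toy data; the squares, species lists, and population figures below are hypothetical, not from the paper:

```python
# Toy complementarity analysis: pick 100-km squares until every species
# is represented, starting from the least-populated squares.
# All squares, species, and populations (in millions) are invented.
squares = {
    "A": {"pop": 0.1, "species": {"s1", "s2"}},
    "B": {"pop": 0.2, "species": {"s2", "s3"}},
    "C": {"pop": 5.0, "species": {"s4", "s5", "s6"}},
    "D": {"pop": 8.0, "species": {"s1", "s4", "s7"}},
    "E": {"pop": 0.5, "species": {"s3"}},
}

def pick_squares(squares):
    """Start with the least-populated squares, adding each one that
    contributes at least one still-unrepresented species."""
    target = set().union(*(sq["species"] for sq in squares.values()))
    covered, chosen = set(), []
    for name in sorted(squares, key=lambda n: squares[n]["pop"]):
        gain = squares[name]["species"] - covered
        if gain:
            chosen.append(name)
            covered |= gain
        if covered == target:
            break
    return chosen

chosen = pick_squares(squares)
people = sum(squares[n]["pop"] for n in chosen)
print(chosen, f"total population: {people:.1f} million")
```

Even a strategy that begins with near-empty squares is forced to include the populous squares holding species found nowhere else, which is the crux of the paper's finding.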

The paper “cuts against much of the ethos of the conservation movement that wants to preserve absolutely pristine environments,” May says. “I share that feeling, but there has to be much more work on determining minimal ecological structure: How much of the original habitat do you have to keep to enable particular plants and animals to coexist with humans?” Tom Lovejoy, a tropical biologist at the Smithsonian Institution and a consultant with the World Bank, says the work lends support to “mixed use” projects like the Mesoamerica biological corridor in Central America. The project aims to include strict protected areas as well as bird-friendly coffee plantations and regions in which a hydroelectric project will pay owners of the watershed as an incentive to preserve the forest.

But mixed-use strategies get mixed reviews. In some regions, Fonseca says, “the only way you're going to make sure anything is left is by having secure borders and protecting what you have.” He says the study highlights the fact that if African biodiversity is to survive, “at some point we have to bite the bullet and make some very strong choices, even if those are costly and difficult both economically and socially,” such as creating well-protected parks and compensating local residents.

The study should help guide some of those choices, Balmford says. Fonseca agrees. “We can't make these decisions unless we know where the species and people are,” he says. “They've done that analysis in an extremely comprehensive way.”

4. NEUROBIOLOGY

# How Cannabinoids Work in the Brain

1. Marcia Barinaga

Marijuana may provide a euphoric high, but it can also boggle one's memory. The impairment is so pronounced in laboratory rats under its influence that they behave in some learning tasks as if a key memory area in their brains, the hippocampus, had been removed entirely. Now, the story takes an intriguing twist: Researchers have discovered that the “endogenous cannabinoids,” marijuana-like chemicals made by our brain whose function has long been a mystery, play key roles in a process that may be central to the laying down of memory, among other things.

In reports this week in Nature and Neuron, three independent research teams—from the University of California, San Francisco (UCSF); Kanazawa University School of Medicine in Japan; and Harvard Medical School in Boston—have shown that cannabinoids are dispatched by some brain neurons to fine-tune the signals they receive. The cannabinoids accomplish this by turning down the activity of the neurons doing the signaling. One form of the process, known as depolarization-induced suppression of inhibition (DSI), occurs in the hippocampus, a brain area involved in memory, and in the cerebellum, which coordinates movements.

The discovery unites two previously unlinked research tracks. It offers “the first concrete example of physiological function” for the endogenous cannabinoids, says neuroscientist Leslie Iversen of the University of Oxford, U.K. And it finally reveals the identity of the molecule responsible for DSI; researchers had been searching for this so-called retrograde messenger for nearly 10 years.

“This is extremely exciting,” says neuroscientist Brad Alger of the University of Maryland Medical School in Baltimore, whose team discovered hippocampal DSI in the early 1990s. Now researchers can manipulate the cannabinoid messengers, he says, to “dissect out the roles of DSI in brain function and behavior.” Alger believes it may prime individual neurons in the hippocampus for long-term potentiation (LTP), the synapse strengthening thought to be central to learning and memory. The discovery also has generated new insights into how marijuana intoxicates the brain.

None of the groups set out to solve the cannabinoid mystery. All were searching for the elusive signaling molecule in DSI. Two years ago, Rachel Wilson, a graduate student with Roger Nicoll at UCSF, took up the hunt. In slices of rat hippocampus, she showed that neurons produce the messenger in response to rising internal calcium levels and that, in contrast to most neurotransmitters, the messenger is not packaged in vesicles for release.

To Jeff Isaacson, a former student with Nicoll who was visiting the lab, those characteristics rang a bell: They're shared by the endogenous cannabinoids. At his suggestion, Wilson treated the brain slices with a chemical that blocks the function of cannabinoid receptors. It blocked DSI. “That experiment alone was the story,” says Nicoll. Wilson confirmed with more experiments that a cannabinoid is the messenger, and she presented her work last November at the annual meeting of the Society for Neuroscience in Miami; her paper appears in this week's Nature.

During that same time, Takako Ohno-Shosaku, working with Masanobu Kano at Kanazawa University, was on a parallel course. Having taken her clue from a 1997 paper from Daniele Piomelli's team at UC Irvine, which showed that activated hippocampal neurons release endogenous cannabinoids, Ohno-Shosaku also found that a cannabinoid blocker prevents DSI in cultured hippocampal neurons. Her results are in this week's Neuron.

Neuroscientist Tamás Freund of the Hungarian Academy of Sciences in Budapest says his team had a clue that cannabinoids play a role in DSI: The researchers showed in 1999 that cannabinoid receptors in the hippocampus are located exclusively on the inhibitory neurons that receive the retrograde signal. “The amazing selective localization of the receptors made them an excellent candidate to mediate DSI,” says Freund.

Meanwhile, Wade Regehr and graduate student Anatol Kreitzer at Harvard Medical School found that excitatory signals can also be inhibited, in a process similar to DSI that they called DSE. Working in slices of cerebellum, Kreitzer found that increased calcium levels trigger neurons to release the messenger that initiates DSE, but he was not able to identify it. After talking with Wilson at her poster presentation at the neuroscience meeting, he tried blocking cannabinoid signals. As he and Regehr report in this week's Neuron, this wiped out DSE, implicating cannabinoids in turning down excitatory as well as inhibitory inputs. The discovery that “excitatory synapses can do it, too,” is important, says Stanford University neuroscientist Dan Madison, because “that makes it more widely useful” to the brain.

“I'd be really surprised,” Madison adds, if DSI and DSE aren't found in other places in the brain. His hunch may soon be confirmed. Freund's group has found cannabinoid receptors in the amygdala, an area involved in emotional memory, on the same class of inhibitory neurons as those on which it is found in the hippocampus, so DSI may occur there. And last year, Yuri Zilberter of the Karolinska Institute, Sweden, reported a DSI-like phenomenon in the cerebral cortex of rats. Now, Alger says, researchers can use cannabinoid-receptor blockers or mice lacking cannabinoid receptors to see whether cannabinoid-mediated DSI occurs in these brain areas and to pin down its roles in brain function.

The studies also shed light on how marijuana affects brain functions such as memory, says Alger. “For years people thought that cannabinoids disrupt the development of LTP,” he says, but now it appears that endogenous cannabinoid release may instead enhance it, by triggering DSI. But whereas the normal effects of DSI and DSE are limited to just the neurons in the vicinity of those releasing the cannabinoid and last only tens of seconds, marijuana use exposes the entire brain to high levels of marijuana's active ingredient, tetrahydrocannabinol (THC), for much longer. That would “swamp the whole system,” says Irvine's Piomelli.

And that may explain findings such as those reported last December in the Journal of Neuroscience by Sam Deadwyler's team at Wake Forest University School of Medicine in Winston-Salem, North Carolina, showing that THC-treated rats behave on some memory tests as if they had no hippocampus. THC flooding the brain would eliminate the local activity patterns set up by DSE and DSI, just as spilling a bottle of ink across a page obliterates any words written there.

5. MARINE MAMMALOGY

# River Dolphins Add Branches to Family Tree

1. Dennis Normile

YOKOHAMA—Scientists who study marine mammals have long puzzled over where to place four species of river dolphins on the family tree. Similar in appearance, the Ganges, Yangtze, Amazon, and La Plata dolphins were thought to be more closely related to each other than to their whale cousins. But new data from a genetic analysis suggest that the species diverged at different times. One of the species may have diverged before beaked whales, whereas most dolphins did not appear until much later.

Norihiro Okada, a molecular biologist at the Tokyo Institute of Technology, and colleagues presented results here* based on a technique that uses unique repetitive bits of DNA, called short interspersed elements (SINEs), that are inserted randomly throughout the genome. Okada says the probability of identical but independent insertions at the same location in unrelated species is vanishingly small, as is the possibility of an insertion being precisely deleted later in evolutionary time. “It's the golden method” for molecular studies of evolution, says Hans Thewissen, a paleontologist at Northeastern Ohio Universities College of Medicine in Rootstown.

Okada and his colleagues gathered DNA samples from 14 cetacean species and identified 25 new SINEs, from which they constructed a relative timeline of whale, dolphin, and porpoise divergence. One significant conclusion was that the molecular analysis shows a clear separation between toothed whales, or Odontoceti, and baleen whales, or Mysticeti. Although this is the traditional morphological division, previous molecular analyses had been divided on the issue.
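
The logic of SINE analysis can be illustrated with a toy presence/absence table; the species' insertion patterns below are invented to mimic the nesting the article describes, not Okada's actual data:

```python
# Toy SINE phylogenetics: a shared insertion at the same genomic locus
# marks a clade, because identical independent insertions (or precise
# later deletions) are vanishingly unlikely. Loci L1..L6 are invented.
sines = {
    "sperm_whale":     {"L1"},
    "ganges_dolphin":  {"L1", "L2"},
    "beaked_whale":    {"L1", "L2", "L3"},
    "amazon_dolphin":  {"L1", "L2", "L3", "L4", "L5"},
    "laplata_dolphin": {"L1", "L2", "L3", "L4", "L5"},
    "marine_dolphin":  {"L1", "L2", "L3", "L4", "L6"},
}

def clades_from_sines(sines):
    """Group species by each shared insertion locus."""
    loci = set().union(*sines.values())
    return {
        locus: frozenset(sp for sp, s in sines.items() if locus in s)
        for locus in loci
    }

for locus, clade in sorted(clades_from_sines(sines).items()):
    print(locus, "->", sorted(clade))
```

Reading the table from the most widely shared locus inward recovers a divergence order (sperm whale first, then the Ganges dolphin, then beaked whales) plus a sister pairing of the two South American river dolphins, mirroring the branching reported at the meeting.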

Based on his analysis, Okada believes that toothed marine animals diverged in the following order: sperm whales, Ganges river dolphin, and beaked whales, followed by the remaining freshwater and marine dolphins. No SINEs were found that could be used to resolve the relationships between those remaining freshwater and marine dolphins, although some SINEs indicate a sister relationship between the two South American river dolphins (Amazon and La Plata), and other SINEs clearly group together the remaining marine dolphins. Despite these gaps, Okada says that “the analysis still clearly shows that river dolphins are paraphyletic.”

The new analysis supports a growing number of morphological studies, says Christian de Muizon, a paleontologist at the National Museum of Natural History in Paris, “so I was quite happy to see these results.” And Ulfur Arnason, a molecular phylogeneticist at Lund University, Sweden, adds that Okada's results are also consistent with a growing number of molecular studies. Both agree that the results strengthen the case for revising where river dolphins fit among toothed whales.

But Okada's conclusion that Ganges river dolphins diverged between sperm and beaked whales (both of which live in the open ocean, are deep divers, and feed on squid) is more controversial. Thewissen says that Okada's conclusion “would clear up some problems” with the prevailing view that sperm and beaked whales are closely related. Although they share morphological and behavioral characteristics, “the [traditional] relationships are not all that well supported [in the fossil record],” Thewissen says. Okada's analysis suggests that beaked and sperm whales evolved at different times as shallow water animals and at some later date independently developed their common characteristics.

De Muizon has serious doubts about that interpretation, however. Although the fossil record suggests some morphological traits that link beaked whales more closely to Ganges river dolphins than to sperm whales, he says, most of the evidence points to beaked and sperm whales' proximity on the family tree. Still, Okada's results are sufficiently intriguing to send de Muizon back home “to rerun the studies.” Arnason, whose own studies have not conclusively resolved such relationships, is heartened that morphologists, who have often dismissed molecular analyses, are taking the study seriously. “The two approaches should really complement each other,” he says.

• * Evolution and Adaptation of Marine Mammals, 12 March, Tokyo Institute of Technology, Nagatsuta campus.

6. CLIMATE CHANGE

# Early Birds May Miss the Worms

1. Elizabeth Pennisi

A long-term study has provided new insight into potential short-term consequences of global climate change. Since the 1970s, Jacques Blondel, an evolutionary ecologist at the Center of Functional and Evolutionary Ecology in Montpellier, France, has spent each spring studying blue tits in the woods near his institute and in Corsica, an island 125 kilometers away. His decades-long project has created “an incredible opportunity to test the idea that short-term climate variations can affect the metabolism and breeding of small birds,” comments Kenneth Nagy, a physiological ecologist at the University of California, Los Angeles.

On page 2598 of this issue, Blondel and his colleagues describe the energetic costs to birds that fail to breed where and when their food is in peak abundance. When food is scarce, parents must work harder to feed their young, and the parents' overall survival suffers as a result, the researchers report. “This is one of the few studies that have empirically demonstrated a link between energy expenditure in parental effort and fitness,” says Tony Williams, a physiological ecologist at Simon Fraser University in Burnaby, Canada. The work suggests that animals could be caught in a race against time as they evolve to adjust to shifts in the seasonal availability of food sources brought about by climate change. Although birds do seem to be breeding earlier in areas undergoing warming, the worry is that some won't adapt fast enough. This new work demonstrates the detrimental consequences that could result.

Blondel embarked on this study when Donald Thomas, a physiological ecologist at the University of Sherbrooke in Quebec, Canada, joined the French group for a sabbatical 4 years ago. Thomas and Blondel quickly realized that the blue tit populations presented an unusual opportunity to examine how animals cope with reproducing when food is not abundant.

Blondel had decades' worth of data on birds from both Montpellier and Corsica, including good statistics on when they started to breed, how many young they produced, and how well parents and offspring did. These data could provide a historical context for any single-season study.

In addition, although birds in the two locales typically settle in different habitats—evergreen oak trees in Corsica and deciduous trees in Montpellier—some of the continental birds nest in evergreen oak forests just like the blue tits in Corsica. Blondel's data had shown that these atypical populations were less likely than usual to return to breed a second year, suggesting that living in this evergreen habitat impaired the birds' overall survival. But Blondel's team didn't know why.

That's where Thomas came in. More interested in short-term energetics of the birds' breeding activity than in long-term evolutionary change, he added a new test to the repertoire of daily observations: one to monitor the energetics involved in the birds' daily activities. When the researchers caught birds to determine their breeding status and to measure growth of the nestlings over the course of a season, they injected them with minute amounts of hydrogen and oxygen isotopes and then took a small blood sample. Twenty-four hours later, they recaught the same birds and took a second blood sample. Working with John Speakman from the University of Aberdeen in Scotland, they used the ratio of the isotopes from the two samples to calculate the amount of carbon dioxide produced, and from that they determined energy expenditure. They gathered the same data for the evergreen dwellers in Corsica and the atypical birds that inhabited the evergreen oak habitats near Montpellier.
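
The isotope arithmetic behind those measurements can be sketched as follows; this uses a standard single-pool doubly-labeled-water equation, and every input number is an illustrative guess rather than a measurement from the study:

```python
# Sketch of a doubly labeled water calculation. The two blood samples
# give washout rates for the hydrogen and oxygen labels; oxygen leaves
# the body as both water and CO2 while hydrogen leaves only as water,
# so the difference between the rates tracks CO2 production. The
# equation and constants follow a commonly used single-pool form; all
# input numbers below are hypothetical, not data from the study.
import math

def turnover_rate(initial_enrichment, final_enrichment, hours):
    """Exponential washout rate (1/day) between the two blood samples."""
    return math.log(initial_enrichment / final_enrichment) * 24.0 / hours

def co2_production(n_mol, k_o, k_h):
    """CO2 production (mol/day) from body-water pool size n_mol (mol)
    and isotope turnover rates k_o, k_h (1/day)."""
    return (n_mol / 2.078) * (k_o - k_h) - 0.0062 * k_h * n_mol

# Hypothetical blue tit: ~7 g of body water, roughly 0.39 mol.
k_o = turnover_rate(120.0, 16.0, 24.0)  # oxygen-18 enrichments (excess)
k_h = turnover_rate(110.0, 25.0, 24.0)  # deuterium enrichments (excess)
r_co2 = co2_production(0.39, k_o, k_h)  # mol CO2 per day
litres = r_co2 * 22.4                   # litres of CO2 at STP
energy_kj = litres * 27.8               # approx. kJ per litre of CO2
print(f"{energy_kj:.0f} kJ/day")
```

The resulting daily energy expenditure for these made-up inputs lands in the tens of kilojoules, the right ballpark for a small passerine; the study's comparisons rest on exactly this kind of per-bird figure measured in both habitats.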

The results were startling, says Thomas. The Montpellier birds were using almost twice as much energy as the birds in Corsica to rear their young. Typically, breeding birds hop about at three to four times their resting metabolism, but these blue tits were running at seven times, a rate that can't be sustained for long.

“At first, the data didn't make any sense,” notes Thomas. But once they compared the timing of the emergence of caterpillars—the birds' favorite food—with the timing of egg hatching in the two locales, the story became clear. In Corsica, the birds breed in June, when new oak leaves stimulate a population explosion in caterpillars. But on the continent, the birds breed 3 weeks earlier, coinciding with the greening of the deciduous oaks and, again, an abundance of caterpillars. That puts the atypical population nesting in evergreen oaks near Montpellier at a disadvantage: Those trees are not budding and caterpillars have not yet emerged in May. Out of sync with their food source, “those birds ended up having to work relatively hard for relatively less payoff,” Nagy comments. That hard work may burn fat reserves, leaving the birds more vulnerable to starvation during the winter, Thomas suggests.

The new work “confirms what many people thought but were never able to show: that breeding too early has a fitness cost,” says Marcel Visser of the Netherlands Institute of Ecology in Heteren. Uncertainties remain, adds Visser, because it is difficult to compare two populations of birds, one of which has evolved to be well adapted to the evergreen oaks while the other has not. Nevertheless, the work hints that as climate changes and the timing of the seasons shifts, says Blondel, “more and more populations of birds will become maladapted to breeding.”

7. TOXICOLOGY

# Science Only One Part Of Arsenic Standards

1. Jocelyn Kaiser

When the Bush Administration decided last week to withdraw new standards that require lower arsenic levels in U.S. drinking water, it brandished scientific uncertainty as a shield against environmental protesters. But the reality is that setting safe levels of very small amounts of toxicants such as arsenic is not a question that science alone can answer. It's a judgment call, and that means a role for politics.

Rocks and soils are the main source of inorganic arsenic in groundwater, although mining and other human-made sources also contribute. People who drink water from tainted sources can eventually develop bladder and other cancers. In 1999, a National Research Council (NRC) panel reviewed the evidence on arsenic and concluded that the current acceptable level of 50 parts per billion (ppb) should be lowered “as promptly as possible.” Although the NRC did not recommend a specific level, on 22 January the outgoing Clinton Administration issued a final rule that would have dropped the safe level to 10 ppb.

Western officials and industry objected, estimating that they would need to spend billions of dollars on treatment equipment to meet the new standard. On 20 March, EPA Administrator Christine Todd Whitman sided with them, saying that she agreed with the NRC but that the Clinton plan was based on “unclear” science. “An independent review … will help clear up the uncertainties,” she added.

But scientists say the evidence won't become clear anytime soon. The lack of a good animal model, until recently, has forced scientists to rely on human evidence—in particular, studies of cancer in Taiwanese villagers exposed to arsenic from wells from the 1920s to 1960s. But those arsenic levels were relatively high—200 ppb or more. To estimate risks at levels below 50 ppb, experts have used a linear relationship to extrapolate the data. But if there is a level of exposure below which arsenic-laced water is harmless, that statistical technique could overestimate the risk. “The lower you go, the greater the uncertainty is,” says Robert Goyer, a retired pathologist who chaired the NRC panel. As a result, Goyer says, setting a standard “depends on a subjective judgment” that must also weigh costs.
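Goyer's point about low-dose uncertainty can be made concrete with a toy calculation. Both models below are calibrated to the same hypothetical high-dose observation; the risk numbers and the 50-ppb threshold are invented for illustration and are not EPA or NRC figures:

```python
# Two dose-response models fitted to the same (hypothetical) observation:
# an excess lifetime cancer risk of 1e-3 at 200 ppb arsenic.

def linear_risk(dose_ppb, risk_at_200=1e-3):
    """Linear no-threshold extrapolation: risk scales with dose."""
    return risk_at_200 * dose_ppb / 200.0

def threshold_risk(dose_ppb, threshold=50.0, risk_at_200=1e-3):
    """Alternative model: no excess risk below an assumed threshold."""
    if dose_ppb <= threshold:
        return 0.0
    return risk_at_200 * (dose_ppb - threshold) / (200.0 - threshold)

for dose in (10, 50, 200):
    print(dose, "ppb:", linear_risk(dose), "vs", threshold_risk(dose))
# At 10 ppb the linear model still predicts an excess risk, while the
# threshold model predicts none; high-dose data alone cannot distinguish
# the two, which is why setting the standard is a judgment call.
```

Both curves pass through the same observed point at 200 ppb, so only data gathered at low exposures, such as the Taiwanese study described next, can favor one model over the other.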

As the EPA takes another look, one new study may bolster the 10 ppb standard. In the 1 March issue of the American Journal of Epidemiology, a Taiwanese research team examined cases of urinary tract cancer in villagers exposed to arsenic levels as low as 10 to 50 ppb. The study, the first of its kind, found that cancer risk rose with arsenic levels even at these low exposures. “On the face of it, I think [the new study] might be quite important,” says Kenneth Brown, a statistician and consultant in Chapel Hill, North Carolina.

8. PEER REVIEW

# NSF Scores Low on Using Own Criteria

1. Jeffrey Mervis

Scientists seem to have no trouble giving their opinions on the scientific merit of a grant proposal. But ask them to rate its potential social impact, and they tend to clam up. And that poses a problem for the National Science Foundation (NSF).

Three years ago, NSF changed the criteria for rating the quality of grant proposals it receives. Instead of asking reviewers to judge them on four factors—the research's merit, its relevance, the investigator's ability to do the work, and the work's impact on the scientific enterprise—NSF asked for ratings on just two: scientific quality and social impact. The change was intended to give social impact —defined to include issues such as education and training, diversity, and addressing of national priorities—a more prominent role in assessments. But a new report from a panel of management experts says that most reviewers don't even bother to rate proposals on their potential social impact, and it chides NSF for not doing more to get scientists on board.

Why does it matter? If NSF doesn't convince legislators that the peer review system provides fair, comprehensive reviews, Congress has suggested it may try to apply its own remedy.

NSF made the changes partly to address complaints from federal legislators that the grants process is an “old boys network” biased against first-time applicants and less prestigious institutions. Indeed, barely a week after the new criteria were promulgated, a Senate spending panel asked NSF to hire the National Academy of Public Administration (NAPA) to study the impact of the new criteria “on the types of research the agency supports.” Although NSF thought the suggestion premature, it agreed to a limited review. But when the Senate repeated its request the following year, NSF contracted with NAPA for a $250,000 study.

That report, delivered last month, concludes that the reviewers are mostly ignoring social impact. Some 73% “disregard criterion 2 [social relevance] altogether or simply merge it into scientific merit,” it notes, while others “parrot the language without making any actual evaluation on the basis of it.” Most reviewers, it says, “use criterion 1 [scientific merit] as a cutoff and then apply criterion 2 to evaluate any remaining proposals.”

The report says NSF bears some of the blame. It notes that the agency gave reviewers broad discretion on how to apply each criterion, a decision that “essentially gives reviewers license to not apply [the social impact criterion] at all.”

“We're not achieving our goal,” admits Nate Pitts, head of NSF's Office of Integrative Activities, which collects data on NSF's peer review process. Some members of the National Science Board, NSF's oversight body, seem to agree. At the board's meeting last month, they asked some sharp questions about the office's latest annual report. “How many proposals are sent back because they don't address criterion 2?” asked mathematician Pam Ferguson of Grinnell College in Iowa. “If we want to implement something, we have to make it bite by affecting funding decisions,” added fellow board member Richard Tapia of Rice University in Houston.

NSF isn't ready for such drastic steps.
“It takes time to get everybody to understand that this is important,” explains Deputy Director Joseph Bordogna. “But that's not an excuse to delay.” Rejecting proposals or reviews “might be an appropriate step to take after we've tried all the other methods,” notes board chair Eamon Kelly. “But remember, you're asking for a real cultural change.”

Congress may not wait. A Senate aide says that peer review at NSF “is one of the top priorities” for the spending panel and that the subject could be addressed in a report later this year that accompanies the agency's 2002 budget. “We want to hear NSF's response to the NAPA report,” says the aide, “and see if it goes far enough.”

9. DNA ARRAYS

# Affymetrix Settles Suit, Fixes Mouse Chips

1. Eliot Marshall

A leading maker of DNA arrays, Affymetrix Inc. of Santa Clara, California, last week made peace with a rival British firm, Oxford Gene Technology (OGT), in a patent fight over fundamental DNA array technology. The settlement ends a bruising transatlantic battle that pitted Affymetrix's patents against similar patents in Europe filed by University of Oxford biochemist Ed Southern. The companies have agreed to withdraw a string of lawsuits in the United States and Europe, and OGT is dropping an appeal it had planned to take to the House of Lords.

The settlement provided welcome relief for Affymetrix, which is contending with an embarrassing, but unrelated, problem: Some of its arrays have contained scrambled mouse-DNA data. Both developments will be expensive, however. According to an Affymetrix notice posted on 26 March, the company is spending $19 million on the patent settlement and an unspecified “smaller” amount for legal fees. And replacing the scrambled chips could cost up to $4 million.

“Basically the litigation between us and OGT is over—it's done,” says Rob Lipshutz, vice president of corporate development at Affymetrix.
“We are very pleased because this lets us go back to providing tools for our customers.” Southern issued a statement on behalf of OGT saying he felt it was “essential for genomic research” to resolve the dispute, because his company and others could now devote their energies to developing and licensing the technology.

As for the scrambled mouse DNA, Affymetrix first disclosed the problem in a 7 March notice to the U.S. Securities and Exchange Commission (SEC). To assemble these chips, Affymetrix used information from a public database maintained by the National Center for Biotechnology Information (NCBI) in Bethesda, Maryland. Affymetrix told the SEC it was having trouble “because of the rapidly evolving nature of the public domain sequence databases,” noting that “sequence errors may not be found prior to the commercial release of a product.” Lipshutz made clear last week, however, that the glitch occurred when company employees processed the data. “There can be conflicting data in the database,” he said. “It becomes quite a challenge to deal with potential ambiguities. … We just didn't sort it out as well as we would have liked.”

The mix-up involved the “Unigene U74” collection of mouse genes and expressed sequence tags (ESTs), Affymetrix executive Thane Kreiner explained. When company researchers began to annotate genes and ESTs that had already been placed on chips, they discovered that most appeared to be reproduced correctly, but some were reversed. A company review found that all three of the chips in the U74 set had problems. Least affected was the most valuable “A” chip, which contains the best gene information, according to Kreiner. About 75% of the sequences were usable. The “B” chip had the same error rate, but the “C” chip was 60% defective, making it unusable.

NCBI director David Lipman confirms that “there has always been some ambiguity” in the directionality of genetic data submitted to NCBI.
The information comes from many labs; they may use different methods of sequencing and report the results in different ways, he explains. It's up to the user to interpret the data with care, because differences are not always clearly flagged.

Affymetrix plans to have replacement chips ready for those who want them in a matter of weeks, says Lipshutz. He notes that a bigger improvement is on the way: The company plans to put the entire mouse genome sequence on chips, after the public-private consortium that's at work on this project finishes assembling the data (Science, 13 October 2000, p. 242). This consortium has placed more than 8 million bases of raw mouse genomic data in NCBI and other public repositories already. However, mouse researchers say the information is highly fragmented and difficult to use. Affymetrix, like every other group, would like to have a fully assembled mouse genome. Lipshutz says: “We're going to do the best assembly we can, but it's not going to have the depth or richness of the human sequence.” And he adds, “I can't say when that will be.”

10. BIOMEDICAL TRAINING

# NIH Pledges Big Hike In Postdoc Stipends

1. Jeffrey Mervis

Acknowledging that its stipends for graduate students and postdocs are too low, the National Institutes of Health (NIH) plans to raise them significantly over the next 5 years—and then keep them competitive. NIH is also throwing its weight behind efforts to curb the length of a postdoc's tenure.

The new policies are part of the agency's long-awaited response to a report last summer from the National Academy of Sciences calling for changes in how the federal government trains biomedical and behavioral scientists (Science, 8 September 2000, p. 1667). The report said that current Ph.D. production is “more than sufficient” to meet demand and that institutions should concentrate not on growth but on improving the quality of training.
In particular, it proposed reducing the number of students supported on research grants and boosting training grants to universities and individual fellowships. It also said that stipends should be much higher and that a postdoc typically should not last longer than 5 years.

The NIH response, posted on 23 March (grants.nih.gov/training/nas_report/NIHResponse.htm), pledges to raise its National Research Service Awards (NRSA) stipend levels by 10% to 12% a year, to a target of $25,000 for graduate students and $45,000 for beginning postdocs. (Current levels are $16,500 and $28,260, respectively.) In a break from current practice, NIH would also issue annual cost-of-living increases. Although NIH funds a minority of students, most universities tie their pay scales to the NRSA levels. It also said federal funding should not exceed 6 years for graduate students and 5 years for postdocs.

But NIH resisted the panel's suggestion to shift the balance toward training grants and away from research grants, saying it's unwise and unworkable. “Attempts to manipulate these mechanisms to control Ph.D. numbers would run counter to their primary purpose,” it noted.

11. NUTRITION

# The Soft Science of Dietary Fat

1. Gary Taubes

Mainstream nutritional science has demonized dietary fat, yet 50 years and hundreds of millions of dollars of research have failed to prove that eating a low-fat diet will help you live longer.

When the U.S. Surgeon General's Office set off in 1988 to write the definitive report on the dangers of dietary fat, the scientific task appeared straightforward. Four years earlier, the National Institutes of Health (NIH) had begun advising every American old enough to walk to restrict fat intake, and the president of the American Heart Association (AHA) had told Time magazine that if everyone went along, “we will have [atherosclerosis] conquered” by the year 2000.
The Surgeon General's Office itself had just published its 700-page landmark “Report on Nutrition and Health,” declaring fat the single most unwholesome component of the American diet. All of this was apparently based on sound science. So the task before the project officer was merely to gather that science together in one volume, have it reviewed by a committee of experts, which had been promptly established, and publish it.

The project did not go smoothly, however. Four project officers came and went over the next decade. “It consumed project officers,” says Marion Nestle, who helped launch the project and now runs the nutrition and food studies department at New York University (NYU). Members of the oversight committee saw drafts of an early chapter or two, criticized them vigorously, and then saw little else. Finally, in June 1999, 11 years after the project began, the Surgeon General's Office circulated a letter, authored by the last of the project officers, explaining that the report would be killed. There was no other public announcement and no press release. The letter explained that the relevant administrators “did not anticipate fully the magnitude of the additional external expertise and staff resources that would be needed.” In other words, says Nestle, the subject matter “was too complicated.” Bill Harlan, a member of the oversight committee and associate director of the Office of Disease Prevention at NIH, says “the report was initiated with a preconceived opinion of the conclusions,” but the science behind those opinions was not holding up. “Clearly the thoughts of yesterday were not going to serve us very well.”

During the past 30 years, the concept of eating healthy in America has become synonymous with avoiding dietary fat. The creation and marketing of reduced-fat food products has become big business; over 15,000 have appeared on supermarket shelves.
Indeed, an entire research industry has arisen to create palatable nonfat fat substitutes, and the food industry now spends billions of dollars yearly selling the less-fat-is-good-health message. The government weighs in as well, with the U.S. Department of Agriculture's (USDA's) booklet on dietary guidelines, published every 5 years, and its ubiquitous Food Guide Pyramid, which recommends that fats and oils be eaten “sparingly.” The low-fat gospel spreads farther by a kind of societal osmosis, continuously reinforced by physicians, nutritionists, journalists, health organizations, and consumer advocacy groups such as the Center for Science in the Public Interest, which refers to fat as this “greasy killer.” “In America, we no longer fear God or the communists, but we fear fat,” says David Kritchevsky of the Wistar Institute in Philadelphia, who in 1958 wrote the first textbook on cholesterol.

As the Surgeon General's Office discovered, however, the science of dietary fat is not nearly as simple as it once appeared. The proposition, now 50 years old, that dietary fat is a bane to health is based chiefly on the fact that fat, specifically the hard, saturated fat found primarily in meat and dairy products, elevates blood cholesterol levels. This in turn raises the likelihood that cholesterol will clog arteries, a condition known as atherosclerosis, which then increases risk of coronary artery disease, heart attack, and untimely death. By the 1970s, each individual step of this chain from fat to cholesterol to heart disease had been demonstrated beyond reasonable doubt, but the veracity of the chain as a whole has never been proven. In other words, despite decades of research, it is still a debatable proposition whether the consumption of saturated fats above recommended levels (step one in the chain) by anyone who's not already at high risk of heart disease will increase the likelihood of untimely death (outcome three).
Nor have hundreds of millions of dollars in trials managed to generate compelling evidence that healthy individuals can extend their lives by more than a few weeks, if that, by eating less fat (see sidebar on p. 2538). To put it simply, the data remain ambiguous as to whether low-fat diets will benefit healthy Americans. Worse, the ubiquitous admonishments to reduce total fat intake have encouraged a shift to high-carbohydrate diets, which may be no better—and may even be worse—than high-fat diets.

Since the early 1970s, for instance, Americans' average fat intake has dropped from over 40% of total calories to 34%; average serum cholesterol levels have dropped as well. But no compelling evidence suggests that these decreases have improved health. Although heart disease death rates have dropped—and public health officials insist low-fat diets are partly responsible—the incidence of heart disease does not seem to be declining, as would be expected if lower fat diets made a difference. This was the conclusion, for instance, of a 10-year study of heart disease mortality published in The New England Journal of Medicine in 1998, which suggested that death rates are declining largely because doctors are treating the disease more successfully. AHA statistics agree: Between 1979 and 1996, the number of medical procedures for heart disease increased from 1.2 million to 5.4 million a year. “I don't consider that this disease category has disappeared or anything close to it,” says one AHA statistician.

Meanwhile, obesity in America, which remained constant from the early 1960s through 1980, has surged upward since then—from 14% of the population to over 22%. Diabetes has increased apace. Both obesity and diabetes increase heart disease risk, which could explain why heart disease incidence is not decreasing.
That this obesity epidemic occurred just as the government began bombarding Americans with the low-fat message suggests the possibility, however distant, that low-fat diets might have unintended consequences—among them, weight gain. “Most of us would have predicted that if we can get the population to change its fat intake, with its dense calories, we would see a reduction in weight,” admits Harlan. “Instead, we see the exact opposite.”

In the face of this uncertainty, skeptics and apostates have come along repeatedly, only to see their work almost religiously ignored as the mainstream medical community sought consensus on the evils of dietary fat. For 20 years, for instance, the Harvard School of Public Health has run the Nurses' Health Study and its two sequelae—the Health Professionals Follow-Up Study and the Nurses' Health Study II—accumulating over a decade of data on the diet and health of almost 300,000 Americans. The results suggest that total fat consumed has no relation to heart disease risk; that monounsaturated fats like olive oil lower risk; and that saturated fats are little worse, if at all, than the pasta and other carbohydrates that the Food Guide Pyramid suggests be eaten copiously. (The studies also suggest that trans fatty acids are unhealthful. These are the fats in margarine, for instance, and are what many Americans started eating when they were told that the saturated fats in butter might kill them.)

Harvard epidemiologist Walter Willett, spokesperson for the Nurses' Health Study, points out that NIH has spent over $100 million on the three studies and yet not one government agency has changed its primary guidelines to fit these particular data. “Scandalous,” says Willett. “They say, ‘You really need a high level of proof to change the recommendations,' which is ironic, because they never had a high level of proof to set them.”

Indeed, the history of the national conviction that dietary fat is deadly, and its evolution from hypothesis to dogma, is one in which politicians, bureaucrats, the media, and the public have played as large a role as the scientists and the science. It's a story of what can happen when the demands of public health policy—and the demands of the public for simple advice—run up against the confusing ambiguity of real science.

## Fear of fat

During the first half of the 20th century, nutritionists were more concerned about malnutrition than about the sins of dietary excess. After World War II, however, a coronary heart disease epidemic seemed to sweep the country (see sidebar on p. 2540). “Middle-aged men, seemingly healthy, were dropping dead,” wrote biochemist Ancel Keys of the University of Minnesota, Twin Cities, who was among the first to suggest that dietary fats might be the cause. By 1952, Keys was arguing that Americans should reduce their fat intake to less than 30% of total calories, although he simultaneously recognized that “direct evidence on the effect of the diet on human arteriosclerosis is very little and likely to remain so for some time.” In the famous and very controversial Seven Countries Study, for instance, Keys and his colleagues reported that the amount of fat consumed seemed to be the salient difference between populations such as those in Japan and Crete that had little heart disease and those, as in Finland, that were plagued by it. In 1961, the Framingham Heart Study linked cholesterol levels to heart disease, Keys made the cover of Time magazine, and the AHA, under his influence, began advocating low-fat diets as a palliative for men with high cholesterol levels. Keys had also become one of the first Americans to consciously adopt a heart-healthy diet: He and his wife, Time reported, “do not eat ‘carving meat'—steaks, chops, roasts—more than three times a week.”

Nonetheless, by 1969 the state of the science could still be summarized by a single sentence from a report of the Diet-Heart Review Panel of the National Heart Institute (now the National Heart, Lung, and Blood Institute, or NHLBI): “It is not known whether dietary manipulation has any effect whatsoever on coronary heart disease.” The chair of the panel was E. H. “Pete” Ahrens, whose laboratory at Rockefeller University in New York City did much of the seminal research on fat and cholesterol metabolism.

Whereas proponents of low-fat diets were concerned primarily about the effects of dietary fat on cholesterol levels and heart disease, Ahrens and his panel—10 experts in clinical medicine, epidemiology, biostatistics, human nutrition, and metabolism—were equally concerned that eating less fat could have profound effects throughout the body, many of which could be harmful. The brain, for instance, is 70% fat, which chiefly serves to insulate neurons. Fat is also the primary component of cell membranes. Changing the proportion of saturated to unsaturated fats in the diet changes the fat composition in these membranes. This could conceivably change the membrane permeability, which controls the transport of everything from glucose, signaling proteins, and hormones to bacteria, viruses, and tumor-causing agents into and out of the cell. The relative saturation of fats in the diet could also influence cellular aging as well as the clotting ability of blood cells.

Whether the potential benefits of low-fat diets would exceed the potential risks could be settled by testing whether low-fat diets actually prolong life, but such a test would have to be enormous. The effect of diet on cholesterol levels is subtle for most individuals—especially those living in the real world rather than the metabolic wards of nutrition researchers—and the effect of cholesterol levels on heart disease is also subtle. As a result, tens of thousands of individuals would have to switch to low-fat diets and their subsequent health compared to that of equal numbers who continued eating fat to alleged excess. And all these people would have to be followed for years until enough deaths accumulated to provide statistically significant results. Ahrens and his colleagues were pessimistic about whether such a massive and expensive trial could ever be done. In 1971, an NIH task force estimated such a trial would cost $1 billion, considerably more than NIH was willing to spend. Instead, NIH administrators opted for a handful of smaller studies, two of which alone would cost $255 million. Perhaps more important, these studies would take a decade. Neither the public, the press, nor the U.S. Congress was willing to wait that long.

## Science by committee

Like the flourishing American affinity for alternative medicine, an antifat movement evolved independently of science in the 1960s. It was fed by distrust of the establishment—in this case, both the medical establishment and the food industry—and by counterculture attacks on excessive consumption, whether manifested in gas-guzzling cars or the classic American cuisine of bacon and eggs and marbled steaks. And while the data on fat and health remained ambiguous and the scientific community polarized, the deadlock was broken not by any new science, but by politicians. It was Senator George McGovern's bipartisan, nonlegislative Select Committee on Nutrition and Human Needs—and, to be precise, a handful of McGovern's staff members—that almost single-handedly changed nutritional policy in this country and initiated the process of turning the dietary fat hypothesis into dogma.

McGovern's committee was founded in 1968 with a mandate to eradicate malnutrition in America, and it instituted a series of landmark federal food assistance programs. As the malnutrition work began to peter out in the mid-1970s, however, the committee didn't disband. Rather, its general counsel, Marshall Matz, and staff director, Alan Stone, both young lawyers, decided that the committee would address “overnutrition,” the dietary excesses of Americans. It was a “casual endeavor,” says Matz. “We really were totally naïve, a bunch of kids, who just thought, ‘Hell, we should say something on this subject before we go out of business.'” McGovern and his fellow senators—all middle-aged men worried about their girth and their health—signed on; McGovern and his wife had both gone through diet-guru Nathan Pritikin's very low fat diet and exercise program. McGovern quit the program early, but Pritikin remained a major influence on his thinking.

McGovern's committee listened to 2 days of testimony on diet and disease in July 1976. Then resident wordsmith Nick Mottern, a former labor reporter for The Providence Journal, was assigned the task of researching and writing the first “Dietary Goals for the United States.” Mottern, who had no scientific background and no experience writing about science, nutrition, or health, believed his Dietary Goals would launch a “revolution in diet and agriculture in this country.” He avoided the scientific and medical controversy by relying almost exclusively on Harvard School of Public Health nutritionist Mark Hegsted for input on dietary fat. Hegsted had studied fat and cholesterol metabolism in the early 1960s, and he believed unconditionally in the benefits of restricting fat intake, although he says he was aware that his was an extreme opinion. With Hegsted as his muse, Mottern saw dietary fat as the nutritional equivalent of cigarettes, and the food industry as akin to the tobacco industry in its willingness to suppress scientific truth in the interests of profits. To Mottern, those scientists who spoke out against fat were those willing to take on the industry. “It took a certain amount of guts,” he says, “to speak about this because of the financial interests involved.”

Mottern's report suggested that Americans cut their total fat intake to 30% of the calories they consume and saturated fat intake to 10%, in accord with AHA recommendations for men at high risk of heart disease. The report acknowledged the existence of controversy but insisted Americans had nothing to lose by following its advice. “The question to be asked is not why should we change our diet but why not?” wrote Hegsted in the introduction. “There are [no risks] that can be identified and important benefits can be expected.” This was an optimistic but still debatable position, and when Dietary Goals was released in January 1977, “all hell broke loose,” recalls Hegsted. “Practically nobody was in favor of the McGovern recommendations. Damn few people.”

McGovern responded with three follow-up hearings, which aptly foreshadowed the next 7 years of controversy. Among those testifying, for instance, was NHLBI director Robert Levy, who explained that no one knew if eating less fat or lowering blood cholesterol levels would prevent heart attacks, which was why NHLBI was spending $300 million to study the question. Levy's position was awkward, he recalls, because “the good senators came out with the guidelines and then called us in to get advice.” He was joined by prominent scientists, including Ahrens, who testified that advising Americans to eat less fat on the strength of such marginal evidence was equivalent to conducting a nutritional experiment with the American public as subjects. Even the American Medical Association protested, suggesting that the diet proposed by the guidelines raised the “potential for harmful effects.”

But as these scientists testified, so did representatives from the dairy, egg, and cattle industries, who also vigorously opposed the guidelines for obvious reasons. This juxtaposition served to taint the scientific criticisms: Any scientists arguing against the committee's guidelines appeared to be either hopelessly behind the paradigm, which was Hegsted's view, or industry apologists, which was Mottern's, if not both.

Although the committee published a revised edition of the Dietary Goals later in the year, the thrust of the recommendations remained unchanged. It did give in to industry pressure by softening the suggestion that Americans eat less meat. Mottern says he considered even that a “disservice to the public,” refused to do the revisions, and quit the committee. (Mottern became a vegetarian while writing the Dietary Goals and now runs a food co-op in Peekskill, New York.)

The guidelines might have then died a quiet death when McGovern's committee came to an end in late 1977 if two federal agencies had not felt it imperative to respond.
Although they took contradictory points of view, one message—with media assistance—won out. The first was the USDA, where consumer-activist Carol Tucker Foreman had recently been appointed an assistant secretary. Foreman believed it was incumbent on USDA to turn McGovern's recommendations into official policy, and, like Mottern, she was not deterred by the existence of scientific controversy. “Tell us what you know and tell us it's not the final answer,” she would tell scientists. “I have to eat and feed my children three times a day, and I want you to tell me what your best sense of the data is right now.” Of course, given the controversy, the “best sense of the data” would depend on which scientists were asked. The Food and Nutrition Board of the National Academy of Sciences (NAS), which decides the Recommended Dietary Allowances, would have been a natural choice, but NAS president Philip Handler, an expert on metabolism, had told Foreman that Mottern's Dietary Goals were “nonsense.” Foreman then turned to McGovern's staffers for advice and they recommended she hire Hegsted, which she did. Hegsted, in turn, relied on a state-of-the-science report published by an expert but very divergent committee of the American Society for Clinical Nutrition. “They were nowhere near unanimous on anything,” says Hegsted, “but the majority supported something like the McGovern committee report.” The resulting document became the first edition of “Using the Dietary Guidelines for Americans.” Although it acknowledged the existence of controversy and suggested that a single dietary recommendation might not suit an entire diverse population, the advice to avoid fat and saturated fat was, indeed, virtually identical to McGovern's Dietary Goals. 
Three months later, the NAS Food and Nutrition Board released its own guidelines: “Toward Healthful Diets.” The board, consisting of a dozen nutrition experts, concluded that the only reliable advice for healthy Americans was to watch their weight; everything else, dietary fat included, would take care of itself. The advice was not taken kindly, however, at least not by the media. The first reports—“rather incredulously,” said Handler at the time—criticized the NAS advice for conflicting with the USDA's and McGovern's and thus somehow being irresponsible. Follow-up reports suggested that the board members, in the words of Jane Brody, who covered the story for The New York Times, were “all in the pocket of the industries being hurt.” To be precise, the board chair and one of its members consulted for food industries, and funding for the board itself came from industry donations. These industry connections were leaked to the press from the USDA. Hegsted now defends the NAS board, although he didn't at the time, and calls this kind of conflict of interest “a hell of an issue.” “Everybody used to complain that industry didn't do anything on nutrition,” he told Science, “yet anybody who got involved was blackballed because their positions were presumably influenced by the industry.” (In 1981, Hegsted returned to Harvard, where his research was funded by Frito-Lay.) The press had mixed feelings, claiming that the connections “soiled” the academy's reputation “for tendering careful scientific advice” (The Washington Post), demonstrated that the board's “objectivity and aptitude are in doubt” (The New York Times), or represented in the board's guidelines a “blow against the food faddists who hold the public in thrall” (Science). In any case, the NAS board had been publicly discredited. Hegsted's Dietary Guidelines for Americans became the official U.S. policy on dietary fat: Eat less fat. Live longer. 
## Creating “consensus”

Once politicians, the press, and the public had decided dietary fat policy, the science was left to catch up. In the early 1970s, when NIH opted to forgo a $1 billion trial that might be definitive and instead fund a half-dozen studies at one-third the cost, everyone hoped these smaller trials would be sufficiently persuasive to conclude that low-fat diets prolong lives. The results were published between 1980 and 1984. Four of these trials—comparing heart disease rates and diet within Honolulu, Puerto Rico, Chicago, and Framingham—showed no evidence that men who ate less fat lived longer or had fewer heart attacks. A fifth trial, the Multiple Risk Factor Intervention Trial (MRFIT), cost $115 million and tried to amplify the subtle influences of diet on health by persuading subjects to avoid fat while simultaneously quitting smoking and taking medication for high blood pressure. That trial suggested, if anything, that eating less fat might shorten life. In each study, however, the investigators concluded that methodological flaws had led to the negative results. They did not, at least publicly, consider their results reason to lessen their belief in the evils of fat.

The sixth study was the $140 million Lipid Research Clinics (LRC) Coronary Primary Prevention Trial, led by NHLBI administrator Basil Rifkind and biochemist Daniel Steinberg of the University of California, San Diego. The LRC trial was a drug trial, not a diet trial, but the NHLBI heralded its outcome as the end of the dietary fat debate. In January 1984, LRC investigators reported that a medication called cholestyramine reduced cholesterol levels in men with abnormally high cholesterol levels and modestly reduced heart disease rates in the process. (The probability of suffering a heart attack during the seven-plus years of the study was reduced from 8.6% in the placebo group to 7.0%; the probability of dying from a heart attack dropped from 2.0% to 1.6%.)
The investigators then concluded, without benefit of dietary data, that cholestyramine's benefits could be extended to diet as well. And although the trial tested only middle-aged men with cholesterol levels higher than those of 95% of the population, they concluded that those benefits “could and should be extended to other age groups and women and … other more modest elevations of cholesterol levels.”
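The gap between those absolute figures and the way they were later heralded is easy to see with a few lines of arithmetic. The sketch below simply recasts the probabilities quoted above as absolute and relative risk reductions; the function name is illustrative, not anything from the trial itself.

```python
# LRC event probabilities over the seven-plus years of the trial,
# as reported in the article.
def risk_reductions(placebo: float, treated: float):
    absolute = placebo - treated              # drop in percentage points
    relative = (placebo - treated) / placebo  # proportional drop
    return absolute, relative

abs_ha, rel_ha = risk_reductions(0.086, 0.070)  # suffering a heart attack
abs_d, rel_d = risk_reductions(0.020, 0.016)    # dying from a heart attack

print(f"heart attacks:  {abs_ha * 100:.1f} points absolute, {rel_ha:.0%} relative")
print(f"cardiac deaths: {abs_d * 100:.1f} points absolute, {rel_d:.0%} relative")
```

The same result reads as a 1.6-point drop or as a roughly 19% reduction; which framing gets quoted does much of the rhetorical work.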

Why go so far? Rifkind says their logic was simple: For 20 years, he and his colleagues had argued that lowering cholesterol levels prevented heart attacks. They had spent enormous sums trying to prove it. They felt they could never actually demonstrate that low-fat diets prolonged lives—that would be too expensive, and MRFIT had failed—but now they had established a fundamental link in the causal chain, from lower cholesterol levels to cardiovascular health. With that, they could take the leap of faith from cholesterol-lowering drugs and health to cholesterol-lowering diet and health. And after all their effort, they were eager—not to mention urged by Congress—to render helpful advice. “There comes a point when, if you don't make a decision, the consequences can be great as well,” says Rifkind. “If you just allow Americans to keep on consuming 40% of calories from fat, there's an outcome to that as well.”

With the LRC results in press, the NHLBI launched what Levy called “a massive public health campaign.” The media obligingly went along. Time, for instance, reported the LRC findings under the headline “Sorry, It's True. Cholesterol really is a killer.” The article about a drug trial began: “No whole milk. No butter. No fatty meats …” Time followed up 3 months later with a cover story: “Cholesterol: And Now the Bad News …” The cover photo was a frowning face: a breakfast plate with two fried eggs as the eyes and a bacon strip for the mouth. Rifkind was quoted saying that their results “strongly indicate that the more you lower cholesterol and fat in your diet, the more you reduce your risk of heart disease,” a statement that still lacked direct scientific support.

The following December, NIH effectively ended the debate with a “Consensus Conference.” The idea of such a conference is that an expert panel, ideally unbiased, listens to 2 days of testimony and arrives at a conclusion with which everyone agrees. In this case, Rifkind chaired the planning committee, which chose his LRC co-investigator Steinberg to lead the expert panel. The 20 speakers did include a handful of skeptics—including Ahrens, for instance, and cardiologist Michael Oliver of Imperial College in London—who argued that it was unscientific to equate the effects of a drug with the effects of a diet. Steinberg's panel members, however, as Oliver later complained in The Lancet, “were selected to include only experts who would, predictably, say that all levels of blood cholesterol in the United States are too high and should be lowered. And, of course, this is exactly what was said.” Indeed, the conference report, written by Steinberg and his panel, revealed no evidence of discord. There was “no doubt,” it concluded, that low-fat diets “will afford significant protection against coronary heart disease” to every American over 2 years old. The Consensus Conference officially gave the appearance of unanimity where none existed. After all, if there had been a true consensus, as Steinberg himself told Science, “you wouldn't have had to have a consensus conference.”

## The test of time

To the outside observer, the challenge in making sense of any such long-running scientific controversy is to establish whether the skeptics are simply on the wrong side of the new paradigm, or whether their skepticism is well founded. In other words, is the science at issue based on sound scientific thinking and unambiguous data, or is it what Sir Francis Bacon, for instance, would have called “wishful science,” based on fancies, opinions, and the exclusion of contrary evidence? Bacon offered one viable suggestion for differentiating the two: the test of time. Good science is rooted in reality, so it grows and develops and the evidence gets increasingly more compelling, whereas wishful science flourishes most under its first authors before “going downhill.”

Such is the case, for instance, with the proposition that dietary fat causes cancer, which was an integral part of dietary fat anxiety in the late 1970s. By 1982, the evidence supporting this idea was thought to be so undeniable that a landmark NAS report on nutrition and cancer equated those researchers who remained skeptical with “certain interested parties [who] formerly argued that the association between lung cancer and smoking was not causational.” Fifteen years and hundreds of millions of research dollars later, a similarly massive expert report by the World Cancer Research Fund and the American Institute for Cancer Research could find neither “convincing” nor even “probable” reason to believe that dietary fat caused cancer.

The hypothesis that low-fat diets are the requisite route to weight loss has taken a similar downward path. This was the ultimate fallback position in all low-fat recommendations: Fat has nine calories per gram compared to four calories for carbohydrates and protein, and so cutting fat from the diet surely would cut pounds. “This is held almost to be a religious truth,” says Harvard's Willett. Considerable data, however, now suggest otherwise. The results of well-controlled clinical trials are consistent: People on low-fat diets initially lose a couple of kilograms, as they would on any diet, and then the weight tends to return. After 1 to 2 years, little has been achieved. Consider, for instance, the 50,000 women enrolled in the ongoing $100 million Women's Health Initiative (WHI). Half of these women have been extensively counseled to consume only 20% of their calories from fat. After 3 years on this near-draconian regime, say WHI sources, the women had lost, on average, a kilogram each.

The link between dietary fat and heart disease is more complicated, because the hypothesis has diverged into two distinct propositions: first, that lowering cholesterol prevents heart disease; second, that eating less fat not only lowers cholesterol and prevents heart disease but prolongs life. Since 1984, the evidence that cholesterol-lowering drugs are beneficial—proposition number one—has indeed blossomed, at least for those at high risk of heart attack. These drugs reduce serum cholesterol levels dramatically, and they prevent heart attacks, perhaps by other means as well. Their market has now reached $4 billion a year in the United States alone, and every new trial seems to confirm their benefits.

The evidence supporting the second proposition, that eating less fat makes for a healthier and longer life, however, has remained stubbornly ambiguous. If anything, it has only become less compelling over time. Indeed, since Ancel Keys started advocating low-fat diets almost 50 years ago, the science of fat and cholesterol has evolved from a simple story into a very complicated one. The catch has been that few involved in this business were prepared to deal with a complicated story. Researchers initially preferred to believe it was simple—that a single unwholesome nutrient, in effect, could be isolated from the diverse richness of human diets; public health administrators required a simple story to give to Congress and the public; and the press needed a simple story—at least on any particular day—to give to editors and readers in 30 column inches. But as contrarian data continued to accumulate, the complications became increasingly more difficult to ignore or exclude, and the press began waffling or adding caveats. The scientists then got the blame for not sticking to the original simple story, which had, regrettably, never existed.

## More fats, fewer answers

The original simple story in the 1950s was that high cholesterol levels increase heart disease risk. The seminal Framingham Heart Study, for instance, which revealed the association between cholesterol and heart disease, originally measured only total serum cholesterol. But cholesterol shuttles through the blood in an array of packages. Low-density lipoprotein particles (LDL, the “bad” cholesterol) deliver fat and cholesterol from the liver to tissues that need it, including the arterial cells, where it can lead to atherosclerotic plaques. High-density lipoproteins (HDLs, the “good” cholesterol) return cholesterol to the liver. The higher the HDL, the lower the heart disease risk. Then there are triglycerides, which contain fatty acids, and very low density lipoproteins (VLDLs), which transport triglycerides.

All of these particles have some effect on heart disease risk, while the fats, carbohydrates, and protein in the diet have varying effects on all these particles. The 1950s story was that saturated fats increase total cholesterol, polyunsaturated fats decrease it, and monounsaturated fats are neutral. By the late 1970s—when researchers accepted the benefits of HDL—they realized that monounsaturated fats are not neutral. Rather, they raise HDL, at least compared to carbohydrates, and lower LDL. This makes them an ideal nutrient as far as cholesterol goes. Furthermore, saturated fats cannot be quite so evil because, while they elevate LDL, which is bad, they also elevate HDL, which is good. And some saturated fats—stearic acid, in particular, the fat in chocolate—are at worst neutral. Stearic acid raises HDL levels but does little or nothing to LDL. And then there are trans fatty acids, which raise LDL, just like saturated fat, but also lower HDL. Today, none of this is controversial, although it has yet to be reflected in any Food Guide Pyramid.

To understand where this complexity can lead in a simple example, consider a steak—to be precise, a porterhouse, select cut, with a half-centimeter layer of fat, the nutritional constituents of which can be found in the Nutrient Database for Standard Reference at the USDA Web site. After broiling, this porterhouse reduces to a serving of almost equal parts fat and protein. Fifty-one percent of the fat is monounsaturated, of which virtually all (90%) is oleic acid, the same healthy fat that's in olive oil. Saturated fat constitutes 45% of the total fat, but a third of that is stearic acid, which is, at the very least, harmless. The remaining 4% of the fat is polyunsaturated, which also improves cholesterol levels. In sum, well over half—and perhaps as much as 70%—of the fat content of a porterhouse will improve cholesterol levels compared to what they would be if bread, potatoes, or pasta were consumed instead. The remaining 30% will raise LDL but will also raise HDL. All of this suggests that eating a porterhouse steak rather than carbohydrates might actually improve heart disease risk, although no nutritional authority who hasn't written a high-fat diet book will say this publicly.
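The tally in that paragraph can be checked directly. This is only a sketch of the article's own arithmetic, using the fat shares it quotes for the broiled porterhouse; the variable names and groupings are illustrative.

```python
# Fat composition of the broiled porterhouse, per the article's figures.
mono = 0.51   # monounsaturated; ~90% of it oleic acid, as in olive oil
sat = 0.45    # saturated; about a third of it stearic acid
poly = 0.04   # polyunsaturated

stearic = sat / 3                   # at worst neutral for cholesterol
benign_or_better = mono + poly + stearic
remainder = sat - stearic           # raises LDL, but also raises HDL

print(f"improves or leaves cholesterol unchanged: ~{benign_or_better:.0%}")
print(f"raises LDL (and HDL): ~{remainder:.0%}")
```

The sum lands on the article's figures: roughly 70% of the steak's fat is neutral or better for cholesterol levels, and the remaining 30% raises both LDL and HDL.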

As for the scientific studies, in the years since the 1984 consensus conference, the one thing they have not done is pile up evidence in support of the low-fat-for-all approach to the public good. If anything, they have added weight to Ahrens's fears that there may be a downside to populationwide low-fat recommendations. In 1986, for instance, just 1 year after NIH launched the National Cholesterol Education Program, also advising low-fat diets for everyone over 2 years old, epidemiologist David Jacobs of the University of Minnesota, Twin Cities, visited Japan. There he learned that Japanese physicians were advising patients to raise their cholesterol levels, because low cholesterol levels were linked to hemorrhagic stroke. At the time, Japanese men were dying from stroke almost as frequently as American men were succumbing to heart disease. Back in Minnesota, Jacobs looked for this low-cholesterol-stroke relationship in the MRFIT data and found it there, too. And the relationship transcended stroke: Men with very low cholesterol levels seemed prone to premature death; below 160 milligrams per deciliter (mg/dl), the lower the cholesterol level, the shorter the life.

Jacobs reported his results to NHLBI, which in 1990 hosted a conference to discuss the issue, bringing together researchers from 19 studies around the world. The data were consistent: When investigators tracked all deaths, instead of just heart disease deaths, the cholesterol curves were U-shaped for men and flat for women. In other words, men with cholesterol levels above 240 mg/dl tended to die prematurely from heart disease. But below 160 mg/dl, the men tended to die prematurely from cancer, respiratory and digestive diseases, and trauma. As for women, if anything, the higher their cholesterol, the longer they lived (see graph on p. 2540).

These mortality data can be interpreted in two ways. One, preferred by low-fat advocates, is that they cannot be meaningful. Rifkind, for instance, told Science that the excess deaths at low cholesterol levels must be due to preexisting conditions. In other words, chronic illness leads to low cholesterol levels, not vice versa. He pointed to the 1990 conference report as the definitive document on the issue and as support for his argument, although the report states unequivocally that this interpretation is not supported by the data.

The other interpretation is that what a low-fat diet does to serum cholesterol levels, and what that in turn does to arteries, may be only one component of the diet's effect on health. In other words, while low-fat diets might help prevent heart disease, they might also raise susceptibility to other conditions. This is what always worried Ahrens. It's also one reason why the American College of Physicians, for instance, now suggests that cholesterol reduction is certainly worthwhile for those at high, short-term risk of dying of coronary heart disease but of “much smaller or … uncertain” benefit for everyone else.

This interpretation—that the connection between diet and health far transcends cholesterol—is also supported by the single most dramatic diet-heart trial ever conducted: the Lyon Diet Heart Study, led by Michel de Lorgeril of the French National Institute of Health and Medical Research (INSERM) and published in Circulation in February 1999. The investigators randomized 605 heart attack survivors, all on cholesterol-lowering drugs, into two groups. They counseled one to eat an AHA “prudent diet,” very similar to that recommended for all Americans. They counseled the other to eat a Mediterranean-type diet, with more bread, cereals, legumes, beans, vegetables, fruits, and fish and less meat. Total fat and types of fat differed markedly in the two diets, but the HDL, LDL, and total cholesterol levels in the two groups remained virtually identical. Nonetheless, over 4 years of follow-up, the Mediterranean-diet group had only 14 cardiac deaths and nonfatal heart attacks compared to 44 for the “Western-type” diet group. The likely explanation, wrote de Lorgeril and his colleagues, is that the “protective effects [of the Mediterranean diet] were not related to serum concentrations of total, LDL or HDL cholesterol.”
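The size of the Lyon effect is worth making explicit. The sketch below uses the event counts quoted above; the article gives only the 605-patient total, so the near-even split between the two groups is an assumption here, not a figure from the trial.

```python
# Cardiac deaths plus nonfatal heart attacks over 4 years of follow-up,
# as quoted in the article. Group sizes are ASSUMED: a near-even split
# of the 605 randomized heart attack survivors.
events_med, n_med = 14, 303     # Mediterranean-type diet (assumed n)
events_west, n_west = 44, 302   # AHA "prudent" Western-type diet (assumed n)

rate_ratio = (events_med / n_med) / (events_west / n_west)
print(f"event rate ratio ≈ {rate_ratio:.2f}, "
      f"i.e. roughly {1 - rate_ratio:.0%} fewer events on the Mediterranean diet")
```

Under that assumption the Mediterranean-diet group suffered roughly a third as many events, with no difference in cholesterol levels to explain it.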

Many researchers find the Lyon data so perplexing that they're left questioning the methodology of the trial. Nonetheless, says NIH's Harlan, the data “are very provocative. They do bring up the issue of whether if we look only at cholesterol levels we aren't going to miss something very important.” De Lorgeril believes the diet's protective effect comes primarily from omega-3 fatty acids, found in seed oils, meat, cereals, green leafy vegetables, and fish, and from antioxidant compounds, including vitamins, trace elements, and flavonoids. He told Science that most researchers and journalists in the field are prisoners of the “cholesterol paradigm.” Although dietary fat and serum cholesterol “are obviously connected,” he says, “the connection is not a robust one” when it comes to heart disease.

One inescapable reality is that death is a trade-off, and so is diet. “You have to eat something,” says epidemiologist Hugh Tunstall Pedoe of the University of Dundee, U.K., spokesperson for the 21-nation Monitoring Cardiovascular Disease Project run by the World Health Organization. “If you eat more of one thing, you eat a lot less of something else. So for every theory saying this disease is caused by an excess in x, you can produce an alternative theory saying it's a deficiency in y.” It would be simple if, say, saturated fats could be cut from the diet and the calories with it, but that's not the case. Despite all expectations to the contrary, people tend to consume the same number of calories regardless of the diet they try. If they eat less total fat, for instance, they will eat more carbohydrates and probably less protein, because most protein comes in foods like meat that also have considerable amounts of fat.

This plus-minus problem suggests a different interpretation for virtually every diet study ever done, including, for instance, the kind of metabolic-ward studies that originally demonstrated the ability of saturated fats to raise cholesterol. If researchers reduce the amount of saturated fat in the test diet, they have to make up the calories elsewhere. Do they add polyunsaturated fats, for instance, or add carbohydrates? A single carbohydrate or mixed carbohydrates? Do they add green leafy vegetables, or do they add pasta? And so it goes. “The sky's the limit,” says nutritionist Alice Lichtenstein of Tufts University in Boston. “There are a million perturbations.”

These trade-offs also confound the kind of epidemiological studies that demonized saturated fat from the 1950s onward. In particular, individuals who eat copious amounts of meat and dairy products, and plenty of saturated fats in the process, tend not to eat copious amounts of vegetables and fruits. The same holds for entire populations. The eastern Finns, for instance, whose lofty heart disease rates convinced Ancel Keys and a generation of researchers of the evils of fat, live within 500 kilometers of the Arctic Circle and rarely see fresh produce or a green vegetable. The Scots, infamous for eating perhaps the least wholesome diet in the developed world, are in a similar fix. Basil Rifkind recalls being laughed at once on this point when he lectured to Scottish physicians on healthy diets: “One said, ‘You talk about increasing fruits and vegetable consumption, but in the area I work in there's not a single grocery store.'” In both cases, researchers joke that the only green leafy vegetable these populations consume regularly is tobacco. As for the purported benefits of the widely hailed Mediterranean diet, is it the fish, the olive oil, or the fresh vegetables? After all, says Harvard epidemiologist Dimitrios Trichopoulos, a native of Greece, the olive oil is used either to cook vegetables or as dressing over salads. “The quantity of vegetables consumed is almost a pound [half a kilogram] a day,” he says, “and you cannot eat it without olive oil. And we eat a lot of legumes, and we cannot eat legumes without olive oil.”

The other salient trade-off in the plus-minus problem of human diets is carbohydrates. When the federal government began pushing low-fat diets, the scientists and administrators, and virtually everyone else involved, hoped that Americans would replace fat calories with fruits and vegetables and legumes, but it didn't happen. If nothing else, economics worked against it. The food industry has little incentive to advertise nonproprietary items: broccoli, for instance. Instead, says NYU's Nestle, the great bulk of the $30-billion-plus spent yearly on food advertising goes to selling carbohydrates in the guise of fast food, sodas, snacks, and candy bars. And carbohydrates are all too often what Americans eat.

Carbohydrates are what Harvard's Willett calls the flip side of the calorie trade-off problem. Because it is exceedingly difficult to add pure protein to a diet in any quantity, a low-fat diet is, by definition, a high-carbohydrate diet—just as a low-fat cookie or low-fat yogurt are, by definition, high in carbohydrates. Numerous studies now suggest that high-carbohydrate diets can raise triglyceride levels, create small, dense LDL particles, and reduce HDL—a combination, along with a condition known as “insulin resistance,” that Stanford endocrinologist Gerald Reaven has labeled “syndrome X.” Thirty percent of adult males and 10% to 15% of postmenopausal women have this particular syndrome X profile, which is associated with a several-fold increase in heart disease risk, says Reaven, even among those patients whose LDL levels appear otherwise normal. Reaven and Ron Krauss, who studies fats and lipids at Lawrence Berkeley National Laboratory in California, have shown that when men eat high-carbohydrate diets their cholesterol profiles may shift from normal to syndrome X. In other words, the more carbohydrates replace saturated fats, the more likely the end result will be syndrome X and an increased heart disease risk. “The problem is so clear right now it's almost a joke,” says Reaven. How this balances out is the unknown. “It's a bitch of a question,” says Marc Hellerstein, a nutritional biochemist at the University of California, Berkeley, “maybe the great public health nutrition question of our era.”

The other worrisome aspect of the carbohydrate trade-off is the possibility that, for some individuals, at least, it might actually be easier to gain weight on low-fat/high-carbohydrate regimens than on higher fat diets. One of the many factors that influence hunger is the glycemic index, which measures how fast carbohydrates are broken down into simple sugars and moved into the bloodstream. Foods with the highest glycemic index are simple sugars and processed grain products like pasta and white rice, which cause a rapid rise in blood sugar after a meal. Fruits, vegetables, legumes, and even unprocessed starches—pasta al dente, for instance—cause a much slower rise in blood sugar. Researchers have hypothesized that eating high-glycemic index foods increases hunger later because insulin overreacts to the spike in blood sugar. “The high insulin levels cause the nutrients from the meal to get absorbed and very avidly stored away, and once they are, the body can't access them,” says David Ludwig, director of the obesity clinic at Children's Hospital Boston. “The body appears to run out of fuel.” A few hours after eating, hunger returns.

If the theory is correct, calories from the kind of processed carbohydrates that have become the staple of the American diet are not the same as calories from fat, protein, or complex carbohydrates when it comes to controlling weight. “They may cause a hormonal change that stimulates hunger and leads to overeating,” says Ludwig, “especially in environments where food is abundant. …”

In 1979, 2 years after McGovern's committee released its Dietary Goals, Ahrens wrote to The Lancet describing what he had learned over 30 years of studying fat and cholesterol metabolism: “It is absolutely certain that no one can reliably predict whether a change in dietary regimens will have any effect whatsoever on the incidence of new events of [coronary heart disease], nor in whom.” Today, many nutrition researchers, acknowledging the complexity of the situation, find themselves siding with Ahrens. Krauss, for instance, who chairs the AHA Dietary Guidelines Committee, now calls it “scientifically naïve” to expect that a single dietary regime can be beneficial for everybody: “The ‘goodness' or ‘badness' of anything as complex as dietary fat and its subtypes will ultimately depend on the context of the individual.”

Given the proven success and low cost of cholesterol-lowering drugs, most physicians now prescribe drug treatment for patients at high risk of heart disease. The drugs reduce LDL cholesterol levels by as much as 30%. Diet rarely drops LDL by more than 10%, which is effectively trivial for healthy individuals, although it may be worth the effort for those at high risk of heart disease whose cholesterol levels respond well to it.

The logic underlying populationwide recommendations such as the latest USDA Dietary Guidelines is that limiting saturated fat intake—even if it does little or nothing to extend the lives of healthy individuals and even if not all saturated fats are equally bad—might still delay tens of thousands of deaths each year throughout the entire country. Limiting total fat consumption is considered reasonable advice because it's simple and easy to understand, and it may limit calorie intake. Whether it's scientifically justifiable may simply not be relevant. “When you don't have any real good answers in this business,” says Krauss, “you have to accept a few not so good ones as the next best thing.”

# What If Americans Ate Less Saturated Fat?

1. Gary Taubes

Eat less saturated fat, live longer. For 30 years, this has stood as one cornerstone of nutritional advice given to Americans (see main text). But how much longer? Between 1987 and 1992, three independent research groups used computer models to work out the answer. All three analyses agreed, but their conclusions have been buried in the literature, rarely if ever cited.

All three models estimated how much longer people might expect to live, on average, if only 10% of their calories came from saturated fat as recommended. In the process their total fat intake would drop to the recommended 30% of calories. All three models assumed that LDL cholesterol—the “bad cholesterol”—levels would drop accordingly and that this diet would have no adverse effects, although that was optimistic at the time and has become considerably more so since then. All three combined national vital statistics data with cholesterol risk factor data from the Framingham Heart Study.

The first study came out of Harvard Medical School and was published in the Annals of Internal Medicine in April 1987. Led by William Taylor, it concluded that individuals with a high risk of heart disease—smokers, for instance, with high blood pressure—could expect to gain, on average, one extra year by shunning saturated fat. Healthy nonsmokers, however, might add 3 days to 3 months. “Although there are undoubtedly persons who would choose to participate in a lifelong regimen of dietary change to achieve results of this magnitude, we suspect that some might not,” wrote Taylor and his colleagues.

The following year, the U.S. Surgeon General's Office funded a study at the University of California, San Francisco, with the expectation that its results would counterbalance those of the Harvard analysis. Led by epidemiologist Warren Browner, this study concluded that cutting fat consumption in America would delay 42,000 deaths each year, but the net increase in life expectancy would average out to only 3 to 4 months. The key word was “delay,” for death, like diet, is a trade-off: Everyone has to die of something. “Deaths are not prevented, they are merely delayed,” Browner later wrote. “The ‘saved’ people mainly die of the same things everyone else dies of; they do so a little later in life.” To be precise, a woman who might otherwise die at 65 could expect to live two extra weeks after a lifetime of avoiding saturated fat. If she lived to be 90, she could expect 10 additional weeks. The third study, from researchers at McGill University in Montreal, came to virtually identical conclusions.

Browner reported his results to the Surgeon General's Office, then submitted a paper to The Journal of the American Medical Association (JAMA). Meanwhile, the Surgeon General's Office—his source of funding—contacted JAMA and tried to prevent publication, claiming that the analysis was deeply flawed. JAMA reviewers disagreed and published his article, entitled “What If Americans Ate Less Fat?” in June 1991. As for Browner, he was left protecting his work from his own funding agents. “Shooting the messenger,” he wrote to the Surgeon General's Office, “or creating a smoke screen—does not change those estimates.”

# The Epidemic That Wasn't?

1. Gary Taubes

For half a century, nutritionists have pointed to soaring death rates as the genesis of their research into dietary fat and heart disease and as reason to advise Americans to eat less fat (see main text). “We had an epidemic of heart disease after World War II,” obesity expert Jules Hirsch of Rockefeller University in New York City said just 3 months ago in The New York Times. “The rates were growing higher and higher, and people became suddenly aware of that, and that diet was a factor.”

To proponents of the antifat message, this heart disease epidemic has always been an indisputable reality. Yet, to the statisticians at the mortality branch of the National Center for Health Statistics (NCHS), the source of all the relevant statistics, the epidemic was illusory. In their view, heart disease deaths have been steadily declining since the late 1940s.

According to Harry Rosenberg, director of the NCHS mortality branch since 1977, the key factor in the apparent epidemic, paradoxically, was a healthier American population. By the 1950s, premature deaths from infectious diseases and nutritional deficiencies had been all but eliminated, which left more Americans living long enough to die of chronic diseases such as heart disease. In other words, the actual risk of dying from a heart attack at any particular age remained unchanged: Rather, the rising number of 50-year-olds dropping dead of heart attacks was primarily due to the rising number of 50-year-olds.

The secondary factor was an increase from 1948 to 1968 in the probability that a death would be classified on a death certificate as arteriosclerotic disease or coronary heart disease. This increase, however, was an artifact of new diagnostic technologies—the wider use of electrocardiograms, for instance—and the changing terminology of death certificates. In 1949, the International Classification of Diseases (ICD) added a new category, “arteriosclerotic heart disease,” under the more general rubric “diseases of the heart.” The result, as a 1958 report to the American Heart Association noted, was dramatic: “In one year, 1948 to 1949, the effect of this revision was to raise coronary disease death rates by about 20% for white males and about 35% for white females.” In 1965, the ICD added a category for coronary heart disease, which added yet more deaths and capped off the apparent epidemic.

To Rosenberg and others at NCHS, the most likely explanation for the postwar upsurge in coronary heart disease deaths is that physicians slowly caught on to the new terminology and changed the wording on death certificates. “There is absolutely no evidence that there was an epidemic,” says Rosenberg.