News this Week

Science  22 Oct 1999:
Vol. 286, Issue 5440, p. 650
  1. NEUROSCIENCE

    Enzymes Point Way to Potential Alzheimer's Therapies

    1. Elizabeth Pennisi

    One way to fight a war is to target the factories where enemy arms are made. New findings, one reported in this issue of Science and others due out soon, could make that a fruitful strategy in one of the most frustrating of medical science's battles, the war against Alzheimer's disease. Researchers are now isolating the enzymes that make β-amyloid, a small protein that builds up in the brains of Alzheimer's patients, where it may kill neurons and thereby drive the relentless neurological degeneration of the disease. If amyloid is the destructive agent in Alzheimer's, drugs that target its production could slow or even reverse the disease's course.

    The hope is not new. Over the last 5 years, at least a dozen teams have described candidates for an elusive enzyme called β-secretase, which is needed to free one end of β-amyloid from its larger precursor protein, known as APP (for amyloid precursor protein). No candidates have swayed the Alzheimer's research community, and many have already failed to stand the test of time. Now, researchers have new candidates for the β-secretase, one of which is described on page 735 by molecular biologists Martin Citron and Robert Vassar and their colleagues at the biotech company Amgen Inc. in Thousand Oaks, California. This one “is incontrovertible,” comments Sangram Sisodia, a β-amyloid specialist at the University of Chicago. “I was floored by the data.”

    A second β-secretase candidate is due to be reported at this week's annual meeting of the Society for Neuroscience in Miami. Other groups have also identified candidates for the enzyme, known as γ-secretase, that clips the other end of β-amyloid. And even before these candidates were in hand, several companies found compounds that inhibit APP cleavage, which they hope can halt formation of the β-amyloid-containing plaques seen in Alzheimer's brains.

    Whether this strategy can stop Alzheimer's disease in its tracks remains a big if, however. While there's good evidence that β-amyloid deposition in plaques causes at least some Alzheimer's cases—for one thing, some relatively rare hereditary cases of the disease have been traced to mutations in APP—the case is not airtight. But at the very least, the new secretase enzymes could help settle the dispute, if a β- or γ-secretase inhibitor does in fact help Alzheimer's patients. “Finding the β-secretase enzyme is a very important development, not only for drug development but also for the clues it will give us about [amyloid] biology,” says John Durkin, a biochemist at Cephalon in West Chester, Pennsylvania.

    Hunting for a specific protein-splitting enzyme is extremely difficult, since cells are loaded with hundreds of these proteases. So for their search, the Amgen team decided not to try to isolate the β-secretase protein directly, but instead to look for its gene. The researchers began by introducing large pools of cloned genes into cultured cells that make β-amyloid and then looked to see whether any of these active genes boosted the cells' β-amyloid production. Some genes did, so after eliminating any in which the upturn could be linked to increased APP production, the researchers repeated the experiments until they homed in on a single gene that raised β-amyloid production. The protein encoded by this gene looked extremely promising as a β-secretase candidate.

    For one thing, the sequence of the protein indicates that it is in fact a protein-splitting enzyme. But more important, the protein, which the researchers called beta-site APP-cleaving enzyme (BACE), proved to have all the properties expected in a β-secretase. For example, antibody studies showed that it is located in the Golgi apparatus and endosomes, two structures in which APP is known to be cleaved.

    In addition, the Amgen team showed not only that the purified enzyme cleaves APP in the right spot for a β-secretase, but also that inhibiting the cellular enzyme decreases β-amyloid production by cultured cells. Bart De Strooper, a neuroscientist at Catholic University in Leuven, Belgium, finds these results convincing. “They have both cell biological evidence and evidence that the purified enzyme acts with the [right] specificity,” he says.

    But the Amgen enzyme is not the only β-secretase candidate. Elan Pharmaceuticals in South San Francisco has a patent on another one, which is apparently different from Amgen's. At the neuroscience meeting, researchers from pharmaceutical giant SmithKline Beecham plan to describe their own candidate. It's unclear, however, how the SmithKline Beecham enzyme relates to the others. When asked for comment, a company spokesperson would say only that “it's early days” in proving whether they have the right protein.

    As for γ-secretase, the second enzyme needed to release β-amyloid from APP, Harvard Medical School neurobiologist Dennis Selkoe and his colleagues suggested in the 8 April Nature that presenilin 1, one of two related proteins implicated in some inherited forms of Alzheimer's, may in fact be that enzyme. His latest data, which he will present at the neuroscience meeting, suggest that presenilin 2 may be a γ-secretase as well.

    Although other researchers agree that presenilins somehow influence β-amyloid production—its formation is altered when the presenilins are mutated—they think that the proteins play a less direct role. The proteins may, for example, help transport either APP or the secretases to the cell site where APP cleavage occurs. Either way, the presenilins could be potential targets for Alzheimer's drugs, says Selkoe.

    Even without knowing the actual identities of any of these enzymes, drug companies have been developing compounds that block their activity. Bristol-Myers Squibb plans to start clinical trials next year on a drug that interferes with γ-secretase activity, though it's not clear if this drug blocks the enzyme itself. Molecular biologist Barbara Cordell says her biotech company, Scios Inc. in Sunnyvale, California, has “both β- and γ-secretase inhibitors and compounds that inhibit [amyloid] by a mechanism we don't understand.” Scios has formed partnerships with two large pharmaceutical companies that hope to test some of these drugs in people.

    In addition, now that researchers have actual secretase enzymes in hand, they can look for more specific and powerful inhibitors. BACE, for example, is similar to the protease of HIV, the AIDS virus, and many compounds have already been developed to inhibit that enzyme.

    Alzheimer's researchers hope such compounds will not just prevent new plaques from forming but will also help the brain rid itself of those already present. But whether that can be done without unacceptable side effects remains to be seen. And there's still the big question of whether these drugs will actually make a difference for patients.

    Even so, such inhibitors could “provide an excellent opportunity to [affect] Alzheimer's disease in a profound and important way,” says Steven Younkin, a neuroscientist at the Mayo Clinic in Jacksonville, Florida. “If we don't isolate the secretases and develop inhibitors, it's totally irresponsible.”

  2. SCIENCE POLICY

    Science Supporter John Porter to Leave Congress

    1. Eliot Marshall

    One of the strongest congressional supporters of biomedical research, Representative John Porter (R-IL), announced last week that he will not run for reelection next year. He is the third strong voice for biomedicine who will soon leave a high-profile position.

    Porter, chair of the House appropriations subcommittee that drafts the annual funding bill for the National Institutes of Health (NIH), made the surprise announcement on 12 October. After 21 years on Capitol Hill, Porter told reporters, he wants to pursue “other opportunities and challenges.” He's one of a handful of Capitol Hill leaders who have worked to put the NIH budget on a path toward doubling between 1999 and 2003. Porter played a pivotal role in 1995, for example, when a draconian plan drawn up by the new Republican-led budget committee proposed a 5% cut in NIH funding for each of the next 5 years. Porter ushered a delegation of researchers and biotech executives into the office of then-Speaker of the House Newt Gingrich (R-GA) to make a plea for sparing biomedical research. Afterward, NIH got a 5.7% increase, and Gingrich became a research champion, too.

    Since then, Porter has spoken out several times about his frustrations in dealing with an increasingly fractious federal budget process. Porter's press officer, David Kohn, says his own view is that the “tenor and atmosphere” of congressional debate has become more acrimonious and that his boss seemed to grow tired of the “continual battles with the right wing of his party” over gun control, abortion, and the environment. Kohn adds, however, that new rules adopted by the Republican leadership in 1995 would in any case require Porter to step down as chair of the Labor, Health and Human Services, and Education Subcommittee after 2000, and that “it was the right moment for a change.” There's no “hidden motivation,” Kohn says: Porter really does want to spend more time with his children and grandchildren.

    Porter's decision to step out of national politics comes on the heels of similar actions by two other key players in biomedical politics. NIH director Harold Varmus revealed last week that he will resign in December to become president of the Memorial Sloan-Kettering Cancer Center in New York City (Science, 15 October 1999, p. 382). And Senator Connie Mack (R-FL)—another advocate of doubling NIH's budget by 2003 and a member of the Senate Appropriations Committee—announced in March that he will not run for reelection in 2000.

    It might not be worrisome if just one of these figures were leaving, says Michael Stephens, lobbyist for the Federation of American Societies for Experimental Biology. But to have all three depart at the same time, he says, “could create a real problem” by depleting the ranks of officials who care about biomedical research.

  3. CANADIAN UNIVERSITIES

    Massive Hiring Plan Aimed at 'Brain Gain'

    1. Wayne Kondro

    OTTAWA—Canadian universities will soon be turned loose on a massive shopping spree for scientific talent. Prime Minister Jean Chrétien last week unveiled a US$205 million program to create 2000 new research chairs, calling it a “plan for brain gain” aimed at reversing a flow of talent to the United States. University officials applaud the initiative, even if it derives more from a desire to outflank political foes than to strengthen academic research.

    The issue of “brain drain” is a political hot potato in Canada. Business leaders have lobbied hard for tax relief, saying that high taxes have driven Canadian high-tech talent across the 49th parallel. Chrétien has resisted that argument, declaring just last month that such flight is “a myth.” Indeed, demographers say that Canada actually enjoys a favorable intellectual trade balance, and that the outflow to the United States in particular has shrunk by one-third since the 1950s. But last week, Chrétien appeared to acknowledge the existence of a brain drain without endorsing the business community's solution. Rather than lower taxes, he reasoned, why not give universities the wherewithal to attract the necessary talent to compete in a global market? “Our goal is for Canada to be known around the world as the place to be,” Chrétien told Parliament. “That's particularly [true] at a time when U.S. universities benefit from both permanent endowments and the generosity of private foundations out of all proportion to those of our universities.”

    The new investment—400 new research chairs in each of the next 3 years and an additional 800 “as soon as possible thereafter”—couldn't have come at a more critical moment for universities, science administrators say. “It's like having the capacity to build a hockey team with several [Wayne] Gretzkys on it,” says Social Sciences and Humanities Research Council president Marc Renaud. “It gives [universities] the feeling that they can grow and compete with the Americans.” Medical Research Council president Henry Friesen called it “a stunning announcement in positioning Canada's economy to compete on a world stage.”

    Each research chair will be awarded for 5 to 7 years and will be renewable. The allocation will be based on an institution's success in obtaining competitive research grants. To prevent major research universities from gobbling up all the funds, however, small institutions will be guaranteed at least one chair. The biomedical and natural sciences are each projected to receive a 40% share, while the social sciences have been promised 20%.

    Two types of chairs will be created. The first, intended to liberate senior scientists from teaching duties, will provide roughly $140,000 a year for “star researchers with a proven track record.” Universities can spend the money to hire a new investigator, to top up an existing salary, or to absorb costs associated with replacing the star in the classroom. They may also funnel it into indirect costs such as lab operations and utilities. The second category, which provides about $70,000 for so-called “rising stars,” is intended to attract younger faculty to aging departments.

    Whether the new monies will actually stem the brain drain is not clear, however. In fact, some argue that the problem may not even exist. Only 1.5% of postsecondary graduates in 1995 went to the United States for some period of time, says Statistics Canada director of education statistics Scott Murray, and only one in eight of them held a Ph.D. Overall, Canada is a net beneficiary of university graduates, gaining 33,000 university-educated immigrants annually while losing 8500 to the States. Immigrants are also three times more likely to hold a master's, doctoral, or medical degree than the Canadian-born population. “All of us know some people who've left,” says Canadian Association of University Teachers executive director Jim Turk, noting the impact of budget cuts on university staffing. “But the plural of anecdote is not data. At most you can argue there's a trickle, primarily in the area of health care.”

    But there's no doubt that Canada has lost some exceptional talent over the years. For example, seven Canadians who moved south have subsequently collected Nobels. One of them, Stanford physicist Richard Taylor—an Alberta native who came to the United States in the 1950s for graduate school and never returned to work in Canada—takes issue with the notion that his career path is a “myth.” Taylor, who shared the 1990 Nobel prize for electron scattering experiments that documented the existence of quarks, says the factors underlying the exodus are complex. They include insufficient spending on research, a relative lack of major research facilities, an unwillingness by Canadian industry to invest in research, and a culture that disdains elitism and risk. “It's not greed that drives people to the United States, it's ambition,” he says. If the U.S.-based Canadian Nobelists had stayed in Canada, he says, “few of them would have won the prize.”

    Although he welcomes the additional chairs, Taylor says they will be insufficient without a change of attitude. “It's very hard for a government, especially a Canadian government, to be elitist,” he says. “But that is what you should be if you want to do a good job.”

  4. NUCLEAR SAFETY

    Secret of Soviet-Era Nuclear Blast Revealed

    1. Vladimir Pokrovsky*
    1. Vladimir Pokrovsky is a writer in Moscow. Additional reporting by Richard Stone.

    MOSCOW—For the past 3 decades, rumors have circulated here that in the early 1970s an accident at the Kurchatov Institute of Atomic Energy, in a residential suburb of Moscow, released a cloud of gas that drifted over the city, exposing the population to potentially harmful radiation. Late last month at a nuclear safety conference in France, a senior Kurchatov researcher discussed these events in public for the first time: There were two blasts at the institute in the early 1970s, he said, and although two staff members were killed, as far as Kurchatov scientists could tell no radionuclides were released over the city. According to Stanford University historian David Holloway, author of the book Stalin and the Bomb, “Secrecy was such in the 1970s that it would have been covered up.”

    The Kurchatov Institute was at the heart of the Soviet Union's atomic weapons program in the 1940s, but it moved over to civilian research in the 1950s when weapons research was transferred to the secret nuclear cities in eastern Russia. Today the Kurchatov, one of Russia's State Research Centers, is home to seven research reactors. Dmitry Parfanovich, a leading researcher at the Kurchatov, told the International Conference on Nuclear Criticality Safety in Versailles that the most serious blast occurred on 26 May 1971. At the time, Parfanovich was working in the research area close to one of the institute's critical nuclear assemblies—a basic feature of a nuclear reactor. “It all happened because the structure of the critical assembly was very fragile,” Parfanovich told Science.

    At about 4:00 p.m., experiments at this reactor had been completed and researchers were in the process of shutting it down. This involved draining the assembly of water, which was used as a moderator. Standard procedure required the water to be drained slowly and carefully, but on that day, Parfanovich recalls, the workers were in a hurry and they used a large emergency drain at the bottom of the tank. The rapid removal of the water moderator caused the structure to heat up, creating excess pressure that buckled the base of the reactor. As a result, uranium rods came out of their sockets and dropped out of the bottom of the assembly onto the floor below, where they created a critical mass. There was a flash of radiation; then the rods melted and changed their configuration, and the reaction stopped.

    Although the blast lasted only milliseconds, Parfanovich said a technician standing nearby received a dose of direct radiation amounting to 6000 roentgens. He died the next day of a heart attack. A researcher received more than 2000 roentgens and died 2 weeks later. Another two researchers received 800 to 900 roentgens and were saved through extensive medical treatment, but their health suffered as a result. Other staff were protected by a concrete shield and received insignificant doses.

    All personnel working in the building were evacuated, and routine radiation checks revealed that some had radioactive iodine on their clothing. Vladimir Asmolov, head of the Institute for the Control of the Safe Use of Nuclear Energy (a part of the Kurchatov center), recalls that some young researchers with contaminated clothes deliberately evaded the security checks and simply waited for the level of radioactivity to go down. They went drinking in an apple orchard on the grounds of the institute, which had been planted by its founder, Igor Kurchatov, “father” of the Soviet bomb. (Kurchatov liked to demonstrate the safety of his institute by eating apples from the trees.) Despite the rumors of radioactive clouds floating across downtown Moscow, Parfanovich said no emissions were traced outside the research area.

    The whole incident was kept secret, even from researchers in other branches of the institute. Most knew that an explosion had taken place but had no idea of its severity. Similarly sketchy details had leaked out about another, weaker blast that had happened about 3 months earlier. In that case, researchers were unaware there was anything wrong with the reactor until they noticed a blue light illuminating the ceiling. Parfanovich reported that two researchers received a dose of about 1000 roentgens, and one of them later had his feet amputated.

    Parfanovich told the Paris meeting that there were a total of five such blasts in research centers during the Soviet era. Asmolov thinks that the atomic research institutes and the nuclear power industry that grew out of them nevertheless had a good safety record, but that standards are now slipping. “Greater openness now about the past serves as a signal that they are trying to address safety issues seriously today,” says Holloway. However, safety concerns now keep all the research reactors at the Kurchatov idle, and even the director of the Institute, Evgeny Velikhov, favors moving them outside Moscow.

  5. VIROLOGY

    On the Track of Ebola's Hideout?

    1. Michael Hagmann

    One of the many unsolved riddles about the Ebola virus is where the deadly organism hides in between outbreaks in humans. Now, for the first time, virologists may have found traces of the virus's genetic material in small ground-dwelling mammals near areas of previous epidemics. Experts welcome the findings, announced last week, but point out that it is still too early to celebrate the discovery of the Ebola reservoir.

    Ebola, which first surfaced in 1976 in Congo and Sudan, causes vomiting, diarrhea, and copious internal and external bleeding. The virus kills up to 85% of its victims, and there is no known treatment. Recent epidemics have spurred an intensive search for an animal host that might support the virus, but so far to no avail. Although many species can be infected experimentally with Ebola, those captured in the wild have not had detectable levels of the virus. Some researchers speculate that the animal reservoir must be in a secluded area—deep in a rain forest, perhaps, or high in a tree canopy—whose animals have been hard to sample.

    Unconvinced that local animals are spared by Ebola, Marc Colyn of the University of Rennes in France looked at animals from a variety of habitats near previous outbreaks of the disease. His team screened 242 animals, including several species of rodents, shrews, and bats, that had been captured in the Central African Republic. The researchers detected no live virus or viral antigens, but when they used a more sensitive screen—the polymerase chain reaction—they managed to pull fragments of the Ebola genome from seven animals: one shrew and six rodents from three different species. Then when they examined spleen tissue slices from these animals under the electron microscope, they saw tubular structures that looked exactly like the inner core of Ebola virus particles. “These structures are most likely defective [virus] particles that don't contain the full-length Ebola genome,” says Vincent Deubel of the Pasteur Institute in Paris, who announced the group's findings at an institute retreat.

    Virologist Albert Osterhaus of the Erasmus University Hospital in Rotterdam notes that it is still unclear whether these particles, if confirmed to be Ebola, indicate that the animals could harbor the infectious virus. But the study suggests that “animals in a much more accessible habitat [than the deep rain forest] have definitely been in contact with Ebola,” says Osterhaus.

    Others say the study raises more questions than it answers. The researchers “have found traces of Ebola in about 3% of the most common species around. Yet when an epidemic occurs it can usually be traced back to a single [infection]. So why don't we see more [human or primate] outbreaks if so many animals are infected?” asks Clarence Peters of the Centers for Disease Control and Prevention in Atlanta. But Peters welcomes any contribution that may help pin down the elusive Ebola hideout. “People are continually testing various hypotheses. And they should be, because it's an extremely important issue,” he says.

  6. SCIENCE PUBLISHING

    PNAS to Join PubMed Central--On Condition

    1. Eliot Marshall

    PubMed Central, a free archive of research reports planned by the National Institutes of Health (NIH), reached a milestone last week when it signed up an important contributor: the Proceedings of the National Academy of Sciences (PNAS). PNAS's overseer—the governing council of the National Academy of Sciences in Washington, D.C.—voted on 13 October to donate full-text research articles to PubMed Central starting next year. The move follows a similar decision in September by the American Society of Cell Biology, which publishes Molecular Biology of the Cell. Both will allow PubMed Central to release their papers on the Internet after a brief postpublication delay. The academy council also added important conditions, one of which is that everything else in PubMed Central be peer reviewed, contrary to NIH's original plan to include unreviewed material.

    PNAS editor Nicholas Cozzarelli, a molecular biologist at the University of California, Berkeley, says “PNAS is proud to be one of the charter members of PubMed Central,” which he views as “a major advance for science.” Cozzarelli was an early supporter of the project, conceived by NIH director Harold Varmus and several colleagues earlier this year (Science, 3 September, p. 1466). Although some other journal editors are concerned about the possible loss of journal income, Cozzarelli says: “We have an obligation to take a leadership role for the good of science,” and “we do not foresee a significant economic impact on PNAS for the next few years.” In addition to releasing its reports 4 weeks after publication, Cozzarelli says, PNAS aims to give PubMed Central copies of “all of our research content back to 1990.”

    The academy council placed several restrictions on the agreement, however. It set a 1-year limit on the experiment, ruled out any commercial use of PNAS material, and insisted that authors not be charged fees for participation in PubMed Central. In addition, the academy said that participation “is contingent upon [PubMed Central] not including” unreviewed submissions or “reports that have been screened but not formally peer reviewed,” a phrase Varmus used earlier in describing how some of the material would be vetted for publication. The outlet for non-peer-reviewed reports, according to the academy, “must be completely separate.”

    David Lipman, director of NIH's National Center for Biotechnology Information and developer of the PubMed Central plan, sees this as no big problem: “We had always planned” to build a wall between the peer-reviewed and the non-peer-reviewed parts of the Web site, he says. He adds, “Virtually all of the potential participants that have contacted us have been interested in the peer-reviewed component.” He aims to come up with a name for the unreviewed section soon.

    As planning for PubMed Central continues, a private company has announced plans to launch a Web-based biomedical publication in an unspecified field that would use PubMed Central as its distribution network. Huntington Williams III, CEO of the Community of Science, a private outfit sponsored by Johns Hopkins University in Baltimore, says the proposed journal will conduct all of its editorial work, including peer review, through the Internet. Rather than making money on author charges or subscriptions, the company plans to sell Web-based advertising that will “frame” the contents on the Community of Science Web site, which will include reviewer access to papers under review. Final reports would be deposited on PubMed Central. The advertising will be “exquisitely” targeted to specific groups of readers, authors, and peer reviewers who use the company's services. Williams hopes to name an editor and editorial board soon.

  7. TRANSGENIC FOOD DEBATE

    The Lancet Scolded Over Pusztai Paper

    1. Martin Enserink

    For more than a year, a study claiming to show that transgenic potatoes may make rats sick was at the center of a furious debate, even though its findings had never been published. Now, part of the controversial study by protein biochemist Arpad Pusztai has finally made it into the pages of The Lancet—only to drag the prestigious journal down into the trenches of the British war over genetically modified food.

    Critics—including the Royal Society, which after a review of the raw data called the work “deeply flawed” in May—contend that The Lancet is exploiting the study's notoriety for its own publicity and that publication in a top journal lends the paper credibility it doesn't deserve. The U.K.'s Biotechnology and Biological Sciences Research Council called the journal “irresponsible.” But The Lancet editor Richard Horton says that giving Pusztai's data a public airing finally allows all parties to draw their own conclusions. Besides, he says, the paper survived even stricter scientific scrutiny than normal.

    The study made headlines around the world in August 1998, when Pusztai, a scientist at the Rowett Research Institute in Aberdeen, announced in a television interview that a diet of genetically modified (GM) potatoes could stunt rats' growth and impair their immune system. Just days later, the institute suspended Pusztai and banned him from speaking to the media, saying his claim lacked a scientific basis—a verdict later repeated in an internal review. But an international group of scientists, after examining data provided by Pusztai, demanded his exoneration (Science, 19 February, p. 1094). Their stance fueled the British media frenzy over transgenic crops and turned Pusztai, who is now retired, into a hero for the anti-GM movement. But what his study had or hadn't shown remained unclear.

    In their paper in the 16 October Lancet, Pusztai and co-author Stanley Ewen, a pathologist at Aberdeen University, don't mention stunted growth or suppressed immunity. Instead, they focus on abnormalities in the intestines of rats fed only potatoes equipped with the gene for GNA, a natural insecticide found in snowdrops. GNA and other lectins are thought to be potentially useful in helping crops fight off insects, but products engineered to express the gene haven't made it to the market yet. The researchers found that rats on the transgenic spud diet for 10 days had a thickening in the mucosal lining of their colon and their jejunum, a part of the small intestine, which didn't occur in animals fed nontransgenic potatoes or nontransgenic potatoes spiked with GNA at levels comparable to the transgenic ones. The findings suggest that the genetic modification of the potatoes—not GNA itself—is somehow responsible for the changes seen in the rats, the authors say. “Perhaps by introducing a gene you will activate or silence other genes in the plant as well,” Pusztai explains.

    But in a commentary in the same issue, three scientists from the National Institute for Quality Control of Agricultural Products in Wageningen, the Netherlands, say the study has several flaws. For instance, the effects could have stemmed from nutritional differences between the potatoes that had nothing to do with genetic modification; with just six rats in each group, the sample size was very small; and the monotonous diet had made all the rats protein-starved—not a good basis to assess a substance's toxicity, they argue. As a result, the Dutch scientists say, the data don't warrant the paper's conclusion. Pusztai, however, points out that the diets were comparable in protein and energy content and that a sample size of six is perfectly normal in studies like this.

    Nevertheless, critics say the shortcomings should have caused the journal to reject the paper. John Pickett of the Institute of Arable Crops Research in Rothamsted, one of the experts asked by The Lancet to assess the paper, last week cast off peer reviewers' traditional cloak of secrecy and publicly denounced the journal for ignoring his advice. “If this work had been part of a student's study, then the student would have failed whatever examination he was contributing the work for,” Pickett railed in a BBC interview.

    Horton responds that the journal put the paper through an unusually rigorous review, asking six instead of the usual three experts to examine it. Of those, only Pickett squarely opposed publication, he says; four others raised criticisms that Pusztai and Ewen addressed, while a fifth deemed the study flawed but favored publication to avoid suspicions of a conspiracy against Pusztai and to give colleagues a chance to see the data for themselves. “When we had five out of six reviewers in favor of publication … we felt we had very strong grounds to go ahead and publish,” says Horton, who also justified his decision in a commentary. Horton denies that The Lancet sought to get mileage out of the media hype, insisting that he would have printed the paper even if it hadn't been mired in controversy. But Marcia Angell, editor-in-chief of The New England Journal of Medicine, a competing journal, finds that hard to believe. “When was the last time [The Lancet] published a rat study that was uninterpretable?” she asks. “This really was dropping the bar.”

    Horton says he sees nothing wrong with publishing a provocative paper: Arguments over a scientific study are “perfectly normal.” “The problem is we are disagreeing about interpretation in this incredible crucible of public debate,” he says. “I think everybody needs to cool it.”

  8. SEISMOLOGY

    Did One California Jolt Bring on Another?

    1. Richard A. Kerr

    No crustal fault is an island, seismologists are learning. Last weekend's Hector Mine earthquake, which struck the desert 160 kilometers northeast of Los Angeles, seems to support the idea that faults feel what happens to their neighbors. The magnitude 7.1 temblor—which did minimal damage because of its remote location—appears to have been triggered by the magnitude 7.3 Landers quake of 1992, which struck 160 kilometers to the east of Los Angeles. “There's clearly a relation” between the Landers and Hector Mine quakes, says seismologist Lucile Jones of the U.S. Geological Survey (USGS) in Pasadena, California, “but we clearly do not understand that relation. There are going to be a lot of hypotheses.”

    Geologists have speculated that faults can reach out and touch one another because earthquakes redistribute stress. When a fault ruptures, it reduces stress in broad zones to either side; their extent can be calculated from the way the fault broke. Great earthquakes like the 1906 San Francisco quake and the 1857 “Big One” in southern California reduced the stress over great swaths along hundreds of kilometers of the San Andreas fault, damping seismic activity in those areas for decades (Science, 16 February 1996, p. 910).

    But stress can actually increase beyond either end of a ruptured fault. The Landers rupture produced prominent lobes of heightened stress to the north, across the Mojave Desert, and to the south, across the San Andreas near Palm Springs and Riverside. One southward lobe, as calculated by geophysicist Ross Stein of the USGS in Menlo Park, California, and his colleagues, seemed to trigger the magnitude 6.2 Big Bear quake 3 hours after Landers struck, 35 kilometers away.

    Seven years later, the other shoe seems to have dropped. Early last Saturday morning, 40 kilometers of a previously unnamed fault, now dubbed the Lavic Lake fault, broke across the Twentynine Palms Marine Corps base. The quake's epicenter and much of the rupture lie in what Stein and his colleagues calculate was one of the two northern lobes where the Landers quake intensified the stress. The match between the quake and the area of heightened stress buttresses the argument that “if you jack up the stress on a fault, you get a higher rate of earthquakes, big ones and smaller ones,” says Stein.

    Most of Stein's colleagues agree that faults do keep in touch with each other, speaking the language of stress. “There's clearly some relationship between the two earthquakes,” says David Wald of the USGS in Pasadena. “There's no doubt stress triggering is happening.”

    But Wald adds that “the physics behind it is not clear.” For one thing, he wonders why the small nudge from Landers would set off such a long-quiescent fault. For another, stress changes depend to some extent on characteristics of the fault. Some faults, like the San Andreas, are thought to slip with essentially no friction, perhaps because of a claylike lubricating layer where the rock faces meet. Robert Simpson of the USGS calculated that if the Lavic Lake fault were also frictionless, the Landers rupture would not have heightened stress across the Hector Mine rupture. Stein argues that the Lavic Lake fault should have higher friction because it slips only infrequently and therefore hasn't developed a lubricating layer.

    Whatever the physics, the Hector Mine quake and the Turkey quake of last August, which might also have been triggered by an earlier quake (Science, 27 August, p. 1334), are persuading seismologists that an earthquake may heighten seismic threats elsewhere. For example, Stein and USGS colleague Tom Parsons estimate that the Hector Mine temblor has increased the chance of a big one on the San Andreas in the next 30 years from 41% to 44%. To gauge the seismic hazard of a fault, Stein thinks, all the chatter among its neighbors must be understood.

  9. ARCHAEOLOGY

    New Questions About Ancient American Site

    1. Heather Pringle*
    1. Heather Pringle writes from Vancouver.

    Brad Pitt and his paramilitary protégés in the celluloid tale Fight Club aren't the only ones putting up their dukes and taking a swing this week. With a belligerence rarely seen in scientific spats, American researcher Stuart Fiedel has landed the first punch in a new battle over the authenticity of what may be the oldest archaeological site in the Americas: southern Chile's Monte Verde.

    In the November/December issue of the popular magazine Scientific American Discovering Archaeology (discoveringarchaeology.com), Fiedel contends that the final report on Monte Verde is riddled with errors and omissions that make evaluation of the evidence all but impossible. The original authors counter that the errors are simply ones of bookkeeping, but it seems clear that clouds have gathered over Monte Verde once again. “The site is in limbo,” says archaeologist Daniel Sandweiss of the University of Maine in Orono.

    For decades, archaeologists have believed that the first inhabitants of the Americas were the spear-wielding, big game-stalking Clovis hunters, who migrated from northern Asia to Alaska along the Beringian land bridge and then trekked south between retreating ice sheets 13,500 years ago. But after 8 years of excavations in a peat bog 560 kilometers southeast of Santiago, a team led by archaeologist Tom Dillehay of the University of Kentucky in Lexington concluded that early humans had lived at Monte Verde 1000 years before the first Clovis site, building pole-frame houses and dining on mastodon, wild potatoes, and medicinal herbs.

    There are other hints of early, non-Clovis cultures, but Monte Verde's direct challenge to the Clovis-first theory fueled intense debate. Then in 1997 a blue-ribbon panel of archaeologists visited the site and agreed in a public announcement that it was truly pre-Clovis (Science, 28 February 1997, p. 1256). But there were always a few skeptics, and now they are mounting a new assault.

    Fiedel, a consulting archaeologist from Alexandria, Virginia, and the author of a primer on New World prehistory, wanted to know exactly where three hafted stone scrapers were found, as well as what the precise association is between these scrapers and the radiocarbon-dated materials that support a pre-Clovis date. But when he pored over the Monte Verde final report, he found dispersed and inconsistent descriptions that left him uncertain even about how many such scrapers were found. He says he was “peeved,” and he began looking for similar data on the site's projectile points, cores, and other bifaces (stone tools flaked on two sides). He couldn't find it. In his article—which he says he published in a non-peer-reviewed journal so that it would be available quickly—he details 19 pages of errors, ranging from minor slips, such as individual artifacts bearing three different catalogue numbers, to more worrisome problems such as maps drawn to the wrong scales. “When I finally got an opportunity to look at the whole thing,” says Fiedel, “I discovered that the report's just full of holes.”

    Dillehay readily concedes that errors crept into his team's massive 1300-page report, which won a 1998 Society for American Archaeology award “for the extreme care given to the site's excavation, analysis, and publication.” But he says that some 85% of the glitches result from changes made in cataloguing when his team expanded test pits into large block excavations and later entered data in new computer programs. Those are common problems in large, multi-year excavations, agrees Jon Driver, an archaeologist at Simon Fraser University in Vancouver, British Columbia.

    Dillehay suggests that the report is being subjected to impossibly high standards because Fiedel and other critics simply don't want to accept evidence contrary to the Clovis-first paradigm. “A colleague told me a couple of years ago that some card-toting member of the Clovis police would be stepping forward again with another blast,” says Dillehay. “Well, here it is.”

    Part of this fight is a clash of outlooks. Fiedel zeroes in on stone-cutting tools and projectile points, which he and others consider of prime importance because they are undoubtedly made by humans. “Dillehay should make sure that everybody knows where those bifaces come from,” says Driver. “That's what's going to convince many North American archaeologists.”

    But Dillehay's team built its case mainly on simpler stone flakes. The researchers used wear patterns and the preferential use of certain stone types to argue that the flakes were human handiwork, and they determined their age from nearby radiocarbon-dated materials. Ruth Gruhn, a University of Alberta archaeologist who has worked extensively in early South American sites, thinks that's a reasonable approach. “North American archaeologists have a very strong bias towards bifacially flaked projectile points because North America is just saturated with projectile points,” she says. But in South America, Paleo-Indian peoples preferred simple unifacial tools, she says. She thinks that there's “no question” that the Monte Verde flakes were created by pre-Clovis humans.

    All the same, other researchers are dismayed both by the substance and the sheer quantity of errors in the report. “To find that many mistakes and confusions in the final report for such a significant site is certainly very disappointing,” says Sandweiss. But the question remains: Are those errors fatal? Dillehay's team now needs to clear up the inconsistencies, says David Meltzer, an archaeologist at Southern Methodist University in Dallas and a member of the panel that visited Monte Verde. “Then once all [the minor glitches are] eliminated,” he says, “it will be worth taking a look and saying, ‘Are there legitimate issues here?’”

  10. SCIENCE EDUCATION

    Scientists Strike Back Against Creationism

    1. Bernice Wuethrich*
    1. Bernice Wuethrich is a science writer in Washington, D.C.

    New Mexico's school system took an evolutionary leap earlier this month when the State Board of Education voted to ban a creationist credo that had influenced the curriculum for 3 years. The counterpunch was largely the result of a grassroots campaign engineered by a group of scientists who are now moving to raise the scientific sophistication of teachers and students throughout the state. “Without scientists, the trend to reverse [the teaching of creationism] would never have started,” says Kim Johnson, a physicist at Quasar International Inc. in Albuquerque and president of New Mexico's Coalition for Excellence in Science Education (CESE).

    New Mexico's success is encouraging researchers in Kansas who are rallying opposition to new statewide education standards that eliminate the teaching of evolution and anything suggesting our planet has been around for billions of years (Science, 20 August, p. 1186). Because creationists believe that God created the universe, the Earth, and life in 6 days some 10,000 years ago, any science that contradicts that view—including the big bang theory, the geologic timescale, and the validity of radioactive decay as a measure of great age—is vulnerable, says Marshall Berman of Sandia National Laboratories in New Mexico. “We have to realize that this is an assault on all science,” he says.

    The assault took most New Mexicans by surprise in 1996, when, with just 2 hours of public notice, the New Mexico board voted to purge many aspects of biology, geology, and physics from the state's education standards, which guide the development of teacher lesson plans and test materials. In a matter of days, Johnson and Berman began rallying colleagues to action. “We zeroed in on the solution—to get somebody knowledgeable on the board,” Johnson says. Berman won a seat on the Board of Education last year, replacing a creationist, then helped orchestrate a public education campaign. “We had all these Ph.D. scientists walking around neighborhoods, pounding on doors,” recalls Berman, a nuclear physicist. “It was a miracle—pardon the expression—to watch these personal transformations take place.” Two other staunch pro-science candidates ousted a creationist and a supporter of creationist policy in local elections last November, setting the stage for change.

    Stealing a page from the creationist playbook, the grassroots campaign built a broad base of support that includes teachers, parents, and many religious leaders anxious to defend the separation of church and state in public education. Supporters also formed CESE last year to raise the state's level of science teaching. One program they initiated is “Hotspots,” which pays for teachers to spend 2 weeks in the summer out in the field with geologists. Meanwhile, a state-appointed panel drafted science performance standards—what students need to know at each grade—that reincorporate evolution and other science fundamentals. “Just about every Methodist minister in New Mexico signed a letter supporting these changes,” Berman says. The two-pronged effort paid big dividends: On 8 October, the Board of Education voted 14 to 1 to adopt the new standards and ban creationism from the curriculum. In a ringing endorsement, Catholic Archbishop Michael J. Sheehan published a letter in two New Mexico newspapers on 15 October strongly supporting the revisions.

    Kansans opposed to their school board's new anti-evolution guidelines are hoping for a similar victory. They have put together a coalition, Kansas Citizens for Science—initiated largely by scientists drawn from state universities and colleges—that is planning an education campaign to reverse the decision. Other groups are considering a legal challenge to the standards based on the separation of church and state.

    In the meantime, Kansans are getting supporting fire from a surprising weapon: copyright law. The National Research Council, the American Association for the Advancement of Science (publisher of Science), and the National Science Teachers Association have all denied the Kansas board permission to use portions of their respective science standards publications in the state's new guidelines. That means the guidelines cannot be implemented until after they are revised to remove the copyrighted materials—a costly and time-consuming endeavor—which should leave last year's sound standards in place for the rest of the fall term, predicts biologist Steve Case, a member of the new coalition. He and others welcome the delay. “It's tough to weed something out once it's been implemented—very much like cancer,” Case says.

    Creationist forces are not resting on their gains, however. The Web site of the Christian Coalition of New Mexico warns its constituents that Berman will continue to “take a strong pro-evolution stand” and urges them to try to unseat him in the 2002 elections. But Berman and his colleagues have their sights set on the road ahead. “The board will be regularly reviewing and improving its standards, because science is a growing, changing field,” he says.

  11. DISEASE RESEARCH

    Prions: A Lone Killer or a Vital Accomplice?

    1. Michael Balter

    At the largest-ever meeting of prion disease researchers last month, there was much new knowledge on display, but still no consensus on whether this mysterious disease agent is acting alone.

    TÜBINGEN, GERMANY—Is it an epidemic, or isn't it? European health officials still are not sure. For more than 3 years, they have been anxiously watching the slow accumulation of cases of variant Creutzfeldt-Jakob disease (vCJD), a fatal neurodegenerative disorder linked to eating beef from cattle infected with bovine spongiform encephalopathy (BSE), or “mad cow disease.” As of late September, the toll had risen to 46 cases in the United Kingdom and one in France. So far, earlier concerns that thousands might eventually die have not been realized. But fears of this nightmare scenario continue to generate headlines. As Science went to press this week, the long-running government inquiry into the disastrous BSE epidemic in British cattle, chaired by Sir Nicholas Phillips, had just resumed hearings in London, and French officials were adamantly defying a European Union decision to lift an embargo on imports of British beef.

    One thing, however, is certain: Britain's BSE crisis has fueled an explosion of research into vCJD and similar fatal brain diseases linked to prions, aberrant forms of a normal cellular protein called PrP, which is particularly plentiful in nerve cells. Last month, more than 400 scientists met in Tübingen to review the latest advances in prion disease research—the largest such gathering ever. The impressive turnout was proof that the field has come a long way since the early 1980s, when neurologist Stanley Prusiner of the University of California, San Francisco (UCSF), began championing the then-heretical hypothesis that the prion acts essentially alone to infect tissues and reproduce itself—without the benefit of an RNA- or DNA-based genome, thought to be fundamental to all life-forms. Since those days, Prusiner has won a lot of converts to this idea. He also won the 1997 Nobel Prize in Physiology or Medicine, although in the wake of controversy over the award the Nobel committee hastened to clarify that it should not be construed as a blanket endorsement of his ideas (Science, 10 October 1997, p. 214).

    Indeed, while the Tübingen meeting reflected a growing scientific consensus that prions are key players in causing these diseases, a few presentations—as well as the buzz in the corridors—underscored the continuing skepticism of some researchers that the “protein-only hypothesis,” as it is often called, can explain all the data. These researchers continue to suspect that some sort of virus or other RNA- or DNA-based organism is also involved (Science, 12 July 1996, p. 184). For many participants, these doubts were reinforced when researchers in Edinburgh reported results that might contradict key experiments performed in Prusiner's lab—experiments that many scientists had thought conclusively showed that prion protein alone could cause disease.

    Although other researchers, including Prusiner's collaborators, shrug off the Edinburgh group's findings, the controversy shows that the issue is far from settled. “I don't want to believe lock, stock, and barrel in the protein hypothesis until it satisfactorily explains prion biology,” comments virologist Jeffrey Almond, a former member of the United Kingdom's Spongiform Encephalopathy Advisory Committee and vice president for research and development in the French operations of drug firm Pasteur Mérieux Connaught.

    Nevertheless, as the Tübingen meeting also showed, even skeptics do not dispute that the prion is intimately involved one way or another in these diseases, and both skeptics and supporters are busy gleaning new clues to how prions behave. Talks at the meeting revealed important new insights into how prions make their way through the body to the brain and provided strong evidence that immune system cells are conduits for the spread of infection—a disquieting conclusion that may nevertheless hold the key to therapeutic strategies.

    Spontaneous generation?

    More than a dozen types of prion diseases, also known as spongiform encephalopathies for the spongy appearance of diseased brains, occur in humans and animals. Some are infectious, such as vCJD in humans, BSE in cattle, and scrapie in sheep. Others appear to be inherited, including familial CJD and Gerstmann-Sträussler-Scheinker syndrome (GSS), which are linked to mutations in the gene that codes for PrP. Still another form, sporadic CJD, appears to develop at random and is normally neither transmitted nor inherited.

    In 1994, Prusiner's group, including molecular biologist Karen Hsiao Ashe, published findings that many researchers thought clinched the protein-only hypothesis. The team created transgenic mice with multiple copies of a mutant PrP gene, referred to as P102L in humans and P101L in its mouse version, which had earlier been shown to cause GSS in humans. The mutant mice, which expressed high levels of the mutant PrP, spontaneously developed GSS-like neurological symptoms. Moreover, when certain other healthy animals, including some mice and hamsters, were inoculated with brain extracts from the sick animals, a significant number developed a similar disease. The finding seemed to confirm Prusiner's contention that the mutant protein—with no help from any virus—causes the de novo creation of infectious prions. Indeed, research by Prusiner and other scientists has shown that prions exert a bad influence on normal PrP by binding to it and converting it to the abnormal form. Exactly how the abnormal protein causes disease is not well understood, but the clumping of prions together in the brain appears to be responsible for some of the pathological effects.

    While many researchers found Prusiner's experiments convincing, others have remained perplexed by some oddities in the findings. For example, although normal wild-type hamsters succumbed to infection by brain extracts from the sick transgenic mice, normal wild-type mice did not. Only other transgenic mice, which expressed low rather than high amounts of mutant PrP and did not spontaneously get sick, were vulnerable to the inoculation. Some researchers suggested that the hamsters might have become infected with hamster scrapie prions that contaminated the laboratory rather than with the mouse brain extracts. Other scientists suspected that Prusiner's transgenic mice—some of which harbored as many as 60 copies of the mutant PrP gene and expressed eight times the normal level of the protein—might have become ill because of damage to their genomes from inserting so many foreign genes, or simply because they were producing too much PrP, and not because the mutation was producing prions.

    These skeptics now see some vindication in new work presented at the Tübingen meeting by molecular biologist Jean Manson of the Institute of Animal Health's Neuropathogenesis Unit (NPU) in Edinburgh. Manson repeated some of Prusiner's experiments, but this time using a new technique, called double replacement gene targeting, that she and her Edinburgh colleagues developed. Manson removed the normal mouse PrP gene, replaced it with a marker gene, and then replaced the marker gene with a PrP gene containing the human GSS mutation. These transgenic mice harbor only two copies—one on each duplicate chromosome—of the mutant P101L gene, which are still in their correct locations in the genome. Unlike Prusiner's transgenic animals, which expressed high PrP levels, the Edinburgh mice did not spontaneously develop disease during their lifetimes, about 900 days. When, however, the P101L mice were inoculated with brain extracts from human GSS patients, they did become ill after an average of about 280 days.

    Manson interprets these results, which are currently in press at EMBO Journal, as meaning that the GSS mutation itself does not create prions, but rather makes the mice much more susceptible to an external infectious agent, whether it is a prion alone, a prion with an accomplice, or something entirely different. That susceptibility may result from the production of PrP proteins that are more or less easily transformed into abnormal forms by external prions. “Jean Manson's data appear to invalidate those of [Prusiner's group] and demonstrate that the GSS mutation does not by itself produce disease or infectivity,” says viral immunologist Bruce Chesebro of the Rocky Mountain Laboratories in Hamilton, Montana. “She has replaced the normal PrP gene at precisely the correct position in the DNA, so there is no possible effect from integration at an unusual site or from integration of multiple copies.”

    Prusiner was not available for comment, but a longtime collaborator at UCSF, biophysicist Fred Cohen, argues that Manson's results do not contradict Prusiner's. The new work is “entirely consistent” with the original experiments, Cohen says, in which “animals that have a high copy number of the P101L transgene get a spontaneous disease, while those that express a low copy number do not.” Cohen argues that since humans with the mutant prion gene develop GSS, whereas the Edinburgh group's transgenic mice do not, “the Manson experiment is not as clean a model as they suggest.”

    Indeed, many researchers argue that Manson's results do not disprove the protein-only hypothesis, because an external source of prions could still be acting alone to cause disease in humans or animals made susceptible by the mutation. And supporters of Prusiner's ideas are not prepared to see them overturned by the Edinburgh findings. “The one conclusion you may not draw,” states molecular biologist Charles Weissman at St. Mary's Hospital in London, “is that [Manson's experiment] has disproved Prusiner's contention that this mutation can cause a prion disease.” Neurologist John Collinge, also at St. Mary's, agrees, saying that while the Edinburgh work is “intriguing,” it should not be the takeoff point to launch a search for a virus or other microbe. “The evidence against that is pretty overwhelming.”

    On the other hand, genetic susceptibility to infectious prion disease is already well established in sheep, which vary widely in their vulnerability to scrapie depending on the makeup of their PrP genes. “These mouse results indicate that familial prion diseases in humans might be analogous to scrapie infection, where differences in susceptibility have been linked to genetic variations,” says Glasgow University veterinary pathologist Alun Williams. And just such a pattern may already be showing up in vCJD: All of the victims so far are homozygous for methionine at codon 129 of the PrP gene—that is, the PrP genes on both of their duplicate chromosomes encode methionine at this position—whereas people in the population at large can also be homozygous for valine or heterozygous for methionine and valine.

    Thus Manson's work is sure to keep the debate—and the doubts of skeptics—alive for some time to come. “These experiments go to the heart of the prion hypothesis,” says Almond. “Is this phenomenon of spontaneous production of infectivity real? We are now having doubts because of [Manson's] results.”

    Follow that prion!

    While the Edinburgh results have brought the debate over the protein-only hypothesis back to center stage, the prion is still the target of most studies. The failure to find a virus or other organism, combined with convincing evidence that PrP must be present for disease to occur—especially experimental findings showing that mice whose PrP genes have been “knocked out” cannot be infected—has led to general agreement that prions are, at the very least, surrogate markers for disease. Indeed, the focus on prions has now led to the discovery that there might be other prionlike proteins as well: In the 1 October issue of the Journal of Molecular Biology, a team led by molecular biologists Richard Moore of UCSF and Inyoul Lee of the University of Washington in Seattle reports finding a second prionlike protein in mice, which is encoded by a gene they call doppel and also appears to cause serious brain damage in the animals. (A Perspective in next week's issue will discuss this finding.)

    Much recent research has focused on understanding how prions propagate through the body and eventually attack the nervous system. A particularly elegant study of this nefarious process was presented in Tübingen by veterinary pathologist Lucien van Keulen of the Institute for Animal Science and Health in Lelystad, the Netherlands. Van Keulen and his colleagues used a biochemical technique called immunohistochemistry—which can detect PrP in tissues—to trace the course of scrapie infection in sheep. In a dramatic series of pathology slides, van Keulen showed that when an infected sheep is about 5 months old, prions begin to accumulate in the lymphoid tissues of its immune system as well as in nerve fibers in its small intestine. By 10 months, infection spreads to nerves serving the visceral organs, and after about 17 months the spinal cord comes under attack. Finally, the prions reach the medulla oblongata, the part of the brain that is just above the spinal cord and is responsible for involuntary functions, and by 26 months the brain is fully infected and the animal enters the terminal stages of the disease.

    Earlier studies had suggested that cells of the immune system might drive this process by passing prions to the nervous system. The findings led to proposals that the diseases might be detected early, for example, in biopsies of tonsils (Science, 22 January 1999, p. 469). Studies also held out promise of future therapies, since the immune system would likely be a more accessible target for antiprion drugs than nerve cells would. But they also suggested the worrisome possibility that prion diseases might be spread through blood transfusions.

    In December 1997, for example, neuropathologist Adriano Aguzzi and immunologist Rolf Zinkernagel at the University of Zurich published findings in Nature that suggested a crucial role for B cells, which circulate in the blood and produce antibodies. The Zurich team found that mice carrying mutations that interfere with B cell development or function resist infection when they are inoculated with scrapie. The team also tested another type of immune cell previously suspected in the transmission of prions—follicular dendritic cells (FDCs), which work together with B cells to mount an immune response. FDCs were a less worrisome candidate because these cells normally remain in the spleen and lymph nodes and do not circulate in the blood. Yet these cells appeared to be cut out of the loop, because mice without functional FDCs could still be infected.

    Besides setting off alarm bells among public health officials, the Zurich findings led to speculations about just how the B cells might be spreading infection. They carry significant amounts of PrP in their membranes, and the lymphoid tissues, where they normally reside when not in the bloodstream, are heavily invested with nerve fibers that could pick up abnormal forms of the protein from them. Late last year, however, new results from the Zurich group revised that picture. The team found that immunodeficient mice that lack B lymphocytes—and therefore cannot be infected with scrapie—became susceptible again when they received donor B lymphocytes from PrP knockout mice; that is, even B cells that lacked PrP could apparently play a role in disease. Although Aguzzi and his co-workers argued that the cells might transport prions to the central nervous system via some mechanism not linked to PrP expression, these results left many prion researchers scratching their heads.

    However, a series of new experiments presented in Tübingen by Moira Bruce of the NPU in Edinburgh may help point to a solution of the riddle—one that implicates both B lymphocytes and FDCs in the spread of prions. Using a combination of gene knockout techniques and grafts of bone marrow—where the precursors of many immune cells develop—Bruce and her colleagues succeeded in creating two groups of chimeric mice with mismatches in the PrP status of their immune cells. One group had PrP-positive FDCs but PrP-negative B lymphocytes, whereas the other had PrP-negative FDCs but PrP-positive lymphocytes. When inoculated with scrapie, only the mice with PrP-positive FDCs could be infected. Bruce interprets these findings, which are in press at Nature Medicine, to mean that PrP-expressing FDCs are essential for the spread of prion infection after all.

    In discussions with Science, some scientists suggested that the discrepancy between Bruce's and Aguzzi's results could be due to differences in the strains of scrapie used in the two experiments. Whatever the case, many researchers now believe that both B lymphocytes and FDCs are essential players. Collinge, for example, suggests that B lymphocytes might be important because they foster maturation of FDCs. Weissman agrees this is the most reasonable scenario: “My interpretation of the data is that B cells are required for maturation of FDCs, but FDCs are responsible for making prions.”

    If this view is correct, it could allay concerns about blood safety, because the circulating B lymphocytes would not necessarily be direct carriers of infectivity. And researchers say that figuring out how prions replicate in the immune system during the early stages of disease might open the door to therapies that would block this process. Indeed, a talk at the meeting by Dominique Dormont of the French Atomic Energy Commission's neurovirology service in Fontenay-aux-Roses raised just that possibility. He presented findings that a number of drugs known to interact with immune system cells can increase the survival time of rodents experimentally infected with scrapie or BSE.

    “The idea of therapy when the infection is still in the periphery is not a pipedream,” says Chesebro. Almond adds: “You have to have hope in something like this, especially if some of the more dire forecasts for vCJD in the U.K. are real.”

    *Characterization and Diagnosis of Prion Diseases in Animals and Man, Tübingen, Germany, 23 to 25 September 1999.

  12. ECOLOGY

    A Surprising Tale of Life in the City

    1. Keith Kloor*
    1. Keith Kloor is a free-lance writer in New York City.

    As some ecologists shift their focus from wild habitats to buzzing metropolises, they are finding webs of life more intricate than anyone had suspected

    Swallowing an acre of desert every hour, the rapacious maw of Phoenix, Arizona, may not seem like much of an oasis—except, perhaps, to the species bent on pushing the sprawl ever outward. But there's more to this cityscape than golfers and sun worshippers. If you want to see the yellowleg sandpiper or two dozen other kinds of migratory birds, for instance, grab your binoculars and head to one of Phoenix's sewage treatment centers. Or if you're into cottontail rabbits or tenebrionid beetles, you'll find plenty of those in Phoenix, too.

    Not so long ago, cities held little interest for ecologists; they were mostly places to escape from to study real ecosystems. But in a landmark shift 2 years ago, the National Science Foundation's (NSF's) Long Term Ecological Research (LTER) program, which funds a network of sites in relatively pristine areas in the United States and Antarctica, added two urban LTER sites: Phoenix and Baltimore, Maryland. The deeper scientists dig into the ecology of these cities, the more life they are finding, according to a report on the Phoenix project released this month. “The simple notion that a city diminishes biodiversity is wrong,” says anthropologist Charles Redman of Arizona State University (ASU), co-director of the Phoenix LTER site. The findings have a handful of ecologists arguing that maybe—just maybe—cities aren't such a blight after all.

    That kind of heretical talk doesn't go down well with some scientists, who caution against making too much of the variety of life in the big city. “We need to be concerned about keeping what we have and not become entranced by long lists of species [in urban areas],” says Michael Bogan, a wildlife biologist with the U.S. Geological Survey (USGS) in Albuquerque, New Mexico, who contends that Phoenix's golf courses and subdivisions are no substitute for the swath of Sonoran desert they have supplanted. Bogan points to a USGS report released last month that cites urbanization as a major cause of environmental degradation and loss of biodiversity. It's true, says John Wiens, an ecologist at Colorado State University in Fort Collins, that cities “enhance the environment for some species but deteriorate it for others.” But he and others submit that the urban LTER sites serve an important purpose: to get to know your enemy. “We're not going to stop urbanization,” says ASU ecologist Nancy McIntyre. “What we can do is design it to ameliorate its effects.”

    The 20-year-old LTER program was an unlikely wellspring for an urban ecology initiative. “The idea was to first try and understand natural systems,” says Scott Collins, LTER program director at NSF. Sites were chosen in isolated regions: for example, tundra in Alaska, the Chihuahuan desert in New Mexico, and the McMurdo Dry Valleys in Antarctica. But it began to dawn on many ecologists, says Wayne Zipperer, a landscape ecologist with the U.S. Forest Service in Syracuse, New York, that “we know more about other habitats than our own.”

    Venturing into the cities was an inevitable evolution for the LTER program, Collins says. Besides carrying out standard measurements of nutrient levels in soil or cataloguing species, researchers at the two urban sites probe the nexus between society and the environment: multidisciplinary topics such as natural resource use and sustainable development. As part of the 6-year, $4.4 million Phoenix study, researchers have examined, for instance, the effects of initiatives to encourage xeric landscapes—yards with drought-resistant plants suited to the dry local conditions and sandy gravel instead of grass. After comparing the water usage of people with xeric yards to that of people with grassy yards, the researchers found, to their surprise, no measurable difference.

    It is the species inventory that's attracted the most attention, however. So far, researchers have documented more than 75 species of bees, 200 species of birds, and hundreds of insect species within metropolitan Phoenix. “It's surprising the amount of species we have seen,” says McIntyre, considering the rapid growth of Phoenix, which has 2.8 million people and counting. “You would think there wouldn't be enough room for all these critters.” True, some heavyweights are absent: Bighorn sheep and other animals that need room to roam aren't going to make it in Phoenix, McIntyre says. But birds, rabbits, and other small animals are doing fine. “It depends on what kind of biodiversity you want,” she says.

    The biodiversity in Phoenix, it turns out, is mostly imported—95% of plant species and one in four kinds of birds, for example, are nonnatives. “Cities have become staging grounds for exotic invasions,” says Julio Betancourt, a USGS paleoecologist in Tucson, Arizona, who points to starlings in Phoenix bullying Gila woodpeckers out of their nests in saguaro cacti. Although Betancourt disputes the notion that the species diversity in Phoenix is a sign of a healthy ecosystem, he applauds the attention being paid to cities. “To be doing this [urban LTER] is a revolution,” he says.

    Other districts hope to stage similar revolutions. Stephan Paulit, a landscape ecologist at Munich Technical University in Germany, says his group has a proposal before the European Commission to launch a similar effort in six European cities. And in Asia, a study modeled after the urban LTER program could soon get under way in Taipei and Singapore. While NSF has no plans for now to add more cities to its LTER network, says Collins, the agency has issued a request for proposals for three coastal sites that “will explicitly include humans and their impacts on ecosystems.”

    Collins sees the creeping urbanization of some of his colleagues as a good sign and suggests that ecological research in the cities can “facilitate policy decisions and urban planning.” It may take a while for the findings from the urban LTER sites to influence local politicians, but in the meantime they are at least prompting scientists to wonder what kinds of life-forms may be living in that garbage dump or down that manhole. The bottom line, says Wiens, is that “cities are not the kind of sterile wastelands that some people think.”

  13. AGING RESEARCH

    Do Mitochondrial Mutations Dim the Fire of Life?

    1. Elizabeth Pennisi

    A sensitive technique shows that mutations accumulate over time in a key genetic region of the cellular power plants called mitochondria

    Until about a decade ago, few researchers outside cell or evolutionary biology paid much attention to mitochondria, the tiny particles that generate most of the cell's energy. But now researchers have begun to see the more menacing side of these internal power plants. First, geneticists started tracing certain rare inherited disorders to mutations in the mitochondria's small circular genome. More recently, other researchers have speculated that mitochondria might contribute to aging, either by releasing tissue-damaging reactive oxygen molecules or by deteriorating and depriving the cell of the energy it needs to function.

    On page 774, a team led by Giuseppe Attardi, a human geneticist at the California Institute of Technology (Caltech) in Pasadena, reports some of the first hard evidence that mitochondria do deteriorate as people age. His team found that mutations in the 16,500-base mitochondrial genome accumulate with time and in a particularly important region: a 1000-base segment that controls the genome's replication. Other researchers had previously found low frequencies of mutations in mitochondrial DNA as people age, but given that cells have hundreds of mitochondria, each with multiple genome copies, skeptics argued that the changes were not extensive enough to alter cell function.

    By carefully screening out contaminating DNA and using a technique that could detect single base changes, however, the Attardi team found that the damage can be extensive. The work “really shows that aging does something to the mitochondrial genome,” says Manuel Graeber, a neuropathologist at the Max Planck Institute for Neurobiology in Martinsried, Germany.

    Michael McKinney, a neuroscientist at the Mayo Clinic in Jacksonville, Florida, agrees. He says that while he's not sure that mitochondrial changes bring about aging by themselves, “even those of us who aren't mitochondrial people but who are worried about [aging] are going to be thinking about this [result].” Still, McKinney and others point out that a key link in the connection between mitochondrial mutations and aging is still missing: The Caltech team has not yet demonstrated that these changes alter mitochondrial—or cell—survival or activity.

    For his current work, Attardi decided to study the main control region for replication of the mitochondrial genome, which “seemed the most [likely] to be involved in aging,” he says. The control region must recruit cellular proteins to replicate itself and the rest of the mitochondrial genome and thus represents a pivotal connection between the mitochondria and the cell. If the connection goes bad and the mitochondria don't replicate properly, they might decline in number, causing the cell to have less energy than it needs.

    The Caltech team began the project by sampling connective tissue cells, called fibroblasts, from 18 randomly picked healthy individuals ranging in age from less than a year to 101. The researchers also obtained two sets of stored cells, taken 9 to 19 years apart, from each of nine other individuals. To look for mutations in the DNA of the mitochondrial replication control region in the cells, molecular biologist Yuichi Michikawa first used DNA probes to pull out the region from each of the cell samples as seven separate segments.

    He then sorted comparable segments from the different cell samples on electrophoresis gels, using a technique that can separate DNA pieces with slight sequence differences—such as those resulting from single base changes—based on the different rates at which they migrate through the gel. Finally, Michikawa cloned the DNAs from the gel bands in bacteria and then sequenced them. The analysis was “very, very laborious,” notes Attardi. The labor paid off, however.

    For one segment studied, for example, mutations were not present in any of the clones from younger individuals, but they were in 5% to 50% of the clones from the older individuals, eight of whom had exactly the same mutation. And by analyzing the nine pairs of samples separated in time, the Caltech researchers found three people who had at least one of these mutations in the older cells but not in the younger cells. Thus, their data indicate that specific mutations in the mitochondrial replication control region accumulate with age in some people, sometimes in high numbers. “The extent of the mutations is among the highest ever reported,” says James Dykens, a biochemist with the biotech company Mitokor Inc. in San Diego, California.
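
    To make the clone counts described above concrete, the sketch below shows, in purely illustrative Python, how the fraction of sequenced clones carrying a base change relative to a reference segment could be tallied. The reference sequence, the clone sequences, and the function names are hypothetical placeholders chosen for illustration; they are not data or code from the Caltech study.

```python
# Illustrative only: a toy tally of clone-level mutation frequencies.
# The reference and clone sequences below are hypothetical placeholders,
# not data from the Attardi group's study.
from collections import Counter

REFERENCE = "ACGTACGTAC"  # stand-in for one control-region segment

def mutation_positions(clone, reference=REFERENCE):
    """Positions at which a cloned segment differs from the reference."""
    return [i for i, (a, b) in enumerate(zip(clone, reference)) if a != b]

def mutant_fraction(clones):
    """Fraction of clones carrying at least one base change."""
    mutated = sum(1 for c in clones if mutation_positions(c))
    return mutated / len(clones)

# Hypothetical clones sequenced from one individual's fibroblasts.
clones = ["ACGTACGTAC", "ACGTACGTAC", "ACTTACGTAC", "ACTTACGTAC"]

print(f"mutant clones: {mutant_fraction(clones):.0%}")  # here, 50%
print(Counter(p for c in clones for p in mutation_positions(c)))  # recurring sites
```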

    Still, researchers say that much more work will be needed to show that the mutations have anything to do with aging. For starters, notes Eric Schon, a molecular biologist at Columbia University in New York City, the study needs to be replicated on a larger scale and in other types of cells, such as muscle or brain, that suffer the most harmful aging effects. Furthermore, says George Martin, a gerontologist at the University of Washington in Seattle, “There's no evidence [the mutations] functionally impair the cell or change [the rate] of mitochondrial replication.”

    Attardi concedes that he has not tied these particular mutations to a decline in cell or mitochondrial function. But he points out that the fact that his team found so many in one region suggests that other areas of the mitochondrial genome might be affected as well. And those hypothetical mutations may lead to mitochondrial changes such as increased production or release of the damaging oxidative free radicals that have already been linked to aging. Alternatively, or perhaps in addition, such mutations could make mitochondria less efficient at generating ATP, a molecule that fuels many of the cell's biochemical reactions, an idea suggested in 1989 by Anthony Linnane, now a molecular biologist at Epworth Medical Center in Melbourne, Australia. Linnane's own work shows that the amount of complete and active mitochondrial DNA declines as the years pass.

    The new results now “give [that idea] ammunition,” notes Vilhelm Bohr, a molecular biologist at the National Institute on Aging in Baltimore, Maryland. And Graeber, too, can see how it all might fit together. “The mitochondria have been compared to the fire of life,” he explains. Perhaps “that fire is slowly extinguished as we grow old.”

  14. MOLECULAR BIOLOGY

    A New Blocker for the TGF-β Pathway

    1. Gretchen Vogel

    Two proteins, Ski and Sno, are plucked from obscurity and revealed as regulators of a key growth-controlling pathway

    For more than a decade, researchers have suspected that two proteins known as Ski and Sno play a role in the unrestrained cell growth that leads to cancer. Both can transform normal cells into cancerous ones in culture dishes, and both turn up in a variety of human tumor cell lines. But because no one was able to determine their role in the cell, they received little attention amid the throng of potential cancer-causing molecules. Now that's changing. Five separate papers—including one on page 771 of this issue—show that Ski and Sno can block one of the cell's major growth-controlling pathways, the transforming growth factor-β (TGF-β) signal. The finding puts the proteins at the center of one of the hottest areas of cell biology and sheds new light on this ubiquitous pathway.

    The TGF-β signal acts like a powerful molecular traffic light, ordering certain cells to slow down and stop dividing. But sometimes cells manage to speed through this checkpoint, triggering runaway growth and cancer. The new findings suggest that when Ski and Sno work properly, they may help cells resume normal growth after stopping for TGF-β's red light. But when the system gets out of whack, the molecules can shut down TGF-β entirely, and tumors may therefore form. It's “a plausible explanation” for the action of the two proteins, says cell biologist Carl-Henrik Heldin of the Ludwig Institute for Cancer Research in Uppsala, Sweden. “If you inhibit a growth inhibitor, you have more growth.”

    The finding also connects Ski and Sno to one of the most intensively studied molecules in cell biology. TGF-β is hot in part because it affects everything from inflammation to tissue repair to bone formation; a meeting on it this summer attracted more than 600 scientists. Surprisingly, this influential signaling pathway is relatively simple. The TGF-β molecule docks at a pair of receptors on the cell membrane, which then attach a phosphate molecule to one of two intracellular proteins, Smad2 or Smad3 (see diagram). The activated protein joins with another member of the Smad family, Smad4, and moves inside the nucleus, where the complex triggers the expression of a variety of genes, depending on the state of the cell.

    Given TGF-β's importance, researchers have been searching for molecules that interact with the pathway, but until now they had come up with only a handful of actors. Now on page 771 and in last month's Genes and Development, Kunxin Luo of Lawrence Berkeley National Laboratory in California and her colleagues describe how they stepped up the search by engineering cells to produce a molecular “hook” attached to Smad4. They used an antibody to the hook to reel in any protein complexes that included Smads—and landed both Sno and Ski.

    The catch has brought the two proteins in from the cold. Ski is named for the Sloan-Kettering Institute, where it was identified in the early 1980s as the culprit gene in a virus that causes tumors in chickens. The sequence of Sno (Ski-related NOvel gene), found a few years later, is very similar to the human version of Ski. But because scientists couldn't figure out how Ski and Sno caused abnormal cell growth, “Ski had been a backwater,” says Ed Stavnezer of Case Western Reserve University in Cleveland, who first identified the gene. “It had never been tied to a major pathway that people could latch onto,” he says. No longer.

    Both Ski and a form of Sno called SnoN can block the action of Smad3 and Smad4, according to Luo's work as well as reports by Robert Weinberg and Harvey Lodish's team at the Massachusetts Institute of Technology's Whitehead Institute, which will be published in today's issue of Molecular Cell and in next week's Proceedings of the National Academy of Sciences (PNAS). Kohei Miyazono of the Japanese Foundation for Cancer Research in Tokyo and his colleagues have similar results in press at the Journal of Biological Chemistry. Meanwhile, Estela Medrano of Baylor College of Medicine in Houston, in collaboration with Stavnezer's group, attacked the problem from the other side, seeking proteins that bind to Ski, and found the Smads, in work presented at a recent meeting and now under review.

    Although their results differ slightly, all four groups found that when they forced a cell to produce extra Ski, it ignored the growth-slowing signal. The MIT group and Luo's team also found that high levels of SnoN have a similar effect. The link seems to explain Sno and Ski's tumor-causing propensities, says Luo: “If the balance gets tipped the wrong way, you get cancer.”

    Scientists are still puzzling out the mechanics, but it seems that Ski and Sno interact in a feedback loop with the TGF-β pathway. The initial TGF-β signal boosts levels of Smad3 in the nucleus—which in turn degrades Sno, according to Luo and her colleagues. (Lodish and Weinberg also report in next week's PNAS that the TGF-β signal degrades both Sno and Ski.) Lowered levels of Sno evidently allow the Smad complex to turn on its target genes, so that the TGF-β signal gets through. But Luo found that two hours after the cell receives the signal, the level of Sno increases once more, to above its original levels. She thinks the extra Sno helps shut off the Smads so TGF-β's red light isn't stuck on indefinitely.

    Such modulating factors “make a lot of sense,” says Joan Massagué of the Memorial Sloan-Kettering Cancer Center in New York City. One would expect several layers of control over a pathway that “is involved in every aspect of life and death in virtually every group of organisms,” he says.

    Indeed, the importance of the find may go beyond cancer. In addition to playing a growth-slowing role, TGF-β can act as a green light to certain cells, encouraging growth and differentiation. Ski and Sno may also affect these TGF-β signals, says Luo. Ski is known to affect muscle development: Mice with extra copies of Ski turn into “Arnold Schwarzenegger mice,” says Stavnezer, whereas those missing the molecule have underdeveloped muscles.

    Stavnezer is thrilled that his long-ago discovery has found a place in the sun. “There have been six labs in the world that worked on Ski,” he says. “I guarantee there will be more now.”

  15. NOBEL PRIZES

    Protein ZIP Codes Make Nobel Journey

    1. Michael Hagmann

    Truly great discoveries often appear obvious in hindsight. So do some Nobel prizes, including this year's Nobel Prize in Physiology or Medicine, awarded to the German-born cell biologist Günter Blobel for his insights into how the cell uses a ZIP code system of sorts to deliver thousands of proteins to various addresses within the cell.

    Several recent Nobel prizes for medicine have stirred controversy; in 1997, for example, the prize recognized what many say is still a highly disputed theory that proteins called prions can act as infectious agents (see p. 660). But the current prize decision seems to have been a safe bet. Randy Schekman, a cell biologist at the University of California, Berkeley, says, “Almost all of us in the field expected this; it was long overdue.” James Rothman, a cell biologist at the Memorial Sloan-Kettering Cancer Center in New York, agrees, saying that “Blobel's contributions about proteins encoding their own fate in the cell were truly monumental in scope.” He adds that Blobel's concept itself seemed so obvious—“it left you with a feeling of utter simplicity”—that some biologists resisted it at first.

    Just as people try to organize their belongings by devising filing systems, cells have to sort newly synthesized proteins and send them wherever they are needed: into different internal compartments called organelles or even out of the cell altogether. In the 1960s, George Palade of The Rockefeller University in New York City had found that proteins destined to be secreted pass through a sort of relay station called the endoplasmic reticulum (ER), a vast folded membrane system that looks like a deflated beach ball—a discovery that helped earn him a Nobel prize in 1974. But no one had a clue about the inner workings of this protein-sorting machinery. “Cellular transport really was something of a black box,” says Wilhelm Stoffel, a molecular neurobiologist at the University of Cologne in Germany.

    Blobel, who joined Palade as a postdoc in the late 1960s and has stayed at Rockefeller ever since, was captivated by this protein secretion puzzle. In 1971, together with David Sabatini, now at New York University, he formulated a simple model of how cells regulate their protein traffic: The first few amino acids in a nascent protein chain serve as an address tag that tells the cell whether or not the protein is destined for secretion and hence for import into the ER. “At first it was just a wonderful idea; it was quite a bold thing to say because nothing hinted at a signal sequence. But it was by far the best thing we could come up with,” recalls Blobel.

    Only a year later a group at the Medical Research Council Laboratory in Cambridge, U.K., led by César Milstein (yet another Nobel laureate, who won the prize in 1984 for monoclonal antibody technology) found the first hints of signal sequences in one of the protein chains of antibodies. The form of the protein that had been secreted into the bloodstream, these researchers found, was a little shorter than the one that was still within cells, suggesting to Blobel that a signal sequence originally present on the protein had been trimmed away by the time it was secreted. In the following years Blobel developed a cell-free system that mimicked the cell's protein sorting pathway, so that he could identify the molecular players.

    Finally, in 1975, he succeeded in deciphering the first signal sequence. At the same time he expanded the original hypothesis by proposing that the ER membrane contains a protein channel through which the proteins to be secreted sneak into the ER. Rothman, who was a postdoc at the time, still vividly remembers the “excitement in the field, when you realize biology will never be the same again.”

    Blobel and his colleagues then went on to pin down the various parts of the ER export system. In the early 1980s, the work culminated in their discovery both of the so-called “signal recognition protein” (SRP), which reads the ER ZIP code by binding to it in the cytoplasm, and of the receptor on ER membranes to which the complex of SRP and nascent protein chain then docks. At the same time Blobel and others showed that similar ZIP codes also serve to guide proteins to other cell organelles such as mitochondria, the cellular power plants, and chloroplasts, the site of photosynthesis in plant cells. “It's always variations on the same theme: various signals, SRPs, and docking receptors” for different organelles, Blobel explains. Finally, in the series of experiments, done in the early 1990s, that Blobel says he's most proud of, his team demonstrated the existence of the long-elusive ER channel.

    Together, say many colleagues, these studies laid the foundations for modern cell biology. “Blobel was the first one to make cellular biology molecular and come up with mechanisms; before that it was merely descriptive,” says Kai Simons of the European Molecular Biology Laboratory in Heidelberg, Germany. Blobel's ideas have also shed light on diseases such as familial hypercholesterolemia and lysosomal storage disorders, which result from errors in the signals or the transport machinery. And protein signals have become a crucial tool for researchers who genetically modify bacteria, plants, and animals to produce drugs. By adding a specific tag to the desired proteins, genetic engineers can, for instance, mark them for secretion, making them much easier to harvest.

    But although Blobel's work is standard textbook knowledge these days, protein ZIP codes have seen rougher times. “People didn't like the idea of a signal [sequence] even as late as in the 1980s. Especially the proposed channel was a lightning rod for the opposition; some of them got very angry,” Blobel says, adding that only when his team could show that the channel really existed did the tide turn for good. The idea was too obvious, says Rothman: “Often biologists think nature can't be that simple.”

  16. NOBEL PRIZES

    Theory Leads to Particles and Prize

    1. Alexander Hellemans*
    1. Alexander Hellemans is a science writer in Naples, Italy.

    In the world of subatomic particles and forces, a good map goes a long way. Electroweak theory, a key part of particle physicists' theoretical map, keeps leading them to new particles—and to Nobel prizes. The most recent of these, the 1999 Nobel Prize in Physics, was awarded last week to Gerardus 't Hooft and Martinus Veltman, two Dutch physicists who refined the theory so that it can be used to make precise calculations of particle masses and behaviors.

    Following the example of James Clerk Maxwell, who realized in the 1860s that electricity and magnetism are aspects of a single electromagnetic force, physicists yoked together a second pair of forces in the 1960s to create electroweak theory. It unites electromagnetism with the weak force, which operates within the atomic nucleus and is responsible for certain kinds of radioactive decay. Electroweak theory predicted new force-carrying particles called W+, W−, and Z0, but it was a frustrating tool at first. When used to calculate particle masses and behaviors, it tended to yield nonsensical answers—such as infinity.

    In 1969, Veltman, a professor at the University of Utrecht, and 't Hooft, then a doctoral student and now a professor at Utrecht, started working on a mathematical model that would make sense of electroweak interactions. By 1972 they had published the essentials of their method, which had an immediate and widespread effect on particle physics. “It was called a ‘revolution,’” remembers Veltman, who is now an emeritus professor at the University of Michigan, Ann Arbor. “Things started falling in place.”

    Karel Gaemers, a particle physicist at the University of Amsterdam, says their main contribution to the field was the development of a technique for “renormalizing” the theory, which makes precise predictions for electroweak interactions possible. The technique does away with those pesky infinities, in part by replacing certain theoretical values with numbers determined by experiments. Veltman and 't Hooft also made many other refinements to the theory, based, for example, on unusual symmetries. Although the predictions their methods yield are not as precise as those of quantum electrodynamics—the theory that deals with electrons, positrons, photons, and their interactions—the electroweak theory comes quite close, says 't Hooft.

    Renormalization allowed precise predictions of the masses of the W and Z particles, for example, which were ultimately created and detected in 1983 at CERN, the European particle physics laboratory in Geneva. It also pointed to an approximate figure for the mass of the top quark, years before it was discovered in 1995 at the Fermi National Accelerator Laboratory near Chicago. “That was a great success of theory and experiment and calculational method—and 't Hooft and Veltman supplied the calculational method. … I think the prize was very well deserved,” says physicist Steven Weinberg of the University of Texas, Austin, who himself shared a 1979 Nobel prize for developing the original electroweak theory.

    Of all the particles predicted using 't Hooft and Veltman's methods, the only one still undetected is the infamously elusive Higgs particle. Physicists believe it ought to exist because without it in their equations, particles such as the W and Z should have no mass. Predicting its mass, however, is difficult: Although the particle interferes with many interactions, it does so only indirectly. 't Hooft and many other physicists hope that the Large Hadron Collider, expected to be completed by CERN in 2005, will capture this scarlet pimpernel of particles.

    If the Higgs is found, its mass may rock the foundations of existing theories. Should the mass turn out to be much heavier than the current prediction—already a hefty 100 gigaelectronvolts—it will indicate the existence of a “new physics,” says 't Hooft. “The situation will then become unpredictable. … We can expect new objects and unknown effects”—and a serious need for a new guide to the particle world.

  17. NOBEL PRIZES

    A Winning Flash Dance

    1. Robert F. Service

    Ahmed Zewail can finish an experiment faster than you can bat an eye. Over the past 10 years, the Egyptian-born chemical physicist, who works at the California Institute of Technology in Pasadena, has pioneered the use of ultrashort laser pulses to witness the dance of atoms as they knit and break chemical bonds. Last week, Zewail's freeze-frame view of reactions won him the 1999 Nobel Prize in Chemistry.

    At the molecular level, chemistry is breathlessly fast. Some reactions, such as the rusting of a nail, may seem sluggish, but that's because the individual molecules react only rarely. Once the reactants meet and hurdle an energy barrier, the making and breaking of bonds takes a mere 100 or so femtoseconds, or quadrillionths of a second. Before Zewail's work, few dreamed of ever seeing this speedy dance. Being able to do so, says University of Pennsylvania chemist Robin Hochstrasser, “caused a large number of chemists to think about chemical reactions in a different way, in real time.”

    The trick was coming up with an ultrafast camera capable of freezing the whir of molecules, much as a flash and a fast camera shutter can halt the blur of hummingbird wings in mid-flight. Fortunately for Zewail, in the late 1960s and 70s research groups around the world were developing lasers that generated shorter and shorter light pulses. A small tabletop device called a colliding-pulse mode-locked (CPM) laser—developed by Bell Labs researchers Charles Shank and Erich Ippen—proved to be the camera Zewail needed.

    “We recognized that if we could get the time resolution shorter than a vibration of atoms in a molecule, we could see bonds breaking,” says Zewail. The CPM laser and its successors generate a wide range of light frequencies and then emit them only during the brief moments when their wavelengths all march in lockstep, creating an intense pulse. The result is a flash as short as 7 femtoseconds, well within the time it takes the atoms in a molecule to vibrate back and forth.

    To capture the action, Zewail constructed an apparatus to feed reactant gases into a vacuum chamber and then used a CPM laser as the equivalent of a camera and flash. He set up his laser to fire pairs of ultrashort light pulses. The initial pulse—the flash—supplies the energy that the target molecules need to surpass the energy barrier and begin reacting. The second pulse, fired mere femtoseconds later, illuminates the reacting molecules, which either absorb the pulse or respond to it by fluorescing at wavelengths that depend on their configuration. By varying the time interval between the two pulses and recording the absorbed or emitted light, Zewail can track the chemical reaction from the starting molecules, through the intermediate states that result as bonds are stretched, broken, and rearranged, to the final products.
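
    The way varying the pump-probe delay maps out a reaction's time course can be illustrated with a short, purely hypothetical numerical sketch: a toy signal that decays exponentially after the pump pulse, sampled at a series of probe delays. The 200-femtosecond time constant, the exponential form, and the function name are assumptions chosen for illustration, not values or methods from Zewail's experiments.

```python
import math

# Toy pump-probe scan: the pump arrives at t = 0, and a hypothetical
# excited-reactant population decays with a 200 fs time constant
# (an arbitrary placeholder). The probe samples whatever remains
# at each chosen delay.
TAU_FS = 200.0

def reactant_signal(delay_fs):
    """Fraction of the excited reactant still present when the probe fires."""
    return math.exp(-delay_fs / TAU_FS) if delay_fs >= 0 else 0.0

# Stepping the delay in 50 fs increments traces the decay point by point,
# which is the essence of mapping a reaction with paired ultrashort pulses.
for delay in range(0, 601, 50):
    print(f"{delay:4d} fs  signal = {reactant_signal(delay):.3f}")
```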

    In his first experiments, in the late 1980s, Zewail and his colleagues watched as molecules of iodocyanide split into their component fragments in a reaction that took a mere 200 femtoseconds. “It was a wonderful set of experiments that had a big impact on chemistry,” says Hochstrasser. By witnessing the birth and death of molecules in just femtoseconds, “we have reached the end of the road: no chemical reactions take place faster than this,” noted the Royal Swedish Academy of Sciences in its award citation.

    Zewail's work initially focused on simple reactions of gaseous molecules and answered basic questions about reaction mechanisms, revealing, for example, that molecules containing two equivalent bonds break them one at a time instead of simultaneously. Since then his group and others around the world have pushed the technology to chronicle chemical changes in liquids and solids as well. The result is a whole new discipline of femtosecond science, which has yielded insights into everything from how plants capture sunlight in photosynthesis to how the human eye manages to see at night when the light is faint.

    This year's Nobel “was a wonderful choice and a well-deserved award,” says Paul Corkum, who heads femtosecond science research at the National Research Council in Ottawa, Canada. For his part, Zewail says he has been “overwhelmed” by the response generated by the announcement. Thousands of congratulatory messages have poured in from Egypt alone, he says, including one broadcast on Egyptian television by the country's president, Hosni Mubarak. Zewail, whose portrait already appears on two Egyptian stamps, says such recognition by your native country “is an honor that lasts forever.” Still, he says, it's not the same as recognition from scientific peers, epitomized by the Nobel prize: “You cannot match that.”

  18. NOBEL PRIZES

    A Prize for Economic Foresight

    1. Mark Sincell*
    1. Mark Sincell is a science writer in Houston.

    Robert Mundell did the work that won this year's Nobel Prize in Economic Sciences nearly 40 years ago. But evidence of what the Nobel prize committee called his “almost prophetic accuracy in terms of predicting the future development of international monetary arrangements” can be found today on every newspaper business page. Mundell's research, once considered esoteric and irrelevant, provides the framework for current understanding of international exchange rates and was used recently to establish the ground rules for the European Monetary Union.

    The world was a very different place when Mundell received his Ph.D. from the Massachusetts Institute of Technology in 1956. “After World War I, most [international] capital systems collapsed,” says Harvard University economist Ken Rogoff. From then until the early 1960s, a system of fixed exchange rates linked virtually all countries, and a mere trickle of funds passed between countries—compared to the estimated $2 trillion now exchanged on the international markets every day. “Capital flows were far less important,” says Princeton University economist Peter Kenen, so economists tended to ignore exchange rates when analyzing domestic economies.

    Perhaps it was his Canadian upbringing—Canada and the United States have always freely exchanged money—that made Mundell see things differently. Before his work, economists thought that governments could influence their domestic economies in two ways: by adjusting the money supply (monetary policy) or by changing spending priorities (fiscal policy). In papers written in the 1960s, when he was working at the International Monetary Fund, Mundell drew on Canada's experiments with both fixed and floating exchange rates to show how large capital flows limit governments' options.

    He theorized that when international capital is highly mobile and the exchange rate is fixed, the central bank loses control of the money supply and hence of domestic interest rates; fiscal policy becomes the only tool left for influencing the domestic economy. As a result, says Rogoff, “we now know that fixed exchange rates are unstable” when capital flows increase. Market pressures inevitably force countries to adopt either floating exchange rates or a common currency.

    The second possibility was, at the time, a startling one. In the late 1950s, currencies were almost synonymous with their parent countries. “The idea of France and Germany sharing a currency seemed very oddball at the time,” says Rogoff. But in a watershed 1961 paper Mundell outlined the criteria under which multiple countries could successfully merge their currencies.

    His work influenced generations of economists. “Anyone with an economics Ph.D. was cheerfully force-fed his ideas,” says Columbia University economist John McLaren. And they turned out to be dead on target. Throughout the 1990s, changes in the technology of trade and communication dramatically increased international capital flows, and most Western countries abandoned fixed exchange rates. Even more surprising, eleven European states are well on their way to fully instituting a common currency following the principles Mundell set forth four decades earlier.

    “He was far ahead of his time,” says Kenen. Almost 40 years later, the framework Mundell developed “is still a workhorse of modern international macroeconomics.”

  19. OLFACTION

    Following the Scent of Avian Olfaction

    1. David Malakoff

    It turns out Toucan Sam may not have been a liar after all. In the 1970s, the bright-billed cartoon spokesbird for Kellogg's Froot Loops cereal boasted in television ads that he could “follow his nose” to the cereal's “delicious fruit smell.” But amused biologists called Sam a sham: The conventional wisdom, based on studies of brain anatomy, held that most birds had little or no sense of smell.

    Since then, however, researchers have steadily pecked away at that idea. They've shown that even birds with relatively small olfactory bulbs—the brain tissue responsible for discriminating a sickening stench from a fragrant aroma—can not only sense odors, but also use them to find food, select prime nesting material, and navigate across vast stretches of unknown terrain. “We're realizing that avian olfaction isn't some strange minor quirk confined to a few species,” says behavioral ecologist Tim Guilford of Oxford University in the United Kingdom. Such findings not only show that olfaction plays a major role in the behaviors that birds need for survival, but could also have practical applications in conservation and farming.

    The once widespread notion that few birds could follow their beaks rested on studies of bird brains and the views of a few influential natural historians. Prominent 19th-century bird artist John James Audubon, for instance, popularized the misconception that vultures don't use smell to find the reeking carcasses that make up their dinner. He based that conclusion on an experiment in which he presented black vultures with covered and uncovered bodies; they flocked only to the flesh in plain sight. And early anatomists noticed that birds had smaller olfactory bulbs than mammals do and thus concluded that smell likely played little role in the life of birds.

    By the 1960s, however, such broad-brush findings were giving way to a more complicated picture. In widely cited work published in the late 1960s and early 70s, Betsy Bang, who was affiliated with the Woods Hole Marine Biological Laboratory in Massachusetts, measured the size of the olfactory bulb relative to overall brain size in 151 bird species. She found that whereas olfactory tissue took up as little as 3% of the brains of small forest-dwelling songbirds such as the black-capped chickadee, the bulbs accounted for up to 37% of the brains of some seabirds, such as the snowy petrel that patrols the South Polar seas. During the same period, ornithologist Ken Stager of the Los Angeles County Natural History Museum showed that the turkey vulture, unlike Audubon's black vultures, could find food by smell. Indeed, Stager noted that some clever engineers had harnessed the vulture's olfactory prowess to find leaks in natural gas pipelines: The sight of circling vultures could lead work crews to leaks, presumably because gas is “flavored” with ethyl mercaptan, a chemical that smells like carrion.

    Meanwhile, behavioral neuroscientist Bernice Wenzel of the University of California, Los Angeles, was finding that pigeons—which rank somewhere in the middle in Bang's bulb-size study—could detect even subtle scents. When the birds were placed in a box with a steady airstream, their heart and respiration rates would jump when Wenzel released various odors. “I was dimly aware that avian olfaction had been looked at by a few odd folks, but the issue was very much up in the air,” the retired researcher recalls. She and a procession of collaborators went on to study smell in other species. “Every bird we tested, regardless of bulb size, showed some reaction” to odors, Wenzel says.

    As the evidence that birds can smell began piling up, researchers began pinning down the role of olfaction in bird behavior. Homing pigeon aficionados, for instance, had had a long-standing controversy over whether the birds use olfactory clues to find their way over unfamiliar ground back to their coops. Over the last decade, however, researchers have repeatedly shown that pigeons whose sense of smell is blocked by wax or an anaesthetic placed in their nostrils take significantly longer to find home base—or fail completely. And last year Italian researchers, including Anna Gagliardo of the University of Pisa, showed that young birds with damaged olfactory tissues are unable to learn to navigate from visual landmarks alone. “Pigeons appear to be able to extrapolate a map of unfamiliar environments by sensing incoming odors from different wind directions,” says Guilford, though they probably also rely on visual guideposts closer to home.

    Smell also helps some birds build their nests, chemical ecologist Larry Clark of the U.S. Department of Agriculture's National Wildlife Research Center in Fort Collins, Colorado, discovered in the 1980s. He found that European starlings rely on smell to find the green plants they commonly weave into their nests—material that helps reduce potentially harmful microbe and parasite populations. He also found hints that just as the song centers in the brains of some male canaries shrink after the mating season, the olfactory bulbs of starlings atrophy when the nesting season is over.

    Exactly which odors starlings, pigeons, and other birds home in on remains mostly a mystery. But in 1993, a serious accident at sea led olfaction specialist Gabrielle Nevitt of the University of California, Davis, and colleagues to the discovery of at least one important cue for some seabirds. While Nevitt was aboard an antarctic research cruise, a storm dislodged a heavy tool box and sent it hurtling into her kidney. The injury confined the researcher to her bunk, giving her plenty of time to ponder how antarctic seabirds such as petrels and albatrosses, which have among the largest olfactory bulbs, locate small patches of shrimplike prey in the vast polar seas. Not only do the birds have few apparent landmarks to guide them, but the patches frequently move, making the task akin to finding a moving needle in a haystack.

    Building on studies by Wenzel and others, Nevitt believed the birds were homing in on a chemical signal, but no one had managed to identify the compound that piqued the birds' interest. Then, during a stop in Punta Arenas, Chile, a research team including atmospheric chemist Tim Bates of the National Oceanic and Atmospheric Administration's Pacific Marine Environmental Laboratory in Seattle, Washington, boarded the ship.

    Nevitt says Bates “was just being nice” in chatting with the injured interloper he discovered in the team's quarters. But Bates says he found Nevitt's olfaction problem interesting. “I told her that if I had to put a bet on [an oceanic] compound that had an odor, it would be dimethyl sulfide (DMS),” a gas given off by phytoplankton, microscopic plants that live in surface waters. He gave Nevitt a map of a DMS plume over Antarctica's Drake Passage, which showed that the compound concentrates over zones of upwelling and mixing, where the phytoplankton concentrate. “It changed the way I thought about the problem,” Nevitt recalls. “I realized the birds were navigating through an olfactory landscape,” complete with low-concentration valleys and DMS-rich mountain peaks. The tiny crustaceans eaten by the seabirds can also elevate DMS levels when they chow down on phytoplankton, providing a potentially solid food clue for the birds.

    Nevitt was eventually able to document that several kinds of petrels and prions, another type of seabird, home in on DMS-laced vegetable oil slicks more often than odorless control slicks. But the preliminary findings, published in Nature in 1995, raised new questions. How, for instance, do the birds follow the changing gradients of DMS in the often turbulent atmosphere, where odor plumes can become fragmented? One possibility, she says, is that the odor cue prompts the birds to execute a search pattern, such as a broad turn, just as some salmon automatically swim against the prevailing current upon encountering a desirable odor. Over the next year, Nevitt and ornithologist Henri Weimerskirch of France's CNRS research agency in Villiers will look for such response patterns as they use satellites to track snowy petrels and other antarctic seabirds on their foraging trips. They will also block some birds' sense of smell to see if that alters the foraging strategy.

    While petrels may follow their noses to food, chickens apparently call on olfaction to help them avoid eating bad-tasting insects. Guilford and colleagues have shown that chicks presented with bright, contrasting colors typical of insects that produce noxious odors won't reject the offering unless they can sense both odor and color. Such “multimodal” responses are also being studied by Lesley Rogers and colleagues at the University of New England in Armidale, Australia, who are using combinations of odors and colored beads. Related studies that monitor where olfactory stimuli are processed in the brain have also produced hints that chicks can develop “lateralized” olfaction, in which the right and left nostrils feed separate signals to the brain. In a kind of multitasking, cells in the left nostril “might be on the lookout for noxious odors, while the right is involved in something else,” she says.

    Other scientists are studying how chicken farmers might benefit from imprinting chicks on certain odors while they are still in the egg. Preliminary studies have shown that chicks exposed to odors in the final days of development—when tissue plugs melt out of the nostrils and the chick begins breathing air that seeps through the eggshell—are attracted to the same smells after hatching. If the findings hold up, adding familiar odors to food and coops could improve production by reducing the stress the birds experience when confronted with new settings and foods, notes poultry scientist Bryan Jones of the Roslin Institute in Midlothian, United Kingdom.

    And Nevitt speculates that olfaction studies might eventually influence conservation strategy too, by helping breeders of endangered birds provide the olfactory cues needed to get the young birds off to a good start. Rogers and others caution, however, that progress could be slow, because lab and field studies involving odors “are hellishly difficult to set up and control.” Toucan experts, for instance, say figuring out whether real toucans use fruit odors in foraging could take years. But Guilford is upbeat about the prospect of learning more about how birds use smell. “There is plenty of room for speculation,” he says, “and plenty more for experiments.”

  20. OLFACTION

    Salmon Follow Watery Odors Home

    1. Marcia Barinaga

    While the smell of fresh-baked bread may pull us irresistibly down unfamiliar streets until we stand at the bakery door, that's about as much as we humans ever rely on olfaction to guide our travels. But for some animals, olfactory homing is a matter of life and death. Recent work has shown that some birds depend heavily on their sense of smell to find food and to navigate (see p. 704). And salmon sniff their way back from ocean or lake to the streambed where they hatched, guided by an odor signature derived from the unique mix of elements such as plants, animals, and soils in their home stream and imprinted on their memory years earlier.

    The survival of a salmon population depends on the fish's ability to return to their birthplace to spawn, because many of their physical and behavioral traits have been selected over generations for the survival advantage they provide in that particular stream. Now neuroscientists and fisheries biologists are learning just how salmon form the olfactory memories that guide them home. They are uncovering the physiological changes that prepare young salmon for olfactory imprinting and are finding out when in the animals' life cycles those changes occur. They are also gleaning clues to the biochemical basis of imprinting.

    This work should help the management of salmon and their close relatives, though not necessarily of all fish that use olfactory homing. Some, such as lampreys, find spawning sites by sniffing out pheromones, an achievement that appears to be instinctive rather than learned.

    Nevertheless, salmon represent an important and frequently threatened species, and conservation managers are eager to put the new information to use. “If the mechanism of olfactory imprinting could be clarified, this would definitely contribute to the [conservation] of valuable salmon resources,” says fisheries biologist Hiroshi Ueda of Hokkaido University in Japan.

    On the Pacific coast of the United States alone, salmon have been lost from 40% of their one-time range, and stocks are threatened or endangered in another 27%. To save endangered populations, conservation managers need to be sure that hatchery-raised fish imprint properly, to minimize their straying to other streams. “We need to get more mechanistic about it and figure out what is actually occurring during the development of these fish that causes them to home,” says biologist Jeff Hard of the Northwest Fisheries Science Center in Seattle.

    The first demonstration that salmon migrate based on odors learned in their youth came in the mid-1970s from Arthur Hasler's team at the University of Wisconsin, Madison. Graduate student Allan Scholz exposed hatchery-raised coho salmon to one of two odorant chemicals, morpholine or phenethyl alcohol (PEA), tagged the fish, and released them into Lake Michigan. Although the smells on which the fish imprint in nature are probably mixtures of odorants, Scholz's experiment proved one chemical was enough to guide them. Two years later, when it was time for the salmon to return to their native stream to spawn, Scholz spiked one stream near the release site with morpholine and another with PEA. Of the returning fish the researchers subsequently recovered, more than 90% were found in the stream spiked with the chemical to which they had been exposed.

    The researchers then worked out when the fish were most susceptible to the imprinting. In the original experiments, Scholz had exposed the salmon to the odorant molecules when they were just over a year old, during a period known as smolting when the fish prepare to migrate and experience, among other things, a surge in thyroid hormone (TH). That surge proved crucial for imprinting. When Scholz gave younger fish a hormone to elevate their TH levels, those fish—which otherwise would not have been able to imprint—learned the odors.

    That suggested that coho salmon find their way home based on olfactory memories formed at smolting time. But it didn't explain the behavior of another salmon species, the sockeye. Sockeye spawn in tributaries of lakes; a few months after hatching, the young fry swim to the lake, where they spend a year before smolting and heading to sea. When they return, they home in not just on the lake, but on the very tributary where they hatched—something they shouldn't be able to do if they imprinted only at smolting time. “That suggested there might be more than one period of imprinting,” says Scholz, now at Eastern Washington University in Cheney.

    In recent years, Scholz’s group in Washington has identified that early period by working with a population of sockeye salmon in Washington’s Columbia River system. The team found high TH levels in young fish shortly after hatching, and fish exposed to PEA or morpholine during that time later chose to spawn in stream beds scented with the chemical to which they had been exposed.

    While that work showed that TH primes the fish for imprinting, it left open the question of how a smell leaves its mark on the fish's olfactory system. In the late 1980s, Andrew Dittman and Gabrielle Nevitt, then graduate students at the University of Washington (UW) in Seattle, joined with behavioral ecologist Tom Quinn, also at UW, to seek the answer. Nevitt, working in the lab of UW electrophysiologist Bill Moody, recorded the electrical activity of olfactory receptor neurons from the noses of PEA-imprinted fish as well as from fish that hadn't been exposed to PEA. She found that, compared to controls, the noses of PEA-imprinted fish contained a higher fraction of PEA-responsive neurons, and those neurons had a heightened sensitivity to the compound.

    In spite of the common view that all learning occurs in the brain, the discovery suggested that the fishes’ olfactory memories consisted at least partly of changed neural responses in their noses. But, notes Nevitt, now at the University of California (UC), Davis, this form of learning does not seem to be unique to salmon: A team at the Monell Chemical Senses Center in Philadelphia found in 1993 that repeated exposure of mice to certain odorant chemicals increased the sensitivity of their sensory neurons to those odorants. And Robyn Hudson, a behavioral neuroscientist at the National Autonomous University of Mexico, reported in 1995 that the olfactory neurons of baby rabbits have a heightened sensitivity to the odors of the foods their mothers ate while they were pregnant and nursing.

    While Nevitt was doing her studies, Dittman, working in the lab of pharmacologist Daniel Storm at UW, found a clue to what might be making salmon olfactory neurons more sensitive to the smell of home. When he used PEA to stimulate neurons from imprinted fish, he found that the cells made more cyclic GMP (cGMP) than those from nonimprinted fish. cGMP is a messenger molecule that helps transmit signals inside cells and might be influencing the responses of the olfactory neurons. But no one knows yet exactly how cGMP affects olfactory neurons, says Dittman, now a postdoc with John Ngai at UC Berkeley. To really understand the neural changes that underlie imprinting, Dittman wants to trace more completely how imprinting alters the signaling pathway by which an odorant binding to its olfactory receptor molecule causes the neuron to fire.

    In Ngai's group, he plans to address that issue. David Speca, a graduate student with Ngai, recently developed a means to identify the olfactory receptor that responds to a particular odorant. “That now allows us to look functionally for a PEA receptor,” says Dittman, “and then see how its expression or function is altered during imprinting.”

    If the approach works, it might yield information about the mechanism of imprinting that would provide a simple assay for telling when hatchery-raised fish have formed ample memories of their surroundings. That could be a big boon to those who manage salmon stocks and need to optimize the imprinting of hatchery fish so that they don't stray into the home beds of wild populations, threatening the wild gene pools. A gauge for imprinting would also help managers of captive breeding programs mounted as last-ditch efforts to save vanishing populations, by ensuring that the fish they release are properly imprinted. “Right now the only assay for imprinting is whether the fish come back 5 years later,” says UW's Quinn. “If there were some assay to tell whether the fish have imprinted or not, that would be very useful.” Indeed, for some endangered salmon populations it could be a matter of life and death.
