News this Week

Science 03 Jul 1998:
Vol. 281, Issue 5373, p. 16
1. SPENDING BILLS

U.S. R&D Budget Becomes Political Football

1. Eliot Marshall,
2. Andrew Lawler

Research funding stands at ground zero this year in a bitter fight over who will control the federal budget. So far, the signs for science look good. Last week, for example, key House committees voted a 9.1% increase for the National Institutes of Health (NIH) and an 8.3% increase for research at the National Science Foundation (NSF). But those gains may prove hard to hold on to as a broad struggle over tax cuts and domestic spending priorities plays out over the next few months.

The scene for this struggle was set in February, when President Clinton offered to boost research spending in part by tapping into a proposed tobacco tax. Clinton promised big increases for most civilian science programs, including an 8.4% raise for NIH. But the tax proposal collapsed last month after conservatives attacked the idea and tobacco companies launched a negative advertising blitz. The loss will leave legislators less room to maneuver at a time when Republican leaders are pushing for a large tax cut and taking a hard line on social programs.

As a result, R&D is caught in the crossfire between the White House and conservatives. The NIH funding bill dramatically reflects this contest. It was approved along partisan lines on 23 June by the appropriations subcommittee on labor, health and human services, and education chaired by Representative John Porter (R-IL). The bottom line is a $1.2 billion boost in NIH's budget, now $13.6 billion, an increase that has been praised by champions of biomedical research. “Outstanding,” said David Moore, a spokesperson for the Ad Hoc Group for Medical Research Funding. “Spectacular,” said Ralph Yount, president of the Federation of American Societies for Experimental Biology.

But other programs in the bill were clobbered. The committee sliced some $2 billion, for example, from summer jobs programs and energy subsidies for low-income families. As a result, Democrat David Obey of Wisconsin, a longtime friend of biomedical research, denounced the measure. The bill, Obey said as he cast his dissenting vote, reflects “a renewed sense of confrontation” from the “hard right wing” of the Republican leadership. “Republicans … decided to pay for NIH and other increases out of the hides of the most defenseless and vulnerable—minority youth and … families and seniors in poverty,” he thundered. The bill also alienated moderate Republicans and science advocates like Representative Sherry Boehlert (NY), who said he couldn't support it.

The reception from the White House was no more cordial. President Clinton immediately threatened to veto the bill, which he called “arbitrary” and “extreme.” One congressional staffer bemoaned the choices this standoff has created: “This is great—it's science versus poor people.” And a White House official made clear who would win such a standoff: “This Administration prefers poor people to scientists when it comes to the hard choices.” Science advocates are left wondering what to do.
Neither Moore nor Yount wanted to discuss the $2 billion in cuts that made the NIH raise possible. “We didn't choose to make those cuts,” says Moore, who notes that “there needs to be additional money pumped into the system.” Yount says: “We have no expertise [on social programs]. Our policy is to make the case for NIH.” One congressional aide says, however, that science lobbyists may have to get involved: “If you want big money, you've got to play with the big boys.” Other lawmakers and science leaders agree. House Speaker Newt Gingrich (R-GA), who says he backs doubling R&D within 8 to 10 years, told Science recently that scientists “need to reach out to the general public.” Rutgers University President Francis Lawrence, joining a group of senators (see sidebar), warned last week that “we have to convince people we're not just [another mouth] at the funding trough.”

Congress could avoid a fiscal train wreck by passing a budget resolution that gives appropriators more to spend in 1999, or it could negotiate with the White House to make use of the growing budget surplus. But the White House has said surpluses should be used to shore up Social Security, while key Republican leaders favor tax cuts. Both sides could also agree to budget gimmicks—such as one tried unsuccessfully last week to make spending on the year 2000 computer problem an emergency appropriation that wouldn't require cuts to other programs. Or they could use accounting changes to make more money available in 1999, as the House Appropriations Committee did last week in approving a funding bill that covers NSF, NASA, the Environmental Protection Agency, and several other programs. (The committee earmarked for science the additional revenue expected to be generated by raising a ceiling for federal housing loans—adding $70 million to a previously planned $200 million increase for NSF's $2.5 billion research program and $10 million for research at the Veterans Administration.)

Conservatives oppose such gimmicks, but even they may be desperate for a way out of the budget impasse by September as the November elections concentrate the minds of all politicians. “This is like a basketball game—it will all be decided in the last 4 minutes,” says one White House aide. Referring to the president's power to veto any spending bill, the aide adds, “and Clinton holds the passes for these guys to go home.”

2. SPENDING BILLS

Senate Bill Calls for More Spending

1. Andrew Lawler

While 1999 funding for R&D programs is caught up in intense and immediate partisan rivalries in the House (see main text), a group of Senate Democrats and Republicans joined forces last week to make a joint plea for the long-term health of science and technology. Led by Senator Bill Frist (R-TN), the coalition introduced a bill that would boost civilian R&D spending from $38 billion in 1999 to $68 billion in 2010. That amounts to an annual increase of 2.5% above inflation for the next 12 years.

Frist's legislation, which has the strong backing of many universities and research organizations, would not obligate Congress to spend more dollars on R&D. And it requests slightly less than a plan proposed last fall by Senators Phil Gramm (R-TX) and Joe Lieberman (D-CT), which called for a doubling of R&D in a decade (Science, 31 October 1997, p. 796). But backers say it raises the profile of science and technology among politicians and should help the R&D community hone a unified message. The bill, entitled the Federal Research Investment Act, would also require the National Academy of Sciences to study criteria for determining the success or failure of government R&D efforts.

Frist declined to say when the bill will be taken up by the Senate Commerce science, technology, and space subcommittee he chairs. That caution may be warranted. After nearly 8 months of effort, supporters of the Gramm-Lieberman measure won only 19 co-sponsors out of 100 U.S. senators. “Part of the problem is the [science] community hasn't made this one of its priorities,” Lieberman complained at a 10 June meeting sponsored by the Council on Competitiveness.

3. GLOBAL CHANGE

Signs of Past Collapse Beneath Antarctic Ice

1. Richard A. Kerr

Glaciologists have long been casting a worried eye on the West Antarctic ice sheet (WAIS). Its bed is below sea level, which in theory makes it far less stable than the larger East Antarctic ice sheet. And the western sheet is plenty big. If it melted away in a greenhouse-warmed world, it would raise all the world's oceans by 5 meters. Your favorite beach would be underwater—as would New Orleans, Miami, and Bangkok. Now the worries may deepen. A paper in this issue of Science (p. 82) confirms suspicions that in the recent geologic past, at a time perhaps not much warmer than today, the WAIS wasted away to a scrap and flooded the world's coasts.

That implication comes from holes drilled through kilometer-thick ice near the edge of the ice sheet. Reed Scherer and his colleagues at Uppsala University in Sweden and Slawek Tulaczyk and his colleagues at the California Institute of Technology in Pasadena report that the muddy bed of the ice sheet yielded fossils of microscopic marine plants along with isotopes showing that the fossils were deposited under open waters. The age of the fossils shows that the ice was gone, making way for open ocean, sometime in the last 1.3 million years, presumably during a warm period between ice ages, like the present. “Can this ice sheet change a lot?” asks glaciologist Richard Alley of Pennsylvania State University, University Park. The answer, he says, is yes: “It is a high-impact, low-probability event, but it could happen.”

Scherer's new analysis backs up a claim he made 8 years ago, after the first hole was drilled through the thin edge of the sheet 700 kilometers inland from the open sea. Scherer had sorted through the microscopic remains of diatoms—single-celled plants that grow in the ocean's sunlit surface waters—in the mud from beneath the ice. He found mostly diatoms that lived in the open sea more than 5 million years ago, when a cooling climate first fostered the growth of the WAIS. But there was also a smattering of species that appeared in Antarctic waters more recently, since 1.3 million years ago. Scherer took their presence as evidence that the ice had retreated at least 700 kilometers sometime within the past 1.3 million years. And as glaciologist Robert Bindschadler of NASA's Goddard Space Flight Center in Greenbelt, Maryland, points out, after a retreat of that scale, “there wouldn't be much room left for an ice sheet.”

Other researchers pointed out a loose end in the claim: The diatom fossils might have blown onto the ice sheet from marine sediments exposed on land and then—through crevasses and ice flow—gotten carried down to the base of the ice. To rule out that possibility, Scherer and his colleagues have now analyzed sediments from the bottom of nine holes spread over 10 kilometers of the ice sheet. Four of them had young, marine diatoms. These sediments had none of the Antarctic lake diatoms that would accompany marine diatoms if they had been carried to the base of Antarctic ice, which implies a different source for the marine diatoms.

The diatom-containing sediments were also the only ones that contained significant amounts of the radioactive isotope beryllium-10. Beryllium-10 is a hallmark of sediments recently deposited beneath an open sea, says Scherer. Made in the atmosphere by cosmic rays, it attaches to particles in seawater that sink to the bottom; far too little beryllium-10 was found in the overlying ice for windblown beryllium to have been the source. Scherer's analysis of the diatoms and beryllium makes it “highly, highly unlikely there is any windblown” contribution to the samples, says Alley. “I feel that Scherer has addressed almost all the criticisms,” adds diatom specialist John Barron of the U.S. Geological Survey in Menlo Park, California.

From the diatom species in the sediment, Scherer argues that the area was ice-free and underwater as recently as the last 600,000 years. The most likely time, he adds, might be the brief but exceptionally warm interval between ice ages 400,000 years ago. But Barron doesn't think the diatom species allow the retreat of the ice to be pinned down any more precisely than sometime in the last 1.3 million years.

Whatever the exact date, the recent collapse of the WAIS is no longer in doubt. Now the question is when the WAIS might disintegrate again as the world warms—and how rapidly it might flood low-lying coasts.

Glaciologist Johannes Weertman of Northwestern University put a scare into the field 25 years ago when he argued that the ice sheet, sitting on a concave bed that is below sea level and fringed with floating ice shelves, should be prone to collapse rapidly if the climate warms. He explained that even a slight warming-induced retreat of the ice's grounding line—where it begins to float off the bottom—will move the grounding line into thicker ice. The thicker the ice, the faster it flows outward and therefore the faster it thins. The faster it thins, the sooner it floats and moves the grounding line even farther inward. Such an accelerating retreat could consume WAIS in a matter of a century or two, Weertman argued. The ice sheet retains a modicum of stability, researchers came to believe, only because its ice shelves are wedged into semi-enclosed embayments like the Ross Sea.

Researchers have relaxed a bit since then as they have come to appreciate that spotty resistance along the ice sheet's bed is also helping to hold it together. But Scherer's finding comes on top of some more alarming recent predictions. Staff scientist Michael Oppenheimer of the Environmental Defense Fund in New York City recently reviewed the question of WAIS stability (Nature, 28 May, p. 325) and concluded from the ice sheet's somewhat erratic behavior of late that its most likely fate is disintegration during the next 500 to 700 years, greatly accelerating sea-level rise beginning in the 22nd century. If that scenario comes to pass, it will be small consolation to Florida landowners to know that it has all happened before.

4. SOLAR PHYSICS

Earth to SOHO, Come In Please

1. Alexander Hellemans
1. Alexander Hellemans is a writer in Naples, Italy.

Controllers have lost contact with one of the most productive solar astronomy satellites ever. While controllers were putting SOHO—the Solar and Heliospheric Observatory—through routine maneuvers on Wednesday, 24 June, a safeguard program kicked in unexpectedly, apparently sending the craft into a spin. The craft's high-gain communications antenna is no longer pointed toward Earth. Although communication should still be possible through two omnidirectional low-gain antennas, “so far the baby does not talk back to us,” says Franco Bonacina, a spokesperson for the European Space Agency (ESA) in Paris.

SOHO, a joint NASA-ESA project, was launched in December 1995 and has since been monitoring the sun with 11 different instruments from a vantage point 1.5 million kilometers sunward from Earth. The $1 billion mission has gathered data on everything from the sun's internal structure (Science, 26 June, p. 2047) to outbursts of gas from the sun's atmosphere, called coronal mass ejections. SOHO's success persuaded planners to extend its operations—originally meant to end last spring—through 2003, to allow the spacecraft to observe the sun as its 11-year cycle of activity peaks. “The next couple of years would have been a different mission, because the sun is a different sun,” says Bernhard Fleck, ESA Deputy Project Scientist for SOHO at NASA's Goddard Space Flight Center in Greenbelt, Maryland. As a result, last week's mishap is “potentially a tremendous loss,” says Cambridge University's Douglas Gough, co-investigator on three experiments studying solar oscillations, which hold clues to the sun's structure and motions.

The crisis began when controllers at Goddard began a maintenance operation for the spacecraft's orientation system, which spins reaction wheels to rotate the craft. These reaction wheels often accumulate momentum during corrections, and NASA spokesperson Bill Steigerwald explains that the technicians fired thrusters to hold the craft steady while the reaction wheels were slowed. The craft then suddenly entered the “emergency sun reacquisition mode,” which automatically fires thrusters to point SOHO back toward the sun if it loses its bearings. “The telemetry stopped before the thrusters stopped firing. The reason is not clear at this time,” says Steigerwald. SOHO researchers now face a tense wait to see whether the mission can be saved. The satellite's solar panels are probably turned away from the sun now, draining the batteries and making communication impossible.
“There is a slight chance that the orbit may eventually take SOHO to a point where the solar panels are aimed at the sun,” says Steigerwald. This would power up the craft and perhaps enable controllers to make contact with it again. A team of specialists from ESA and from Matra Marconi Space, the builder of the craft, has gathered at Goddard to plan the rescue effort. “We haven't lost [SOHO] yet,” says Gough. “I'm still optimistic.”

5. GENOMICS

Canada Proposes $175 Million Effort

1. Wayne Kondro
1. Wayne Kondro is based in Ottawa.

Ottawa—The Medical Research Council (MRC) of Canada has pledged $17.5 million toward a 5-year, $175 million national genomics initiative aimed at reestablishing the country's global position in the rapidly growing field. The money would seed a project much more ambitious than the one terminated in 1996 as part of government-wide austerity moves. First, however, the project's backers must raise more than $100 million in additional funds from the federal government and nearly $50 million from business and other sources.

“The potential advantages to Canada are enormous,” says Marc LePage, MRC's director of business development. “The initiative will address major diseases that affect a lot of Canadians, while helping to train young researchers in promising new areas like bioinformatics. Also, there's an economic advantage from industrial spin-offs that come from the field. Canada will be taking a major step forward here.”

The initiative, called Genome Canada, would be the successor to the Canadian Genome Analysis and Technology program, created in 1992. Genome researchers have struggled to find support since its dissolution. “We've been thinking and talking while other people have been moving ahead,” says geneticist Lap-Chee Tsui, head of the Centre for Applied Genomics at the Toronto Hospital for Sick Children. “We definitely lost a lot of ground in the meantime.”

Tsui, who chaired an MRC-appointed Genome Task Force that crafted the proposal, says what's envisioned is a “structured” program focused on basic genomics research that can feed the growth of biotechnology companies across agriculture, forestry, the environment, and health care. “This time,” he says, “we should have bigger centers and a much more coordinated effort, more targeted instead of purely investigator-driven.”

The task force's report, which was adopted by MRC's governing board on 19 June, casts Genome Canada as a “virtual national institute” or consortium. In addition to managing a multidisciplinary research effort, it would help to broker early-stage and spin-off companies. The biggest component of the program, about $98 million, would be centers of research excellence in six fields: genome mapping and large-scale sequencing (with the goal of sequencing 25 to 50 megabases of DNA per year at three or four sequencing facilities); functional genomics; genotyping technologies; proteomics; bioinformatics; and medical, ethical, legal, and social issues.

Genome Canada joins a spate of proposals—like the Canadian Institutes of Health Research (Science, 8 May, p. 821)—rising like hot-air balloons in the suddenly balmy economic climate. But staying aloft will be a challenge. The magnitude of the projected federal contribution, some $108 million, will likely require a special Cabinet appropriation, and the Liberal government of Jean Chretien has warned repeatedly that talk of how to spend a sudden budget surplus is premature. The business plan also includes raising $28 million from pharmaceutical and biotechnology firms, $14 million from provincial governments, and $7 million from nonprofit organizations and foundations.

Although proponents admit they have set their sights high, they remain optimistic. They are hopeful that their efforts will be incorporated into a national biotechnology strategy expected to be issued this fall by Industry Minister John Manley. A report last fall by an outside panel of experts urged him to create such an initiative as a necessary condition of a flourishing biotechnology sector.
“Everybody seems to be very much aware of genomics and the fact that Canada wants to play a big role in biotechnology,” says Tom Hudson, assistant director of the Whitehead Center for Biomedical Research at the Massachusetts Institute of Technology and assistant professor at McGill University in Montreal. “We're thrilled [by MRC's announcement].”

6. SPACE

Remodeled ESA Backs Applications Projects

1. Helen Gavaghan
1. Helen Gavaghan is a writer in Hebden Bridge, U.K.

The European Space Agency's (ESA's) governing council last week approved development money for a new satellite-based navigation system, ESA's revamped Earth-observation program, an upgrade for the Ariane-5 rocket, and a new launcher for small satellites. The decisions demonstrate the agency's increasing focus on space applications since Antonio Rodotà took over as the agency's director-general last July (Science, 5 September 1997, p. 1426). But the council's apparent unanimity masks major disputes that loom over some programs. Some heavy politics are in store before the more senior council of European space ministers decides next year on whether to implement the programs, and how much to spend on them.

The Earth-observation program is one potential area of discord. The ESA council is requesting about $330 million per year for a new program of scientific and applications missions (Science, 16 January, p. 316). But some countries think this is excessive. “We approve of the ideas,” says Gérard Brachet, head of the French space agency CNES, “but they are aiming too high. We think [$190 million] per year is enough.” Roy Gibson, the agency's first director-general and now a member of two panels advising ESA on Earth observation, is sympathetic to some reduction, but he says a level of $190 million would threaten the science content.

Even larger battles loom over the proposed satellite navigation system. ESA is hoping to develop both ground-based and satellite-borne equipment that would, during a first phase, make use of signals from the U.S. Global Positioning System (GPS) and Russian Glonass satellites to provide precise position information across Europe. A second phase, slated for 2010, could be anything from a joint system with the United States and the Russians to an independent European system. France, however, is concerned that the United States might deny access to GPS signals in some circumstances and seems to favor a European solution, while the United Kingdom prefers transatlantic cooperation. “This is a decision that will be taken at prime ministerial level,” says Brachet.

As for the proposed lightweight launcher—a four-stage vehicle dubbed Vega that would loft a 700-kilogram satellite—Brachet argues that the projected launch cost of $20 million is too high. “The competition is with the East, and they are selling such launches for between $10 million and $12 million,” he says. Even a seemingly innocuous resolution on closer cooperation between ESA and the European Union may prove divisive, as some ESA members favor more EU input into space policy while others oppose it. European space politics are alive and well.

7. SCIENTIFIC COMMUNITY

Panel Says Some UFO Reports Worthy of Study

1. David Kestenbaum

On 8 January 1981, a man working in his yard in Trans-en-Provence, France, claims to have heard a low whistling sound and turned to see an ovoid object land in his garden. Thirty seconds later it rose and departed in the direction of a nearby forest, leaving a 2.4-meter diameter, ring-shaped imprint in the ground. The police and the government's Unidentified Aerospace Phenomena Study Group sampled the compacted soil and the damaged vegetation. Four labs analyzed the samples but reached no definitive conclusions as to what had happened.

The case may sound like an X-Files transcript, but it and other UFO tales got a serious 4-day hearing by nine senior physical scientists at a workshop late last year. In a report released this week, the panel concluded that some of the UFO events merited further scientific study (see www.jse.com/ufo_reports/Sturrock/toc.html). “Our feeling was [that] anything not explained is something science at some level ought to be interested in,” says Thomas Holzer, a geophysicist at the National Center for Atmospheric Research in Boulder, Colorado. Holzer was co-chair of the workshop, which was convened by Laurance S. Rockefeller.

For most scientists, the definitive word on UFOs came from a 1968 review sponsored by the U.S. Air Force and led by physicist Edward Condon.
The Condon report concluded that “further extensive study of UFOs probably cannot be justified in the expectation that science will be advanced thereby.” But after hearing reports from eight UFO investigators, the new panel decided that although there was no convincing evidence that extraterrestrial intelligence was involved in the incidents, some events might represent novel atmospheric or other phenomena that are worth looking into.

Kendrick Frazier, editor of The Skeptical Inquirer, worries that the report will unjustly legitimize UFO research. Some of the scientists who organized the workshop have a record of enthusiasm for these exotic topics, he says. One organizer, Robert Jahn, a physicist at Princeton University, is well known for his experiments with psychokinesis. Peter Sturrock, a physicist at Stanford University who oversaw the effort, is president of the Society for Scientific Exploration, whose mission Sturrock describes as investigating topics such as “parapsychology and strange monsters,” which he feels are not adequately covered by mainstream science.

“Let me be clear: There is no justification for a crash program to look at unnatural phenomena,” says panel member Jay Melosh, a planetary scientist at the University of Arizona, Tucson. But panel co-chair Charles Tolbert, an astronomer at the University of Virginia, Charlottesville, notes that “meteorites were once considered to be a stupid idea. … People said, ‘Rocks can't fall out of the sky.’” Still, Tolbert says he doubts the sky harbors any alien spacecraft. That level of skepticism doesn't satisfy Bob Park, a physicist at the University of Maryland, College Park, who is writing a book about what he considers pseudoscience. “I think [investigating UFO reports] is just a total waste of time,” he says. “Calling in all the people who have seen strange things just gets you a roomful of strange people.”

8. EPIDEMIOLOGY

NIH Panel Revives EMF-Cancer Link

1. Jocelyn Kaiser

Breathing life into a moribund debate over whether power lines cause cancer, an advisory panel to the National Institutes of Health (NIH) last week concluded that electromagnetic fields (EMFs) are a potential human carcinogen. But regulatory bodies haven't yet called for new measures to reduce EMF exposure, and some panelists quickly sought to downplay their own report. “I don't think you could conclude there's a real problem with EMFs,” says vice chair Arnold Brown, dean emeritus of the University of Wisconsin Medical School in Madison.

Still, the panel lifted EMFs off the canvas—however briefly—after a one-two punch had knocked the controversial topic off the list of credible health threats. In 1996, a National Academy of Sciences panel found “no conclusive and consistent evidence” for harm from residential exposure to EMFs generated by power lines, appliances, and other sources. Then last year, a major National Cancer Institute (NCI) epidemiological study found no evidence of childhood leukemia from EMF exposure (Science, 4 July 1997, p. 29).

Even before the academy and NCI weighed in, however, Congress in 1992 had created a research program called RAPID, run by the NIH's National Institute of Environmental Health Sciences (NIEHS) and the Energy Department, to examine EMFs. The law required NIEHS to form the advisory panel to review RAPID, which has spent $66 million on studying the effects of EMFs on everything from gene expression to breast cancer on Long Island. Chaired by Michael Gallo, a toxicologist at the University of Medicine and Dentistry of New Jersey-Robert Wood Johnson Medical School in Piscataway, the 30-member panel used a more liberal standard than most U.S. bodies would in judging EMFs: It followed International Agency for Research on Cancer criteria, which allow a substance to be labeled a carcinogen based only on an association in a population, even in the absence of evidence linking a substance to tumors in lab animals.

The latest EMF indictment is not based on any hot new data. An NIEHS-commissioned analysis of pooled data from several population studies upheld earlier findings—namely, that children living near power lines appear to have a 56% increased risk of leukemia. And it considered other studies finding a similar leukemia risk in adults exposed to high levels of EMFs at utilities and other workplaces. The panel voted 19-9 to classify low-frequency EMFs as a “possible human carcinogen”; their 400-page report, set for release this month, calls the vote “a conservative, public health decision based on limited evidence.”

Experts are quick to point out that any cancer risk from EMFs is slight. After a 2-month public comment period on the report, NIEHS will calculate how many U.S. cancer cases might be due to EMFs, then send its final review on to Congress and other agencies. The panel did boldly come through with one recommendation: more research. If there is a link between EMFs and cancer, explains panelist Jerry Williams of Johns Hopkins University, “it's very small, very subtle, and very complex, and something we don't understand at any level.”

9. MEDICAL ETHICS

No Consensus on Rules for AIDS Vaccine Trials

1. Jon Cohen

Geneva, Switzerland—A meeting held here last week to try to set ethical ground rules for AIDS vaccine trials in poor countries almost reached boiling point when the participants grappled with a key question: If a vaccine is tested in a country that cannot afford anti-HIV drugs and volunteers become infected during the trial, should they be given state-of-the-art treatment? The answer could determine the ethical, financial, and scientific viability of AIDS vaccine tests. But for the 85 AIDS vaccine developers, ethicists, public health officials, lawyers, and activists from more than two dozen countries who tried to answer it, consensus proved elusive.

The meeting—an ad hoc advisory group to the United Nations' AIDS program, which will go on to recommend changes to international guidelines for all clinical trials—did reach agreement on some points. For example, the participants recommended ending the current requirement that a vaccine be tested first in the country where it is made, and they said trials should be more closely monitored to make sure that participants truly give their consent. These recommendations could lead to “major changes in the way trials are done,” said Barry Bloom, a researcher at Harvard University who heads the UNAIDS Vaccine Advisory Committee. But the central controversy over how to treat those who become infected—the question that led to the meeting being called in the first place—remains unresolved.

The problem it poses for researchers was highlighted at the meeting by Mary Lou Clements-Mann of Johns Hopkins University in Baltimore. She pointed out that vaccines rarely prevent infection; rather, they prevent or modify disease. Hence a critical measure of the success of an AIDS vaccine trial would be whether the vaccine lowers the “viral load”—the amount of HIV in the blood—in people who get infected. But if many of those who become infected soon begin taking potent anti-HIV drugs, says David Ho of the Aaron Diamond AIDS Research Center in New York City, “you're not going to be able to see anything.” Thus the widespread use of anti-HIV drugs could make it “impossible to design a scientifically valid [vaccine] trial,” warned Clements-Mann.

But Don Francis, head of the San Francisco-based biotech company VaxGen, which just last week launched the first efficacy trials of an AIDS vaccine in the United States, argued that not everyone would start treatment immediately, and because researchers will take blood from participants every 24 weeks or so, they should be able to make at least one viral-load measurement in many untreated people who become infected. If the vaccine had an effect, said Francis, it should be relatively easy to determine. Ho remained skeptical. “I think it's tough in a country like the United States,” he said. “Patients are going to be treated very quickly.”

This problem could, potentially, be avoided by carrying out trials in poor countries where the expensive cocktails of anti-HIV drugs are unavailable and unaffordable, but is that ethical? According to the two most influential guidelines today for clinical research—the Declaration of Helsinki and a subsequent document written by the Council for International Organizations of Medical Sciences—the answer appears to be no. Both state that “every patient—including those of a control group, if any—should be assured of the best proven diagnostic and therapeutic method.”

This principle was put to the test last year, when the Public Citizen's Health Research Group, an influential consumer-advocate organization based in Washington, D.C., slammed drug trials in developing countries that aimed to prevent mother-to-infant transmission of HIV. Public Citizen complained that the trials used placebos even though a U.S.-French study had already proved that an intensive regimen of the anti-HIV drug AZT would prevent transmission (Science, 16 May 1997, p. 1022). The researchers countered that they needed placebos in order to determine quickly whether a cheaper, simpler course of AZT—which would be more applicable in poor countries—might decrease transmission, too. (The dispute became moot when an interim analysis of one trial found that the shortened treatment worked.)

Public Citizen's attack set alarm bells ringing for AIDS vaccine researchers, because the same considerations should apply to people who become infected during vaccine trials. “I knew if we didn't deal with it in vaccines, we were going to get into the same mired mess,” said Bloom.

The majority of the participants at the Geneva meeting agreed with the practical argument that people who become infected during an AIDS vaccine trial should be offered the “highest attainable” treatment in their locale that can be sustained after the trial ends. To offer more, said Dwip Kitayoporn of Thailand's Mahidol University, would be “like leaving a Cadillac or Rolls Royce in our country, but no one can afford to drive it or even repair it.” Major Rubaramira Ruranga, an HIV-infected Ugandan who works at a research center in Kampala, warned that people may also sign up for vaccine trials just to get access to drugs. “We're going to create a safe haven for people who are going to be put on the trial,” Ruranga said. This, others noted, would violate the ethical principle that researchers must not “unduly influence” people to join trials.

But an impassioned minority rejected the idea that trial volunteers should be treated any differently from those in developed countries. Dirceu Greco, coordinator of an AIDS vaccine center in Brazil, worried that setting a lower standard for poor countries was a slippery slope. “When you put the level of ethics below the maximum, it's very easy to lower it more,” said Greco, whose sentiments were shared by several other Brazilians at the meeting.

Francis and several Thai scientists underscored that this debate is far from theoretical: They are now gearing up for a large trial of VaxGen's vaccine in Thailand before the end of the year. Neither the company nor the cash-strapped Thai government plans to give cutting-edge treatments to people who become infected. When Public Citizen's Peter Lurie was asked at the meeting whether the group would campaign against this trial, he declined to comment—one more critical question that the meeting left unanswered.

10. WILDLIFE BIOLOGY

Fungus May Drive Frog Genocide

1. Jocelyn Kaiser

The case is so frustrating it would make even Hercule Poirot sigh. Amphibian populations have been plummeting in the past 2 decades, but the perpetrator has left precious few clues to its identity. Time and again, scientists have visited woods filled with frog song just 3 or 4 years earlier, and “they're just gone,” says David Wake of the University of California, Berkeley—the frog corpses already decayed or eaten. Now, researchers have finally caught a killer in the act.

The accused is a new fungus that has turned up in 120 frogs and toads of 12 species in Australia and seven species in Panama, often during mass die-offs in relatively pristine areas. Fourteen scientists from Australia, the United States, the United Kingdom, and Canada will describe the fungus—from the phylum Chytridiomycota—in the 21 July Proceedings of the National Academy of Sciences. “I don't think this is the cause of amphibian declines,” says Allan Pessier of the National Zoo in Washington, D.C., who is part of a second team that has seen the same fungus in zoo populations of amphibians in the United States. Researchers haven't found any fungi when they've looked for them in frogs in California, for instance, where pesticides are the leading suspect in amphibian die-offs, says Gary Fellers of the University of California, Davis. But, adds Pessier, “in my opinion, this is a significant finding.”

After noticing spore casings on the skin of rainforest frogs that died in Queensland, Australia, in 1993, a team led by veterinary pathologist Lee Berger of James Cook University in Queensland homed in on a suspect: a new species of chytrid fungus, whose prior rap sheet had it infecting plants and insects, not vertebrates. Meanwhile, U.S. scientists had found a similar fungus in frog corpses after a die-off in western Panama in January 1997. “This is the only thing the dead and dying frogs shared in common,” says veterinary pathologist D. Earl Green of the U.S. National Institutes of Health. The team has yet to isolate the fungus and prove it's the culprit, rather than something else on the skin. They are also unsure about the killer's modus operandi—whether it exudes a lethal toxin or suffocates frogs by clogging their skin pores, through which they breathe.

Also a mystery is just how the fungus turned up on two far-flung continents in such a short time. One unsettling theory is that researchers traveling between Australia and Central America carried it with them on their boots. Another is that the fungus had been lurking in both hemispheres but didn't start killing frogs until after they were weakened by something else—such as UV light coming through the thinning ozone layer, or pesticides. One way to sort this out is to examine the fungal DNA to establish the phylogenetic relationship among isolates.

The DNA studies will also help determine how fast the fungus might be country hopping. For example, chytrid may have spread to Panama from Costa Rica, where in 1988 half the 40 amphibian species on a Monteverde ridge vanished. Although the detective work is far from finished, says team member Peter Daszak of Kingston University in the U.K., “what we've got for the first time is real evidence—dead bodies.”

11. ARCHAEOLOGY

Eight Millennia of Footwear Fashion

1. Heather Pringle
1. Heather Pringle is a writer in Vancouver, British Columbia.

From the bear-fur shoes that once graced the feet of Japanese samurai to the sleek platform sandals that strut down runways today, people have long garbed the humblest part of the human body—our feet—in high fashion. Now ancient sandals and slip-ons from central Missouri reveal that attention to fashion in footwear goes back 8000 years or more. On page 72, archaeological textile expert Jenna Kuttruff of Louisiana State University in Baton Rouge and her colleagues analyze and date a rare collection of 35 perishable fiber and leather shoes excavated decades ago from a Missouri cave.

One shoe is dated at more than 8000 years old, making it among the oldest in North America. And the shoes' complex weave and design indicate that early North Americans were just as fashion conscious as we are. “The complexity in design means that we had artists and craftspeople even then,” says Kathryn Jakes, a fiber specialist at Ohio State University in Columbus. Adds James Petersen, an archaeologist at the University of Vermont in Burlington: “In modern society we show our status and individuality through our clothing. But one would not have guessed this of prehistoric native North America 8300 years ago,” as social distinctions in personal effects such as jewelry don't generally appear until 4000 to 5000 years ago.

The shoes were unearthed in the 1950s by an amateur archaeologist, J. Mett Shippee, at Arnold Research Cave near Columbia, Missouri. Analyses of animal bones, stone tools, and ceramic fragments from the cave by Shippee and later by archaeologist Michael O'Brien of the University of Missouri, Columbia, revealed that the cave's visitors ranged from Archaic hunters and gatherers to later agricultural peoples. Taking shelter in the cave, generations of these early Americans lost or tossed away their worn shoes, which the cave's dryness preserved.

But no one suspected the shoes' age until O'Brien contacted Kuttruff, an expert on prehistoric clothing in the eastern United States. She noted that although regional historic accounts described Native Americans in mainly leather footwear, almost all the shoes were of plant fiber, suggesting that they were ancient. She and her colleagues carbon-dated fibers of seven of the most diverse shoes by accelerator mass spectrometry, an especially sensitive dating technique. They found that the shoes range in age from 1070 to as much as 8325 years old.

The ancient shoemakers relied largely on just one of several fiber-producing plants in the region: Eryngium yuccifolium, or rattlesnake master (named for the supposed antivenom properties of its leaves). The designs, however, range from sandals to several varieties of slip-ons and moccasins, with fibers twined, twisted, and interlaced in different and complex ways to form straps, soles, and heels. The sling-back and slip-on styles look contemporary enough to be sported on modern city streets.

Whether the distinctive footwear styles were created for different seasons or simply for fashion is far from clear. But if a larger sample of the styles could be found and dated, they could prove a real boon to research, says Tom Dillehay, an archaeologist at the University of Kentucky, Lexington. The varied styles “not only show footwear technology and its growth and change” but could also be used, along with more traditional markers such as tools and pottery, to help identify the age or cultural affiliation of sites.

The cache of footwear also offers an unusually personal glimpse of early Americans. Some sandals were trodden to holes and frugally repaired before being lost, while a child's leather moccasin was apparently kicked off almost new. One complete specimen was a perfect men's size 9½. It “makes you think about some person in prehistoric times wearing those sandals,” says Jakes. “Looking at the sandals, [you know] that someone used them.”

12. EVOLUTIONARY BIOLOGY

Successful Flies Make Love, Not War

1. Gretchen Vogel

Vancouver—Male rivalry may be costlier than expected. Male fruit flies, for example, have evolved a nasty chemical weapon in their duels over females: toxic semen that thwarts their rivals and harms their mates. Evolutionary biologists had thought that because males with the best genes win these battles, the benefits outweigh the costs of such tactics. A study reported here last week at the annual meeting of the Society for the Study of Evolution suggests that's not the case. When researchers forced fruit flies to be monogamous, allowing evolution to disarm the seminal fluid, they found that the monogamous population produced more offspring overall than control populations did.

Evolutionary biologists have theorized since the early 1970s that mating takes place on an evolutionary battlefield. In flies, rival males and the females they mate with seem to wage a three-way contest for reproductive advantage. After mating, a female fly stores about 500 sperm in internal pockets until her eggs are ready to be fertilized. But those sperm can be supplanted in later matings. To gain an edge over other Casanovas, a male fly laces his seminal fluid with about 60 proteins designed to boost the chances that his sperm will win out. Some depress the female's sex drive, decreasing her willingness to mate again. Some increase her short-term egg-laying rate, and some are toxic to other flies' sperm. Unfortunately, the female gets caught in the crossfire; the seminal fluid is also mildly toxic to her, so she evolves chemical defenses against it.

Two years ago, evolutionary biologist William Rice of the University of California, Santa Cruz, dramatized how male rivalry can put the sexes at odds when he used a trick of genetics to prevent females from evolving defenses to the male power plays. Unrestrained, the males became “supermales,” with very toxic seminal fluid and aggressive mating habits. They reaped larger numbers of offspring than their rivals but caused their mates to die young (Science, 17 May 1996, p. 953).

Now Brett Holland, a graduate student collaborating with Rice, has shown that sensitive nice-guy flies can evolve, too, when competitive pressure is removed. Holland imposed monogamy on the normally promiscuous insects by isolating male-female pairs in separate vials. He mixed the offspring from all the pairs and picked his next generation at random from the hatchlings. After 32 generations, the flies were on their way to disarmament. Compared with male progeny of control flies that had to compete for a single female, descendants of monogamous males had less toxic seminal fluid and did not harass females as much. Females, in turn, were less resistant to the males' seminal fluid and more receptive to their courtship proposals.

The move toward cooperation in a monogamous relationship was expected, Holland says, as “anything [a male] does to hurt her hurts himself.” But the researchers were less sure what the effect would be on the population as a whole. In fact, the cooperation paid off. The monogamous flies produced an average of 28% more viable offspring than controls, even when the disarmed males competed with each other.

The experiment is a clear, and clever, demonstration of the costs of conflict in evolution, says Michael Rose, an evolutionary biologist at the University of California, Irvine. Locke Rowe of the University of Toronto in Canada agrees: “It's similar to a real arms race, where competition drags the whole economy down.”

In his own talk, Rowe offered another example of this destructive path: water strider species belonging to the genus Rheumatobates. He found evidence of a gradual buildup of armaments in males of different species, including longer legs, spines, and antennae that look like muscular legs. These implements apparently give a male a reproductive advantage over other males by enabling him to hold down resistant females during mating, Rowe says.

Indeed, species that eschew such rivalry are relatively rare. “You need fairly special environmental conditions for monogamy to evolve,” Holland says, because any cheater—a male who mates with more than one female—will have more offspring than his monogamous brothers. Unless a male can guard his mate without harming her or geographical distance separates couples, he says, there is no truce in sight.

13. ENERGY RESEARCH

Competition Heats Up on the Road to Fusion

1. Andrew Lawler,
2. James Glanz

The demise of a full-scale ITER, a giant experimental reactor project, and the promise of alternative approaches prompt a congressional call for a divided field to set its priorities

Dividing your forces during battle is a risky maneuver. Yet that is what U.S. researchers and their government funders have long been doing in one of the most daunting scientific and technical campaigns ever: trying to replicate on Earth the power that fuels the sun. One arm of this effort aims at providing safe, clean, and cheap electric power sometime in the next century by caging a hot ionized gas in a magnetic field. The other arm focuses on the physics of nuclear weapons by creating tiny explosions in pellets of fuel crushed by lasers or other methods. That two-pronged effort paid off for a generation in the form of breakthrough science. But with the demise of a project to build a vast magnetic fusion reactor and a surge of activity in the military fusion effort, there is a push for the two approaches to fight for supremacy.

Last month the U.S. Congress asked the Department of Energy (DOE) and the fusion community to review the government's entire $650 million fusion research program and set priorities. The results could shape the direction of fusion research for decades to come, and could ultimately lead to a unified strategy for fusion power that would test, compare, and perhaps even combine technologies from both arms of the current effort. The review strikes fear in the hearts of some fusion researchers, who expect it to intensify competition for limited funds. But others welcome the effort, which both the House and Senate backed in the 1999 DOE spending bill. “I think we basically need a physical—a complete checkup,” says Dale Meade, head of advanced fusion concepts at the Princeton Plasma Physics Laboratory. Adds Gerold Yonas, vice president for pulsed-power technologies at Sandia National Laboratories in Albuquerque, New Mexico, “It's time for us to review and rethink fusion—we're all in this together. In the long run we are all better off figuring out a way to make a credible demonstration of fusion power.”

The review will reexamine a division blessed by a 1990 blue-ribbon panel that identified distinct goals for magnetic fusion and inertial confinement fusion, which relies on lasers, particle beams, or pulses of current. Magnetic fusion, funded as part of DOE's civilian energy research effort, is meant to work toward a commercial power plant. Inertial confinement fusion, funded mainly by the department's nuclear weapons program, is designed to support national security objectives.

Now, however, the centerpiece of the magnetic fusion program, a $10 billion international project called the International Thermonuclear Experimental Reactor (ITER), appears to be dead (Science, 2 January, p. 20). Political and technical problems drained congressional support from the project, and many scientists have also turned away from it.
At a recent meeting in Madison, Wisconsin, magnetic fusion researchers discussed the prospects for a cheaper version called “ITER Lite,” now being studied by the international partners. The researchers also considered an alternative approach based on a series of smaller machines that could be hosted by several nations and called for a sweeping review of magnetic fusion science. At the same time, technical breakthroughs in pulsed power at Sandia and construction of a massive $1.2 billion new laser facility at Lawrence Livermore National Laboratory in California have raised the profile of inertial confinement research.

These developments prompted lawmakers like Senator Pete Domenici (R-NM), who chairs the Senate DOE spending panel, to take another look at the diverse fusion program. With each approach clamoring for more dollars to fund research and build expensive facilities, some lawmakers see the split between military and civilian programs as an unwanted relic of the Cold War. “Fusion is divided into separate areas, but to politicians the dollars are going toward the same goal,” says one congressional staffer. Domenici's bill calls for a review of both civilian and defense technologies “prior to making decisions about next steps toward fusion energy.” The House version echoes that request.

Burning desire

The magnetic fusion community's troubles were highlighted last month when Representative John McDade (R-PA), who chairs the House panel that funds DOE, asked the department not to sign an international agreement extending the ITER collaboration. His panel also refused to provide the $11 million the Administration had requested in 1999 to continue U.S. work on the project, which includes Japanese, European, and Russian participation.

Now the community is scrambling to lay out a long-term plan for the future. At Madison, Michael Mauel of Columbia University, president of the University Fusion Association, which represents academic researchers, outlined a proposal—which he said has been encouraged by DOE—to emulate astrophysics and particle physics and hold intensive retreats to hash out fieldwide priorities. Such an approach “leaves a lot of blood on the floor,” warned Robert Rosner, a University of Chicago astrophysicist who was invited to explain the process. But once the arguments are over, he said, the field “heals itself” and presents a unified voice to lawmakers.

A series of technical presentations offered a preview of battles to come. A large contingent favored breaking the ITER mission into smaller, less costly experiments that could be scattered around the globe (see pp. 27 and 28). One cluster would study technology. The second would explore innovative and relatively diminutive machines that, like ITER, would cage plasma in a doughnut-shaped chamber called a tokamak. But while ITER was meant to be closer to a complete fusion reactor, the main goal of the smaller machines would be to study the physics of a short fusion burn—perhaps 10 seconds, during which a plasma would produce much more power than was used to heat it. Such a plasma might even ignite, meaning the burn would continue when the heating source is turned off. “Better, cheaper, faster fusion—that's a theme I think we ought to get behind,” said Richard Siemon of Los Alamos National Laboratory in New Mexico, summing up a widespread feeling.

But the reaction from those who still favor large projects was harsh. “The stupid man's approach to ignition” was how Charles Baker, the U.S. ITER Home Team Leader, characterized the smaller-is-better philosophy. Baker, an engineering professor at the University of California, San Diego, said that only lengthy burns of hundreds of seconds could illuminate the way a real power plant would work. Baker now supports ITER Lite—a roughly half-price version of the original with suitably reduced goals (Science, 8 May, p. 818)—and he found some allies among the international contingent.

Mitsuru Kikuchi of the Japan Atomic Energy Research Institute told a reporter that “we would not be interested” in a global, modular program of many modest devices. Although the U.S. and European programs are focused primarily on plasma science, Japanese researchers are generally more interested in technology—technology not addressed by the smaller ignition devices. On the other hand, a source within the European fusion program expressed support for “lower cost versions of ITER” as well as “alternative solutions.”

By the end of the weeklong meeting, the multiple-machine option had emerged as the favorite. “Most of the people were surprised that the community could come together as it did,” says Columbia University plasma physicist Gerald Navratil, who summarized reports from several breakout groups. Most resounding of all, said Navratil, was a consensus that the next step should feature burning plasmas—which could include ITER Lite or the smaller devices.

Whichever path is chosen, several scientists emphasized that ITER and tokamaks should not overshadow other innovative magnetic concepts that could lead in the long run to more attractive reactors. These concepts, which include a hybrid approach that crushes the plasma, as in inertial confinement, while bottling up the hot particles with magnetic fields, “are really exciting and are bringing young scientists into the field,” said Mauel. “These are the things we've been doing in the non-ITER part of the program for years.”

Real gain

While the magnetic community stumbles toward a long-term approach, newer types of fusion are striding ahead with bold plans. Construction is well under way at Livermore's National Ignition Facility (NIF), which will focus nearly 200 laser beams on a single target and will serve as a key facility for DOE's stockpile stewardship effort to maintain nuclear weapons without underground testing. Lab laser chief Mike Campbell already has an extensive wish list; on top of the $90 million the lab now spends on laser fusion, he says that some $35 million to $40 million annually will be needed through 2003 to prepare experiments on NIF. That figure would rise to $80 million to $100 million annually through 2010, he estimates, followed by a couple of billion dollars for a next-generation NIF.

Even if NIF reaches its goal of igniting a fusion target, no one argues that the approach could lead directly to a commercial fusion power plant. NIF's glass lasers, for example, are acceptable for defense purposes but impractical for a power plant because of their cost and low efficiency. While NIF would demonstrate controlled ignition, another technology, heavy-ion drivers, could take the place of the lasers in a future machine. These particle accelerators, which could prove far more efficient and flexible than lasers, are under development at Lawrence Berkeley National Laboratory in California as part of the department's civilian fusion effort. The funding, however, is modest—about $7 million a year.

Meanwhile, pulsed-power advocates at Sandia hope to advance their own long-term plan in light of recent breakthroughs in that technology. The method, whose energy applications are just starting to be explored, would crush fuel pellets with x-rays emitted from a plasma imploding after an array of wires is vaporized by a jolt of current. Although the concept has received much less study and funding than Livermore's approach, Sandia officials say pulsed power could prove far cheaper than lasers or ions, and they want to build a $400 million facility called the X-1 to prove it. “We have a reasonable prospect to produce real energy gain,” says Yonas.

But Campbell and Yonas will have trouble squeezing additional money out of the stockpile stewardship program, despite its massive $4-billion-a-year budget. Vic Reis, who heads DOE's defense programs, warns that any fusion efforts funded by his office must also help to keep the nuclear stockpile safe: “We can't do science for science's sake.” Adds David Crandall, who heads DOE's inertial confinement program: “The budget process over here is at least as fierce as it is on the [civilian] fusion energy side. Finding a few million dollars more over here is no easier.”

Some magnetic fusion researchers worry that behind the congressional request for a review is a move to shift money into inertial confinement programs at their expense. And civilian fusion program officials are quick to note that there's no money to spare in magnetic fusion. Given her tight budget, says Anne Davies, head of DOE's civilian fusion program, “they don't want to be over here.” Congressional aides deny such an intent. The idea, says one, is “to keep doing basic research on each technology until we are confident enough to choose one direction.” And inertial confinement researchers insist that greater cooperation among the various technologies would benefit all sides. “My goal is not to erode the program—it already has been eroded—but to build it up,” says Campbell. Adds Yonas: “You don't compete to kill each other, but for the better idea.”

DOE officials are now making plans for the review, which likely would be conducted by a panel of researchers from within and outside the fusion community. They hope to have it ready by December, in time to offer guidance to Congress as it considers the 2000 budget. “I'm not going to prejudge where we will come down,” says outgoing DOE Secretary Federico Peña. But one outcome could be a single fusion office, speculate some congressional aides and researchers, adding that such a change would not be easy given the long-standing separation between fusion researchers.

What seems certain is that the very process of a review will force all the players in the fusion drama—laser, pulsed-power, ion-driver, and magnetic researchers alike—to interact more closely. Both Davies and Crandall say there has been progress in building bridges between the cultures. But a great deal more blood may be shed before the fusion communities can become one.

14. ENERGY RESEARCH

Korea Brings U.S. Design to Life

1. Michael Baker
1. Michael Baker is a free-lance writer in Seoul.

Seoul—Researchers at the Princeton Plasma Physics Laboratory (PPPL) felt a mixture of pride and regret last fall when South Korean officials began ground preparations for the Korea Superconducting Tokamak Advanced Research (KSTAR) facility. The pride was for their role in preparing Korean scientists to take a big step toward understanding a key element in fusion reactions. And the regret was for one that got away: KSTAR is a scaled-down version of the $750 million Tokamak Physics Experiment (TPX), a fusion research facility slated for PPPL that the U.S. Congress killed in 1995 in an economy move.

Despite the country's worst economic crisis in more than 40 years, the Korean government is backing KSTAR because it believes the project will stimulate industrial R&D at the same time that it catapults Korea into the front ranks of fusion science. That's an attitude not widely held among politicians in the United States, where cuts are forcing officials to perform triage on a range of possible experiments (see main text).

“The project gives Korea's science and technology sector a chance to catch up with [wealthier] countries for a comparatively small amount of money,” says Lee Gyung-su, project coordinator at the Korea Basic Science Institute in Taejon, KSTAR's home some 150 kilometers south of the capital. Work on KSTAR itself will begin this fall, and Lee says that its completion in 2002 will not only boost Korea's fusion program but also demonstrate that the country has the ability to manufacture advanced research devices. KSTAR is designed to focus on techniques that “allow tokamaks to operate continually and at high performance,” says PPPL's George Neilson, physics coordinator for the U.S. team that has been working for the past 3 years on the KSTAR design.
It “is getting at a set of issues [that will] make the tokamak a better product.”

The key to achieving that steady-state goal lies in understanding the behavior of superheated plasmas—ionized particles at temperatures of millions of degrees—and minimizing the turbulence that robs a tokamak of its ability to confine heat. The plasma, which is created by an inductive current pulse and sustained by longer lasting sources like radio-frequency waves, is confined magnetically, and a more compact plasma will allow scientists to build less expensive, smaller tokamaks. KSTAR's goal is to confine the plasma for 300 seconds, “almost an eternity for tokamaks,” says Lee.

Longer pulses also lend themselves to superconducting magnets that don't heat up like conventional magnets and which, in theory, can run indefinitely. Superconducting materials have already been used in some stellarators, another torus-shaped machine that uses external coils rather than the inductive plasma current to apply a necessary twist to the magnetic fields. Building these magnets will be one of KSTAR's biggest challenges, and officials hope that the project's industrial partners will apply the knowledge gained to a range of commercial products.

Because KSTAR is not designed to achieve ignition, it will use hydrogen and deuterium as fuel instead of the more potent mixture of deuterium and tritium. That combination reduces the amount of shielding needed to protect users from radioactivity. The scaled-down version of TPX also comes with a smaller price tag, an important consideration given the country's current economic woes. Indeed, a government review this spring gave high marks to the project, delaying only the construction of three auxiliary heating systems and some diagnostics. The cuts, which won't interfere with early experiments, are expected to trim 20% from the $300 million cost of the baseline machine.

At that price, Korean officials think KSTAR is worth it. “There's a nice time window” in which KSTAR will be the only major advanced superconducting tokamak of its kind, says Lee. “For small money compared to what [others] spent, we can suddenly leap into major nation fusion research [status]. We can be in equal partnership.”

15. ENERGY RESEARCH

Magnetic Fusion Researchers Think Small

1. James Glanz

Magnetic fusion needs to take a page from the Wright Brothers, say some fusion researchers. They are arguing for small and cheap “burning plasma” experiments—the Kitty Hawk flights of fusion energy—in place of the massive, costly reactors that have been the mainstay of the government's fusion program, such as the defunct International Thermonuclear Experimental Reactor or even the scaled-down incarnation called ITER Lite.

At a recent fusion meeting in Madison, Wisconsin (see main text), Earl Marmar of the Massachusetts Institute of Technology (MIT) spoofed the big-machine approach with an imaginary dialogue in which Wilbur Wright grumbles that the proposed flight “sounds like a stunt” because it would not demonstrate practical mass transportation. He tells Orville that a delay is in order while they address materials issues. “Wood and cloth will never make it,” he frets. “In 10 to 20 years we'll be ready to build a single integrated step on the path to the jumbo jet.”

But Orville's approach won big at the Madison meeting. Participants described a range of small experiments that would focus on the physics of an energy-producing fusion plasma, leaving the technology needed for practical fusion power to a separate suite of devices scattered around the world. The “jumbo jet”—an actual power plant—would be considered much later. “I believe the technology is going to change tremendously in the next 50 years,” says Tim Luce of General Atomics (GA) in San Diego. “The physics is not going to change. So let's study the physics.” Agrees Gerald Navratil of Columbia University, one of the meeting's organizers, “The overwhelming consensus was that the exploration of a burning plasma really was the primary priority.”

A majority of the researchers, said Navratil, favored the multimachine approach. Like their outsized cousins, these smaller devices would be tokamaks, doughnut-shaped vessels threaded with magnetic fields that confine hot plasma. The devices would create those magnetic fields with electrical currents in chilled copper coils, rather than the bulky and expensive superconductors of ITER. Copper coils can deliver much more powerful fields than superconductors can—albeit for shorter periods.

Copper-coil devices rely on these high fields, rather than sheer size, to confine the plasma at densities and temperatures high enough for it to ignite. As a result, these machines can be dramatically smaller than ITER. The Ignitor, a project led by MIT's Bruno Coppi, will produce a field more than twice as strong as ITER's and shrink the doughnut width by a factor of 6, to 2.6 meters (see graphic). Coppi estimates that the project, parts of which are already being built in Italy with funding from the government there, could cost as little as $200 million—a factor of 50 less than ITER.

Since Coppi began work on the high-field concept more than 20 years ago, experiments on MIT's Alcator C-Mod tokamak have greatly strengthened the argument that such devices could reach ignition, say Dale Meade of the Princeton Plasma Physics Laboratory and others. The experiments have shown good plasma confinement at the high densities and fields that would be needed. Meade's own concept, called BPX-AT, would be slightly larger than Ignitor at 4 meters across and have somewhat lower fields, but would be based on similar principles. “Bruno deserves the credit for initiating research in this area,” says Meade.

Yet another concept for a modest-sized ignition device, this one not based on high fields, was presented in Madison by Luce. The proposal relies, in part, on experiments at GA's DIII-D tokamak that attempt to create many of the same “dimensionless parameters”—such as the ratio of the hot plasma's pressure to that of the confining magnetic field—that an ignition device might have. Because the experiments also show, in an indirect fashion, that plasma confinement rapidly improves if these parameters are held constant while such a machine is scaled up, Luce concludes that a tokamak about the size of the Joint European Torus in the United Kingdom, some 6 meters across, could ignite.
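As a point of reference (the definition is not spelled out in the article), the pressure ratio Luce invokes is the standard plasma beta, the plasma pressure divided by the magnetic pressure:

```latex
\beta \;=\; \frac{p_{\text{plasma}}}{p_{\text{magnetic}}} \;=\; \frac{p}{B^{2}/2\mu_{0}} \;=\; \frac{2\mu_{0}\,p}{B^{2}}
```

Holding beta and similar dimensionless ratios fixed while the machine is scaled up is what allows results from a smaller tokamak such as DIII-D to be extrapolated to an ignition-scale device.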

These machines can't produce all of the fusion conditions that would be needed for a practical reactor, such as very long ignition pulses. But proponents say they could reproduce the essential physics. “The analogy with the Wright Brothers is very good,” says Meade. The Kitty Hawk flight “is analogous to lighting this plasma and having it heat itself up and coming back down. The next step,” he adds with a chuckle, “was to go once around the field and land.”

16. BIOLOGICAL WEAPONS

Arms Control Enters the Biology Lab

1. Helen Gavaghan
1. Helen Gavaghan is a writer in Hebden Bridge, U.K.

An enforcement protocol of the bioweapons convention, now under negotiation, could affect some biotech firms and academic microbiologists

Some biotechnology companies and academic biology labs could soon find themselves caught in the highly charged world of arms control. Facilities and labs that handle potentially worrisome types of biological agents could be required to file reports detailing the materials they possess and submit to regular inspections. The reason: Negotiations that resumed last week in Geneva may finally put some teeth into the Biological and Toxin Weapons Convention (BTWC), an arms control agreement that is currently based entirely on trust; it has no mechanism to check whether signatories are complying.

Although the convention was negotiated in 1972, verification was not considered a high priority until recently, largely because few military experts considered biological weapons to be a major threat. But revelations about the extent of the former Soviet Union's biological weapons program, and recent discoveries by United Nations inspectors of Iraq's widespread efforts, have injected a sense of urgency into the discussions. Both the European Union and the Clinton Administration are now pushing for a compliance protocol to be negotiated for the BTWC by the end of this year. And, in a speech last month, U.S. Secretary of State Madeleine Albright underlined the message: “The [biological weapons convention] needs enforcement teeth if we are to have confidence it is being respected around the world.” Tibor Toth, the Hungarian ambassador chairing the talks in Geneva, told a meeting of industrialists, diplomats, and academics in Vienna in May, “It is not now a question of whether but of when and how.”

That prospect has come as a wake-up call to the biotech industry and microbiology researchers worldwide. Industry trade organizations, particularly in the United States, have long been aware of the issue, but individual companies and institutions are only now realizing they soon may become involved. “Until recently,” says Brad Roberts from the Institute for Defense Analyses in Washington, D.C., “the U.S. [biotech and pharmaceutical] industry hoped this issue would just go away.”

The negotiations that reopened last week in Geneva will determine how extensive and intrusive the verification provisions are likely to be. Some of the 158 countries that have signed the treaty are proposing that facilities judged to fall under the treaty should declare what potential biological warfare agents they possess, be subject to site visits to check the declaration, and be given a thorough inspection if a violation of the convention is suspected. “The idea is to force those countries running a biological weapons program to lie,” says Patrick Lamb, of the U.K.'s Foreign and Commonwealth Office. Once a country is forced to lie, he says, discrepancies are likely to show up between its declarations and intelligence reports, giving the United Nations grounds to act.

The critical issue is which facilities would have to make these declarations. Because many of the pathogens and toxins that could be used as weapons, as well as the equipment to manufacture them, also have civilian uses, hundreds of facilities in any country could potentially fall under the scope of the treaty. And, unlike the manufacture of nerve gases—which are prohibited by the chemical weapons convention—only small quantities of a biological agent are needed to produce an offensive weapon that multiplies in its host organism.


At the Vienna meeting, many diplomats and arms control specialists were talking of devising a combination of “triggers” that would bring no more than a few tens of facilities per country under the convention. These will probably include any facility that has worked on offensive or defensive biological weapons, any facility currently working on biological defense measures, and any facility working with the most stringent biocontainment standards of biosafety level 4. If such activities were used as stand-alone triggers, most signatory countries would only have a handful of facilities that needed to make declarations, and some may have none.

Other triggers under discussion include biosafety level 3, work with listed pathogens or toxins, expertise in genetic manipulation or creating aerosols of pathogens, and production microbiology. As stand-alone triggers these would in many countries force declarations from as many as several hundred facilities, many of which would be of no interest to the convention, says Graham Pearson, former head of the U.K. Chemical and Biological Defence Establishment at Porton Down. However, Pearson says, combinations of, say, biosafety level 3 and other triggers would be more discriminating and could be tailored to require no more than 10 or so facilities per nation. “The aim,” says Tony Phillips, who is providing technical advice to the British government, “is to catch the facilities most relevant to the treaty.”

Industry's response to these efforts to minimize the number of facilities affected by the treaty may come as a surprise to the diplomats, however. Gillian Woollett of the regulatory department of the U.S. Pharmaceutical Research and Manufacturers Association (PhRMA) says that if just a few companies are singled out to make declarations, their reputations could be tarnished. PhRMA, says Woollett, would prefer that a broad range of companies be required to make a declaration under the convention, but that these declarations be kept as short as possible.

PhRMA is, however, far more leery about opening up industrial labs to routine inspections to verify the declarations because of problems of commercial confidentiality. “By looking at the way equipment is linked, an expert can learn about our whole production process, or work out how easily prototype equipment could be scaled up. One bug casually wiped from a surface could tell you everything about the protein product produced, its promoters, and the environment in which it thrives,” says Woollett.

That sentiment seems to be widely shared in industry. Helmut Bachmayer, head of corporate biosafety at the Swiss drug giant Novartis International in Basel, for example, fears that the already heavily regulated pharmaceutical and biotech industries will run the risk of industrial espionage without making the world a safer place. “You cannot stop the bad guys if they intend to make biological weapons,” says Bachmayer. And following a May meeting held by the European Union in Brussels to try to get the biotech industry on board, Roger Wils from Janssen Pharmaceutica in Belgium said, “There were a lot of nice beautiful words, but I'm not sure anyone can guarantee confidentiality.” Wils says he left the meeting with the sense that the protocol would lay open his company's entire research program.

Although reluctant to submit to routine inspections, American and European industry accepts the need for investigations when a treaty violation is suspected. Such “challenge inspections” could be politically damaging for both the accuser and the accused, however, and much discussion currently focuses on the circumstances under which they could be initiated.

Concerns over confidentiality also worry those few academic researchers aware that their labs might fall under the scope of the protocol. It is still unclear how many labs will be affected, but it is almost certain that the activities of some academic institutions will trigger the need for a declaration. According to Otto Doblhoff of the Institute of Applied Microbiology at the University of Agricultural Sciences in Vienna, the large number of concurrent activities in a modern biology lab will also make visits and inspections more difficult for academic institutions than for production facilities. And in a research world of tight budgets and limited resources, completing the paperwork for a bioweapons compliance declaration could be an onerous burden on researchers. Nevertheless, Doblhoff believes a compliance protocol for the BTWC is essential and could be made workable.

Industry in the United States and Europe is beginning to accept the inevitability of the impending compliance protocol, however, and is becoming engaged in the negotiations on technical issues. Both PhRMA and the European Federation of Pharmaceutical Industry Associations are completing position papers. Lynn Klotz, a biotechnology and biobusiness specialist with the Federation of American Scientists, says the PhRMA paper is much less combative than its earlier statements. Klotz attributes this softened position to a series of White House-sponsored meetings where government and industry exchanged views. “At the first of these, there were maybe three industrialists and 30 White House staffers. That balance has now changed,” says Klotz.

Moreover, says Malcolm Dando of the Department of Peace Studies at the University of Bradford in the U.K., industry knows that governments are not going to fit it into an arms control straitjacket: “The diplomats know that biobusiness is a growth area for the 21st century and that they must protect the intellectual property of their national industries.”

17. HUMAN GENETICS

New Gene Found for Inherited Macular Degeneration

1. Elizabeth Pennisi

In a large Swedish family, geneticists have tracked down a gene that could provide clues to what causes vision to fade in old age

Gene-hunters dream of families like the one introduced to researchers almost 40 years ago by Yngve Barkman, an ophthalmologist in Falun, Sweden. Many of the family members, currently numbering more than 1000 individuals over 12 generations, either had or have an inherited disease called Best's macular dystrophy, which sometime during adulthood destroys the part of the retina responsible for the sharpest vision. Now, that heritage—a burden to family members—has proved a boon to geneticists, enabling them to track down the gene at fault.

In the July issue of Nature Genetics, Konstantin Petrukhin, a geneticist with Merck Research Laboratories in West Point, Pennsylvania, and his colleagues report that they have traced the mutation that causes Best's disease to a previously undiscovered gene, which they call bestrophin. The function of bestrophin is currently unknown. But its discovery, along with the earlier identification of the gene for Stargardt's disease, another inherited form of the condition, could help researchers track the roots of age-related macular degeneration, a common cause of vision loss in old age.

The new gene, like the Stargardt's gene, “now becomes a candidate” as the site of mutations causing the age-related eye problem, says Michael Dean, a molecular geneticist at the National Cancer Institute-Frederick Cancer Research and Development Center in Frederick, Maryland, who co-discovered the Stargardt's gene. Even if the genes are not involved directly, they might at least help researchers figure out what goes wrong in age-related macular degeneration. If so, these studies might lead to ways to prevent or treat the condition, which afflicts millions of people in the United States alone. In contrast, only about 1 person in every 30,000 has Best's disease.

When Barkman first came across this family in central Sweden, he didn't recognize that the macular degeneration that afflicted many of its members was due to Best's disease. That possibility also escaped Stefan Nordström, a medical geneticist at the University of Umeå in northern Sweden, who identified another family with macular degeneration in that part of the country. But as the two researchers became aware of each other's work, they began to wonder whether there was any connection between the families. They spent the next 2 years tracking down additional members of these families, aided by the extremely careful records churches in Sweden keep on their parishioners.

By 1976, the researchers had found the link. The family Nordström had identified was descended from a female member of Barkman's family in central Sweden who had moved north with her husband. Born in 1760, “she's the ancestor to all these cases,” says Ola Sandgren, an ophthalmologist at the University of Umeå who now takes care of many of the current members with the disease. Barkman and Nordström had also recognized by then that their now-unified family suffered from Best's disease, which is diagnosed from the distinctive electrical potential that develops between the retina and cornea of the eye.

More than a decade later, Claes Wadelius, a clinical geneticist at the University of Uppsala, realized that the linked populations gave researchers enough surviving family members to attempt to track down the faulty gene. His team pursued the gene by linkage analysis: seeing how often the condition was inherited together with markers at known locations on the chromosomes. Wadelius and his colleagues at Umeå were beaten by a couple of weeks in the first phase of the search, however. In 1992, Edwin Stone, an ophthalmologist and molecular biologist at the University of Iowa, Iowa City, used data from a large Iowa family that also has the disease to map the gene's approximate location on chromosome 11.
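For readers unfamiliar with the method, linkage of this kind is conventionally quantified with a LOD score, a detail the article does not spell out. Here L is the likelihood of the family data and θ is the recombination fraction between the marker and the disease locus:

```latex
\mathrm{LOD}(\theta) \;=\; \log_{10}\frac{L(\theta)}{L(\theta = 1/2)}
```

A LOD score above 3—odds of 1000:1 in favor of linkage—is the traditional threshold for concluding that a marker and the disease gene travel together through a pedigree.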

Further studies by Wadelius's team of DNA from 250 affected and unaffected family members narrowed the search to a million-base-pair section of the chromosome. At that point, the Swedish geneticists joined forces with Merck to home in on the gene. Once the combined team got to within 800,000 bases of it, the researchers searched computer databases for potential genes in that target region.

Both labs then looked for mutations in those genes in family members with the disease but not in people with normal vision. Petrukhin's group found that the mutations consistently appeared in the gene now called bestrophin. The researchers have since confirmed the finding in other families with Best's disease. “We now have looked at 25 independent families and have found [bestrophin] mutations in many, if not most, of those families in this gene,” says Wadelius. “It's a wonderful piece of work,” comments Stone, whose group has also been looking for the gene.

But the only sure thing about the bestrophin protein is where it's made: in the retinal pigment epithelium, the pigmented cell layer of the retina—“the exact place where you see the pathology,” says Petrukhin. Over time, these cells accumulate debris that interferes with vision, particularly in the macula, the part of the retina that sees fine details. The big question now is how the bestrophin mutations bring about that pathology.

The next big question is whether people with age-related macular degeneration also have bestrophin mutations. Efforts to pin down a genetic link between the age-related condition and another inherited form for which the gene is known, Stargardt's disease, have been inconclusive.

But whether or not Petrukhin, Wadelius, Stone, and others succeed in finding abnormal bestrophin in elderly people with macular degeneration, study of how the protein triggers Best's disease could still shed light on the events that dim vision in old age. The cellular debris seen in Best's disease is more like what is seen in the age-related form than is the debris in Stargardt's disease, says Wadelius: “It's a most promising model for the age-related form.”

18. SPECIAL FOCUS ON CARDIOVASCULAR DISEASE

Tracking Down Mutations That Can Stop the Heart

1. Marcia Barinaga

With heart disease the leading cause of death in developed countries, Science is taking a special look at two factors that have received little publicity. One story deals with mutant genes that cause frequently fatal heart defects, while the other discusses recent results linking infections to artery-clogging plaques

People were shocked when Boston Celtics star Reggie Lewis collapsed on the basketball court in the summer of 1993, but the death also had a familiar ring. Every few years a well-known young athlete drops dead without warning during a sporting event, victim of an undetected genetic heart condition. These cases provide an all-too-graphic demonstration that heart disease doesn't have to begin in midlife with the development of clogged arteries but can arise from stealthy inherited flaws in the heart itself. “People have known for quite a while by looking at families with heart disease that genetics plays a role” in conditions ranging from congenital heart malformations to fatal disturbances in heart rhythms, says geneticist Mark Keating of the University of Utah Health Sciences Center in Salt Lake City.

But until 8 years ago, none of the genes at fault had been identified. Then, Keating says, came “an explosion of information” about mutations that directly impair the heart—an explosion that still hasn't let up. In 1990, Christine and Jon Seidman at Harvard Medical School in Boston and their colleagues found the first mutation, in myosin, a key protein in heart muscle. Since then, discoveries of heart-handicapping mutations have been pouring out of numerous labs at an ever-increasing rate, yielding more than 100 mutations in more than a dozen genes.

Some, like a new gene reported on page 108 by the Seidman team, affect the formation of the heart structure itself. Mutations in these genes cause defects such as an abnormal hole in the wall, or septum, that divides the upper two chambers of the heart. Other mutations disrupt ion channels, the protein pores that control the electrical conductance of heart muscle, altering the heart's normal rhythms. And mutations in myosin and other proteins involved in muscle contraction are at the root of most of the hereditary cardiomyopathies, conditions that cause the heart's walls either to thicken abnormally or to stretch out.


Cardiologists are already putting this new information to work to design genetic tests that will identify those at risk of dying from the heart defects, as well as treatments that will safeguard their lives, such as drugs tailored specifically to reverse a patient's genetic flaw. And the benefits may not be limited to the hereditary defects.

Although these conditions are relatively rare—the most common strikes only 0.2% of the population—many are similar to much more common heart ailments that develop later in life. For example, disturbances in heart rhythms, which can kill very quickly, often occur in people who have suffered heart attacks or who take certain medications. “What we have learned [about the genetic conditions] may be able to translate to the much bigger problem of sudden cardiac death,” says Michael Ackerman, a cardiology fellow at the Mayo Clinic in Rochester, Minnesota.

Holes in the heart

Few heart flaws are more obvious than a hole, or a set of holes, between the two upper chambers known as atria. Gene flaws aren't always to blame; some babies are born with a small defect because a hole that is normally present in the fetal heart didn't close up in time. Such defects often disappear on their own within weeks or months.

But some babies, perhaps as many as 1 in 1500, are born with a hole that is larger than it should ever have been during fetal life, or with multiple holes that make the septum look like Swiss cheese. These atrial-septal defects (ASDs), which usually have to be repaired surgically, don't just result from a timing problem, but are “truly malformations” caused by errors in early development, says Christine Seidman.

Some of these developmental errors run in families, and in two cases researchers have discovered the genetic cause. Last year, the Seidman team and a group led by David Brook of the University of Nottingham in the United Kingdom reported that they had found the mutant gene responsible for Holt-Oram syndrome, a rare hereditary condition that causes holes between the atria and sometimes the lower chambers of the heart, known as ventricles, as well as arm and hand defects. The culprit gene, TBX5, encodes a transcription factor, a protein that regulates other genes, including some of those needed to build a normal heart.

In the work reported in this issue, the Seidmans describe a similar gene that, when mutated, causes another form of ASD, one not accompanied by the arm and hand problems characteristic of Holt-Oram. This gene, too, encodes a transcription factor; it is the human counterpart of a fruit fly gene called tinman because fruit fly embryos that lack both copies of the gene, like the Tin Man in The Wizard of Oz, have no hearts at all. As with TBX5, humans missing only one copy of tinman have heart defects that include ASDs.

Identification of these genes may help solve a critical puzzle, says Seidman: “Adults with ASD appear to die at an earlier age than do normals,” even when the hole is successfully repaired. They apparently have a higher risk for sudden death from heart block, a blockage of the electrical signal that travels through the heart muscle and controls the heartbeat.

For years physicians assumed that the conduction problem at the root of the heart block results from the repair surgery, but Seidman notes that surgery to repair the far more common, nonhereditary holes does not increase the risk of heart block. “It clearly is a component of the genetic defect,” she says, adding that a better understanding of the transcription factors and the genes they regulate may reveal how the mutations lead to heart block. And that in turn may suggest less invasive ways to prevent it.

Currently, heart block is best prevented with an implantable pacemaker, provided it's detected in time. “We need to be more careful in monitoring patients [with mutations] for the development of heart blocks,” Seidman says, so that they can receive pacemakers before a heart block kills them. Now that the affected gene is known, it may become possible to screen infants in families with the mutation to ensure that no cases are missed.

The thick and the thin

Not all genetic heart conditions are evident at birth, as the ASDs are. Roughly 1 in 500 people harbor subtle genetic errors that cause cardiomyopathies, afflictions of the heart muscle that don't become apparent until adolescence or even adulthood. Hypertrophic cardiomyopathy (HCM), the leading cause of sudden death in young athletes, was blamed for the deaths of basketball stars Hank Gathers of Loyola Marymount University in 1990 and the Celtics' Lewis. The condition causes cardiac muscle cells to grow larger, thickening the heart's muscular wall. The affected heart pumps strongly but doesn't relax well for the filling phase of the heartbeat. In contrast, dilated cardiomyopathy (DCM) produces a stretched-out, thin-walled heart that fills with excess blood but can't efficiently pump it out.

Both conditions can lead to the uncoordinated fluttering of the heart called fibrillation, which pumps no blood and is fatal if not halted quickly. And it turns out that both can be caused by mutations in genes encoding proteins, such as myosin, that play a role in heart-muscle contraction.

The mutant myosin gene the Seidman team identified in 1990 causes one form of HCM. Before that work, “it was always thought that the hypertrophy was the primary defect, that there was something wrong with the structure of the heart,” says researcher Ketty Schwartz of the French biomedical research agency INSERM in Paris, whose lab also studies myopathy genes. But the Seidmans found that the mutation actually changes the gene encoding the heavier of the two protein chains that make up the myosin molecule. The discovery that a mutant contractile protein was at fault “was a big surprise,” says Schwartz. “No one thought that the contractile proteins were altered.”

The finding was no fluke, though. It was soon followed by a flurry of confirmations, in which other teams found mutations in the myosin heavy chain in families with HCM. Subsequently, mutations affecting six other proteins that work together to make the muscle contract—the two myosin light chains, tropomyosin, troponin T, myosin-binding protein C, and troponin I—turned up in HCM patients. At last count, 109 mutations in these seven genes had been shown to cause HCM, says cardiologist Robert Roberts of Baylor College of Medicine in Houston.

The discovery of these mutations and the characterization of the diseases they cause, which is going on in several laboratories, including Schwartz's and the Seidmans', could be a big help in diagnosis. For example, troponin T mutations cause a particularly virulent form of HCM, with high rates of sudden death, but little hypertrophy. That means cases may be missed until fibrillation strikes. “We have these horrific stories where kids who are unrecognized to be affected are dying,” says Christine Seidman. She thinks it may be worthwhile to screen children in afflicted families, and implant defibrillators in those with the mutant gene.

The mutations in myosin-binding protein C could also aid in diagnosis. These turn out to cause a much milder form of HCM that doesn't strike until age 50 or so and previously wasn't considered to be genetic. “We frequently see [older] people with hypertrophy,” says Roberts, and some of them don't have high blood pressure or any other risk factors for the condition. “We now suspect that probably that is a familial form.” Indeed, Christine Seidman's group has already found myosin-binding protein C mutations in such cases. The discovery should enable affected families to take measures to reduce their risk of sudden death.

Researchers have also made progress in understanding how mutations in at least two of the contractile proteins might cause the heart to hypertrophy. Muscles contract when filaments of myosin slide over filaments of actin, causing the muscle cells to shorten. In test tube studies performed in 1992, physiologist H. Lee Sweeney and his co-workers at the University of Pennsylvania, Philadelphia, found that the mutant form of myosin interferes with this sliding. “The [mutant] myosin put a drag on the normal myosin and slowed everything down,” Sweeney says. “We predicted it would greatly drop the power output of any muscle that incorporated it.”

Those predictions were confirmed when he teamed up with Neil Epstein and Lameh Fananapazir of the National Heart, Lung, and Blood Institute to look at samples of muscle with the myosin mutation. “The muscle shortened very slowly,” Sweeney says, “the power output was diminished greatly, and the force was down somewhat as well.” Mutations that cause the most severe power loss also caused the most severe disease. That suggests, he says, that the cardiac muscle cells enlarge to compensate for this reduced power.

Some of the disease-causing mutations in troponin T have just the opposite effect, according to Sweeney's team and that of Larry Tobacman, a biophysicist at the University of Iowa College of Medicine in Iowa City: They make the actin slide past myosin faster than usual. Cells with the mutation “generate reasonable power output,” says Sweeney, and that may explain why the patients' hearts show little hypertrophy. But, he says, “the fact that [the contractile machinery] is cycling so fast is a big problem, because now you are using a lot more energy.” This, he suggests, could cause a local energy shortage when the heart is under stress and might trigger arrhythmias.

As for DCM, the other major form of cardiomyopathy, its most common cause is poor blood supply to the heart muscle, which kills off many of the heart's muscle cells, causing the heart to stretch and weaken. But 25% of the cases may be genetic. Some of these occur in people with muscular dystrophy, in which the heart, along with other muscles in the body, lacks dystrophin, a protein that transmits the force of the contraction to the protein scaffolding outside the cell.

Two DCM mutations found earlier this year by Utah's Keating seem to have a similar effect. Both strike the gene for actin, which connects myosin to dystrophin and other proteins at the cell membrane. Unlike the mutations that cause HCM, says Keating, “these mutations don't cause a problem with force generation; they cause a problem with force transmission.”

That inadequacy of force transmission may put deadly stress on cardiac muscle cells when the heart is working hard, says Keating, and over time that could lead to the death of muscle cells seen in DCM. Other mutations, ones that affect energy metabolism, also cause DCM, and defective energy metabolism can itself kill cells. “It may be that what happens in dilated cardiomyopathy is that there is just a lot of cell death,” says Jon Seidman. That may explain why its genetic causes are more diverse than those of the other syndromes, he adds, because “lots of different things can kill a cell.”

Unhealthy rhythms

Whether the primary problem is a hole in the heart or a mutation in myosin, what ultimately kills people with these conditions is an arrhythmia, caused by electrical conduction glitches in the heart muscle. A third class of genetic conditions causes these conduction problems directly. The best studied of these syndromes is long QT syndrome (LQTS), which is named after the characteristic change it produces in a patient's electrocardiogram: a lengthening of the QT interval, the time in the heartbeat when the heart muscle is recovering from one contraction, before it can be triggered to contract again.

The defect is dangerous because it not only lengthens the recovery time but also makes it more variable from cell to cell. Normally, the electrical impulse that causes the heart to contract sweeps in a regular fashion through the heart muscle from top to bottom, but in LQTS it may cycle backward, forming little eddies of current that interrupt the heart's rhythm and can send the heart into fibrillation and sudden death.

LQTS can be triggered by a variety of nongenetic causes, ranging from alcoholism to some prescription drugs. Hereditary long QT affects only 1 in 10,000 people, but for those people, the risk of death can be 50% over 10 years. And because young, healthy people don't routinely get electrocardiograms, LQTS often goes undiagnosed. In one-third of the people who die of LQTS, “their death is their first and last symptom,” says Ackerman, of the Mayo Clinic.

In the mid-1990s, Keating's team identified four mutant genes that cause LQTS. All four encode ion channels, the protein gateways that allow charged ions to flow through a cell's membrane. The finding makes biological sense, because the ion flows controlled by these channels produce the action potential—the wave of electrical activity that triggers muscle contraction.

The action potential begins when sodium channels open, allowing positively charged sodium ions to rush into the cell. This reverses the charge across the membrane, making its inner surface momentarily more positive than the outside. That condition, known as depolarization, shuts the sodium channels and after a delay opens potassium channels, which allow positively charged potassium ions to surge out of the cell, returning the membrane to its resting state so that the muscle can contract again.

Potassium channels are “the key off switches for the heart” that end the action potential, says Keating. And three of the four genes his team identified, which together underlie at least 80% of the inherited cases of LQTS, encode potassium channels. The mutations reduce the number of working channels in the patients' heart cells, thereby delaying the end of the action potential and lengthening the QT interval.
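The logic of that last step can be made concrete with a toy calculation. This is a sketch under an assumed model, not from the article and not real cardiac physiology: treat repolarization as a single exponential relaxation whose time constant is inversely proportional to the fraction of potassium channels still working, so that losing channels stretches out the recovery time.

```python
import math

# Toy sketch (assumed model, not from the article): repolarization as an
# exponential relaxation from the depolarized peak (+40 mV) back toward
# rest (-90 mV). The time constant tau_full_ms is an illustrative guess.

def repolarization_time(working_fraction, tau_full_ms=100.0):
    """Time (ms) to relax to within 1 mV of rest, given the fraction
    of potassium channels that still function."""
    tau = tau_full_ms / working_fraction           # fewer channels -> slower decay
    return tau * math.log((40.0 - (-90.0)) / 1.0)  # solve V(t) - rest = 1 mV

normal = repolarization_time(1.0)   # all channels working
mutant = repolarization_time(0.5)   # half the channels lost to a mutation
print(round(mutant / normal, 1))    # prints 2.0: recovery takes twice as long
```

Real repolarization involves several distinct potassium currents and is not a single exponential, but the sketch captures the qualitative point: fewer functional off switches means a longer QT interval.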

Sodium channel mutations can also cause LQTS. Some mutations result in channels that stay open even after the cell membrane is depolarized, allowing sodium to go on leaking into the cell, where it prolongs the action potential. Another mutation has the opposite effect, decreasing the flow of sodium ions into heart cells and shortening the action potential. That condition, known as Brugada syndrome, also leaves the heart more susceptible to the current eddies that cause fibrillation. It is a major cause of death of young men in Asia—in some countries second only to auto accidents.

The revelation that mutant ion channel proteins underlie hereditary arrhythmias has already changed how physicians treat these conditions. “Understanding precisely the mechanism allows us to target the therapy very specifically,” says Johns Hopkins University cardiologist Gordon Tomaselli. For example, in LQTS patients with the sodium channel mutation, some physicians are using a drug called mexiletine, which blocks the leaky channels. “Lo and behold,” says molecular cardiologist Jeffrey Towbin of Baylor, “if you look at [the] electrocardiogram before and after, the before shows severe long QT syndrome and the after shows near-normalcy.” Likewise, cardiologists have been prescribing potassium supplements for people who have potassium channel mutations to get more ions through the existing channels, although Tomaselli notes that researchers have yet to show that this improves the prognosis.

Some commonly prescribed antibiotics, antihistamines, and antifungal agents increase the chance of heart arrhythmias, and the discovery of the channel defects has made clear why. Most of these drugs block HERG, a potassium channel that is mutated in about 30% of LQTS cases. Researchers studying HERG have learned, says Ackerman, that this channel normally acts as a fail-safe: It quickly opens to offset inappropriate electrical signals that could be the trigger for fibrillation. The fact that the drugs as well as mutations inhibit HERG “puts the inherited and acquired forms [of LQTS] … in a common pathway,” says Ackerman.

Heart researchers hope their understanding of all these mutations will reveal more of the common physiological pathways that link genetic causes of cardiac death to their nongenetic counterparts. Then they will be in a much better position to keep hearts—whether flawed by genes or damaged in later life—from suddenly stopping.

19. SPECIAL FOCUS ON CARDIOVASCULAR DISEASE

Infections: A Cause of Artery-Clogging Plaques?

1. Trisha Gura
1. Trisha Gura is a free-lance writer in Cleveland.

Recent evidence suggests that common bacteria and viruses contribute to the development of atherosclerosis, perhaps by triggering inflammation

Some cardiovascular disease experts are finding inspiration in ulcers. Until a few years ago, most experts attributed stomach ulcers to factors much like those thought to cause the fatty arterial deposits, or plaques, that trigger heart attacks and many strokes: diet, lifestyle, and an individual's own genetic susceptibility. Then researchers discovered that many, if not most, peptic ulcers are caused by a common bacterium, Helicobacter pylori—a finding that opened the way to treating ulcers with antibiotics. Now, cardiovascular researchers are being tantalized by hints that the bacteria and viruses that cause such common ailments as pneumonia, gum disease, and, yes, ulcers could be at least contributing factors in plaque formation.

So far, the evidence that infections play such a role is largely circumstantial. Researchers have found signs of infection, such as antibodies to certain pathogens, more often in heart disease patients than in healthy individuals. Investigators have also found microbial DNA, RNA, and proteins in the artery-clogging lesions. However, no one has yet managed to extract the organisms from plaques and culture them, leaving open the question of how the infectious agents might contribute to formation of the plaques, if they contribute at all. “The organisms could be innocent bystanders,” says Paul Ridker, a cardiologist at Brigham and Women's Hospital and Harvard Medical School in Boston. “These data are helpful but hardly definitive.”

Still, everyone agrees that the idea that infections might promote atherosclerosis, perhaps by triggering inflammation of the vessel walls, should not be dismissed. “The evidence that bacteria are found in plaques needs to be taken seriously,” says Peter Libby, chief of the cardiovascular division at Brigham and Women's and Harvard Medical School, who helped put together a special panel of experts to sift through the current data and look for answers. The panel, commissioned by the National Heart, Lung, and Blood Institute in Bethesda, Maryland, concluded that the current data were “intriguing” and called for further studies of the infection-atherosclerosis link.

If bacteria do turn out to trigger blood vessel disease, then relatively inexpensive antibiotic regimens might be added to the current cholesterol-lowering, blood pressure-reducing heart disease prevention repertoire. It's still far too early to recommend that heart patients generally be put on antibiotics, with their potential for fostering resistant strains of bacteria. But at least half a dozen studies are already under way to test whether these drugs can prevent heart attacks or other coronary events in patients with atherosclerosis and other heart conditions. “At some point, you have to study the organisms when and where the disease is actually taking place,” says Michael Dunne, senior director of clinical research at Pfizer in Groton, Connecticut, a pharmaceutical giant that is conducting one of the antibiotic trials.

The current interest in a possible link between infection and cardiovascular disease had its roots in several papers published in the late 1980s. In one, for example, Pekka Saikku and his colleagues at the University of Helsinki in Finland found that 27 out of 40 heart attack patients and 15 out of 30 men with heart disease carried antibodies related to a bacterium called Chlamydia pneumoniae, which is known primarily as a cause of respiratory infections. Only seven of the 41 control patients had such antibodies, a statistically significant difference even though the study was small. In a similar vein, Joseph Melnick's group at Baylor College of Medicine in Houston, Texas, and others found that 70% of patients undergoing surgery for atherosclerosis carry antibodies to cytomegalovirus (CMV), a common and usually harmless herpesvirus, while only 43% of normal controls do.
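The claim of significance for those antibody counts can be checked from the figures quoted above. As a quick sketch, this applies the standard chi-square formula for a 2x2 contingency table; it is not the original authors' analysis, just an illustration of why the difference stands out:

```python
# Significance check on the counts quoted above: 27 of 40 heart attack
# patients vs. 7 of 41 controls carried the antibodies. Standard
# chi-square statistic for a 2x2 table (not the authors' own analysis).

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Rows: patients (with/without antibodies), controls (with/without).
chi2 = chi_square_2x2(27, 13, 7, 34)
print(round(chi2, 1))  # about 21.1, far above 3.84, the 5% cutoff for 1 df
```

A statistic near 21 on 1 degree of freedom corresponds to a p-value well below 0.001, which is why a study of only 81 men could still register as significant.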

At the time, though, many physicians argued that even if the patients did have more infections than healthy controls did, that might be a simple coincidence. They noted, for example, that people who have such infections are more likely to smoke, be older, have poorer access to health care, and live in poverty than those who are healthy. “Those are the same kind of people who have heart attacks,” says Ridker. “I think you have to be careful when interpreting the data.” Dunne agrees: “You could argue that these [studies] were not the strongest link.” But he adds, “At least they were something that put a bug and heart disease on the same radar map.”

Heart experts began taking the possibility of such a link more seriously in the early 1990s, when the pathogens started turning up in the plaques themselves. Techniques including the polymerase chain reaction, which amplifies trace nucleic acid sequences, and immunohistochemistry, which uses specific fluorescent probes to light up telltale microbial proteins, enabled researchers to track down the organisms. “I looked at those data and thought, ‘Wow, here we might have the smoking gun: bacteria in a plaque,’” recalls Brent Muhlestein, a cardiologist at the LDS Hospital and the University of Utah in Salt Lake City. “I had always gone on the assumption that the arterial plaques were sterile.”

Indeed, research by Muhlestein and his colleagues later turned up Chlamydia proteins in 79% of plaque specimens taken from the coronary arteries of 90 heart disease patients. The protein could be detected in fewer than 4% of heart artery walls of normal individuals. Animal studies then provided more direct evidence that the bacterium might contribute to plaque formation. The Muhlestein group and others showed that infecting rabbits with Chlamydia measurably thickens the arterial walls of the animals, especially those given high-fat diets or those genetically predisposed to high blood cholesterol.

What's more, Muhlestein's team also gave one subset of infected rabbits doses of azithromycin, an antibiotic used to treat Chlamydia infections, and found that the arteries of the treated animals looked more like those of uninfected rabbits. “We concluded that infection with Chlamydia actually accelerates the development of atherosclerosis, and treatment with azithromycin prevents it, in the rabbit model,” Muhlestein says.

Since Chlamydia and CMV were implicated as possible contributors to heart disease, other microbes have joined them as suspects. Several teams have evidence implicating the bacteria that cause gum diseases such as gingivitis as a factor in atherosclerosis. In a study reported 2 years ago, for example, James Beck's group at the University of North Carolina, Chapel Hill, looked at dental data collected on 1147 men between 1968 and 1971 and found that those with dental infections tended to have a higher risk of heart disease and strokes. And in the May issue of the journal Circulation, Vincenzo Pasceri's group at the Università Cattolica del Sacro Cuore in Rome, Italy, described the results of a small-scale epidemiological study suggesting that H. pylori might be involved in heart disease.

The results of these studies are far from the final word, especially because they are plagued by the same confounding factors as the Chlamydia and CMV studies: Those with bad teeth or ulcers tend to be older, to smoke, and to live in poverty. In addition, no one knows how infections might foster plaque development, and the puzzle is deepened by the range of pathogens under suspicion. Researchers are searching for a common denominator that might link them. One popular hypothesis is that inflammation triggered by the infectious organisms might be a key, as at least a decade of research has already implicated inflammation as contributing to plaque formation.

Evidence that infections might be working that way comes from Pasceri's group in Rome. Their Helicobacter study showed that only the more virulent of two inflammatory Helicobacter strains seemed to correlate with increased incidence of heart disease. Although 48% of heart disease patients carried the more aggressive organism, only 17% of controls did. By contrast, researchers found similar blood levels of antibodies to the more docile Helicobacter strain in both groups of patients.

But it's also unclear how microbes might work in arteries that are some distance from their primary infection sites. Brigham and Women's Libby suggests that there may be what he calls an “echo effect,” in which infections cause a response not only at the site of infection, say, the intestine, but also at secondary sites such as the artery wall. He suggests, for example, that the organisms release toxins into the bloodstream or display molecules on their surfaces that exacerbate inflammatory reactions at the blood vessel linings. Infection echoes could add to other risk factors such as smoking to foster plaque growth.

But even Libby cautions that his echo theory is indeed that: a theory. More definitive answers about whether there is any link between microbes and cardiovascular disease are needed. Those could come from clinical trials designed to test whether antibiotics can prevent the disease.

The results reported publicly so far have been inconclusive, however. For example, this March at the American College of Cardiology meeting in Atlanta, Muhlestein described preliminary results from a trial of the antibiotic azithromycin, which he and his colleagues are conducting in 300 patients with previous heart problems. The antibiotic did not reduce heart attacks or other clinical events such as severe chest pain after 6 months, he said. But it did reduce the blood levels of several inflammatory molecules, and Muhlestein asserts that 6 months might be too soon for an effect to show up in the relatively small number of patients. “We'll be following up the patients for 2 more years to see if there is a reduction in heart attacks, strokes, and other clinical events,” Muhlestein says.

It should be easier to see results in the ambitious Wizard trial, which is sponsored by Pfizer, the manufacturer of azithromycin. Having just completed patient enrollment, Wizard has gathered 3500 heart disease patients from medical centers in the United States, Canada, and Europe who all have documented atherosclerosis and have tested positive for Chlamydia antibodies. The plan is to give half the participants the antibiotic and the other half a placebo. The patients will then be followed for at least 3 years to observe whether the drug decreases the incidence of heart attack or other coronary events.

“This is kind of a risky trial,” cautions Pfizer's Dunne. He notes, for example, that no one knows the best time to give the drug—it might be too late after heart disease has already developed—nor does anyone know what's the best drug dosage to use.

But given the rewards that would come if a trial turns up positive, these uncertainties have not daunted other drug companies that make antibiotics for treating Chlamydia infections. Abbott Laboratories in North Chicago, Illinois, is currently testing its drug, clarithromycin, in heart patients in Europe; Hoechst Marion Roussel in Kansas City, Missouri, has completed a 270-patient pilot study with a compound called roxithromycin and is continuing with further studies; and the National Institutes of Health is funding a study of azithromycin, to be conducted on 3500 patients through the University of Washington, Seattle. “If any one of these trials turns out to be positive, I am sure there will be a whole family of studies that will start,” Dunne predicts. “We've got to start somewhere, and this is a question well worth answering.”