News this Week

Science  14 Aug 1998:
Vol. 281, Issue 5379, pp. 890



    Physicians Wary of Scheme to Pool Icelanders' Genetic Data

    1. Martin Enserink*
    1. Martin Enserink is a science writer in Amsterdam.

    Discord over a plan to put the health records of every citizen of Iceland into a huge database, and then grant a private company the right to analyze and market the data, has reached a new pitch. Late last month, the Icelandic Health Ministry unveiled a new bill that would make such a deal possible. Although the measure seems to have widespread public support, it has come under sharp attack from some of the people whose backing will be critical to make the scheme work: physicians and scientists. “Patients come and talk to me, and at night I'm supposed to send the information to a third party that can sell it on the world market,” says Tomas Zoega, head of the Psychiatry Department of the National Hospital and chair of the Ethics Committee of the Icelandic Medical Association. “That is extremely troublesome.”

    At the center of this simmering controversy is deCODE Genetics, a company founded in 1997 by former Harvard University geneticist Kari Stefansson. DeCODE intends to mine one of Iceland's most precious resources: the genetic composition of its people. Thanks to its isolated position and several bottlenecks that wiped out large parts of the population, the island has a remarkably homogeneous gene pool, making it relatively easy to track down disease-causing mutations that might form the basis for new tests and therapies. In a deal that could be worth more than $200 million, Swiss pharmaceutical company Hoffmann-La Roche has already bought the rights to develop and market drugs resulting from genes deCODE hopes to find for a dozen disorders (Science, 24 October 1997, p. 566, and 13 February 1998, p. 991).

    Under the plan, deCODE would provide terminals to connect every health care station and hospital in Iceland to a central computer, into which doctors would feed data on their patients. The database would also contain the records of deceased people, which Iceland has kept meticulously for most of the century. Combined with the country's detailed genealogical records and with blood or tissue samples voluntarily donated by patients, such a database would be a powerful tool to hunt for disease genes.

    The Icelandic government has embraced the plan, which could provide the island's small economy with hundreds of new high-level jobs and millions of dollars. On 31 March, the Health Ministry presented Althingi, the unicameral Icelandic parliament, with a “Health Database Bill” that would provide the legal basis for deCODE to move ahead. But the ministry was forced to withdraw the bill a month later, after strong protests from geneticists and the Icelandic Medical Association and a unanimous appeal for postponement by the staff of the University of Iceland's Faculty of Medicine. Critics complained that storing personal information without prior consent would be unethical and could result in abuse, and they attacked the idea of giving one company a monopoly on what they see as the collective property of a whole nation. Some academics also feared the scheme would hamper their own studies.

    The Health Ministry rewrote the bill and unveiled a new version on 31 July that addresses some of the concerns. For instance, individuals can now ask for their data to be included in such a way that the information could never be traced back to them, although they will have to take the initiative themselves. And an independent committee will oversee the whole project. But Zoega says the bill is still unacceptable to many physicians.

    For example, the scheme would protect Icelandic citizens' privacy by stripping their identity from their records and replacing it with a code before the records are entered into the database. But, says geneticist Jorunn Eyfjord of the Icelandic Cancer Society's research lab, in a country of just over 270,000 people, it would be “naïve” to think this would suffice. A few items of data—such as a person's profession, family relations, and the 5-year interval in which he or she was born—would be enough to give away that person's identity, she says. Others have expressed concern that the data may fall into the wrong hands. The two main trade unions, for instance, worry that employers might try to gather data about their workers.
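Eyfjord's point can be sketched as a back-of-envelope calculation. The attribute cardinalities below are illustrative assumptions, not figures from the article; they only show how quickly a few quasi-identifiers shrink the average "anonymity set" in a population of 270,000.

```python
# Back-of-envelope sketch: how a few quasi-identifiers shrink the
# average group size in a 270,000-person population. All attribute
# cardinalities are assumptions chosen for illustration only.
population = 270_000

attributes = {
    "5-year birth interval": 18,   # assumed: ~90 years of ages / 5
    "profession": 200,             # assumed rough count
    "extended-family group": 100,  # assumed rough count
}

group_size = float(population)
for name, cardinality in attributes.items():
    # Assume attributes are roughly independent, so each one divides
    # the average group size by its number of possible values.
    group_size /= cardinality
    print(f"after adding {name}: ~{group_size:g} people per group")
```

On these assumed numbers the average group falls below one person, i.e., most combinations of the three attributes are unique.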

    “The nature of the proposition sounds very Orwellian,” admits deCODE CEO Stefansson. “So it's very easy to have sort of a visceral reaction to it.” Stefansson also concedes that no database could ever be 100% secure. “But the fact of the matter is that every single piece of information that would be anonymous in this database is now available under name in the hospitals and health care stations.” The net outcome of the law, he says, would be to diminish, not increase, access to personal information.

    To alleviate fears about academic freedom, the new bill grants Icelandic scientists access to the database for noncommercial research. Applications will be handled by a three-person committee, one member to be nominated by the company, one by the health ministry, and one by the university. But the bill gives deCODE exclusive rights to market the data for a period of 12 years. That's an absolute necessity to make the project viable, says Stefansson: “It would be an extremely difficult business proposition without the exclusivity.”

    But others maintain that it's wrong to give a single private company such a monopoly. Just last week, the issue was sharpened when Reykjavik engineer and businessman Tryggvi Petursson announced that, together with two Icelandic biomedical scientists working in the United States, he will set up a company called UVS that will challenge deCODE's hegemony. UVS wants to cooperate with the Icelandic Cancer Society in tracking down cancer genes. Although the company says it could work without the health care system data if it had to, its entry into the field is “a godsend,” says Sigurdur Gudmundsson, chair of the National Bioethics Committee, “because it will make the discussion about monopoly much more focused and real.”

    So far, the bill's opponents have found little support among the general public. In a Gallup poll commissioned by deCODE, 82% of the respondents said they were in favor of the database, and 51% believed that deCODE would be the best party to develop it. “We are bringing tangible benefits to the community, and they have embraced us,” says Stefansson. Critics say, however, that the public has been lured by a slick public relations campaign and unrealistic expectations. DeCODE promises that Icelanders will get any drugs or diagnostics based on their genes for free during the patent period—a promise Eyfjord calls “a joke. … How many drugs do you think are going to be developed, and how many people will really benefit from that?”

    But members of parliament, too, seem more impressed with the benefits of the plan than with its possible downsides. “Judging from their initial comments,” says Gudmundsson, “they love it. I'm almost certain that [the new bill] will fly through parliament.” But deCODE will still need to win over the medical community. Says Zoega: “If we do not cooperate, it simply will not work.”


    Hopes Rise After SOHO Calls Home

    1. James Glanz

    Earlier this week, ground controllers reestablished full radio contact with the Solar and Heliospheric Observatory (SOHO). Many had given SOHO up for lost after a series of command errors caused it to spin out of control and fall silent early on 25 June, but the renewed dialogue with the satellite has raised hopes of bringing the $1 billion spacecraft back to life.

    The first useful transmissions from SOHO carried readings from dozens of onboard temperature and voltage sensors. The message they conveyed is dryly summarized by Francis C. Vandenbussche, the SOHO spacecraft manager from the European Space Agency (ESA) who heads the recovery team: “It's a little bit chilly.” That chill, suggesting frozen fuel tanks, is to be expected: The spacecraft had been tumbling without power for 6 weeks before controllers achieved sporadic contact on 3 August. Full contact came just under a week later, after controllers managed to recharge an onboard battery. Now an expert team of engineers will begin evaluating data from many kinds of sensors on SOHO and come up with a strategy to bring the spacecraft back from the brink. “This was a big, big step” toward recovery, says Bernhard Fleck, the SOHO project scientist for ESA, which operates the craft jointly with NASA. “You can imagine the relief.”

    Cool signal.

    SOHO radioed temperature data indicating its fuel tanks are frozen.


    The debacle on 25 June occurred when SOHO went into a spin that left its solar panels unable to collect sunlight and generate power. Those first sporadic responses on 3 August meant that the panels were in a position to collect some sunlight again. But controllers received only a so-called carrier signal from the spacecraft—“analogous to lifting your phone and getting a dial tone,” says Joseph Gurman, the SOHO project scientist for NASA. Later, 10- to 15-second bursts of signals arrived carrying slight modulations, which encoded information that controllers were initially unable to read.

    The reestablishment of partial contact was soon accompanied by a second bit of good news: A separate effort, in which radio waves were bounced off SOHO using the 305-meter dish at Arecibo, Puerto Rico, determined a precise spin rate for the craft of one rotation in about 53 seconds. “That's good from a structural point of view,” says Gurman. “A bad number might have been 10 times that,” or a rotation every 5 seconds. The information also told engineers why the carrier signal turned on and off: Solar panels face the sun only during half a rotation, and then the spacecraft lost power and went silent again. On 8 August, controllers told SOHO to use the power from its panels to charge up one of its batteries. The operation succeeded, permitting minutes-long establishment of full communication with SOHO by the following day, says Gurman.

    As the expert team, drawn from engineers at ESA and Matra Marconi Space—the company that built SOHO—plots the next moves in the recovery mission, it will face several key decisions. A critical judgment will involve the strategy for gradually returning power to SOHO's many heaters, which normally keep the craft at about room temperature. That would allow controllers to thaw the tank of hydrazine that fuels SOHO's thrusters—which would have to be fired to stop the spin. But the operation could be tricky; it will require a close knowledge of structural details combined with thermal and other information so that the warm-up does not damage frozen components. That effort “is still missing some pieces of the puzzle,” says Vandenbussche. Over the next couple of weeks, he hopes the signals streaming from SOHO will provide those pieces.


    Ultraenergetic Particles Slip Past Cosmic Cutoff

    1. Dennis Normile

    Tokyo—Every so often, a cosmic ray slams into the atmosphere packing 100 million times the energies reached in the world's largest particle accelerators—the energy of a brick falling from a table packed into a single subatomic particle. Until recently, all the cosmic ray facilities worldwide had detected a total of just three of these fantastically energetic particles, so small a number that they might have been the few outliers at the very top of the cosmic ray energy spectrum. Now, however, the Japan-based Akeno Giant Air Shower Array (AGASA) collaboration has recorded a further handful of ultraenergetic events—enough to conclude that the upper limit to cosmic ray energies is not yet in sight. And researchers expect more such sightings from new detectors scheduled to come on line in the next few years.
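The brick comparison checks out on a rough calculation. The brick mass and table height below are assumed for illustration, not taken from the article:

```python
# Rough check of the brick comparison (assumed brick mass and table
# height; illustrative numbers, not from the article).
EV_TO_JOULES = 1.602e-19            # joules per electron volt

cosmic_ray_j = 1e20 * EV_TO_JOULES  # one ~10^20 eV event: about 16 J
brick_j = 2.0 * 9.81 * 0.8          # m*g*h: 2 kg brick, 0.8 m table: ~15.7 J

# Accelerators of the era reached ~10^12 eV (Tevatron scale), so the
# ratio is indeed on the order of 100 million.
ratio = 1e20 / 2e12
print(f"cosmic ray: {cosmic_ray_j:.1f} J, brick: {brick_j:.1f} J, ratio: {ratio:.0e}")
```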

    “It's pretty darn exciting,” says James Cronin, a physicist at the University of Chicago, because these particles are somehow evading a cosmic speed trap. A cosmic ray traveling through space with an energy above about 5 × 10^19 electron volts (eV) would tangle with the photons of the microwave background—the low-energy radiation that pervades the universe—and gradually lose energy. This process should set an upper limit to the energies of cosmic rays originating in the distant universe, called the Greisen-Zatsepin-Kuz'min (GZK) limit after the scientists who described it. But the AGASA findings, reported in the 10 August Physical Review Letters, show that “there is really no evidence for the GZK cutoff,” says Cronin. The finding suggests that the particles originate from some unidentified sources close to our galaxy and sets a puzzle for a new generation of cosmic ray detectors to probe.

    AGASA, the current state of the art, is made up of 111 detectors scattered over 100 square kilometers around the mountain town of Akeno in Yamanashi Prefecture, about 120 kilometers west of Tokyo. When an ultrahigh-energy cosmic ray particle—a proton or an atomic nucleus—slams into the atmosphere, it sets off a cascading chain reaction of particle collisions that ends in a shower of electrons or positrons falling on the detectors. Computer analysis can derive the original particle's approximate energy and direction of travel from this jumble of data.

    AGASA is currently the world's largest facility for detecting the most energetic cosmic rays. Even so, particles above 10^20 eV are so rare that it has detected just six of them since 1990. But Masahiro Takeda, an astrophysicist at the University of Tokyo's Institute for Cosmic Ray Research, which heads the collaboration, says that's enough to suggest that the GZK limit can be topped by statistically significant numbers of events.

    If the microwave background does set a limit to the energies of cosmic rays coming from great distances, then these highest energy cosmic rays must be coming from within about 50 megaparsecs, or 163 million light-years, of Earth—somewhere among the nearby galaxies. Just where is the question, says Raymond Protheroe, an astrophysicist at the University of Adelaide in Australia. “We have this problem of trying to find what objects could possibly accelerate particles to such energies,” he says.
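The article's distance conversion can be verified with the standard value of the parsec:

```python
# Check of the 50-megaparsec figure against the standard conversion
# factor of about 3.2616 light-years per parsec.
LY_PER_PC = 3.2616

mpc = 50
million_ly = mpc * LY_PER_PC  # megaparsecs -> millions of light-years
print(f"{mpc} Mpc ≈ {million_ly:.0f} million light-years")  # matches the article's 163
```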

    Tracking down the mystery sources will require larger arrays that can quickly gather statistically meaningful numbers of these rare events. One, the High-Resolution Fly's Eye project, based in Utah, is partly operational and when completed in late 1999 will be capable of picking up five or six events greater than 10^20 eV per year. An even more ambitious project that has been on the drawing boards since 1992, the Pierre Auger Project, passed a significant milestone in late July when the U.S. Department of Energy and the National Science Foundation approved $7.5 million in funding for design and engineering work on the first phase of the project: building an array of 1600 detectors on a 3000-square-kilometer site in Argentina. The $50 million array will be 30 to 40 times as sensitive as AGASA.

    Construction could start as early as this October, says Cronin, who is the spokesperson for the project, which involves 40 institutions in 19 countries. If the effort continues to secure funding, the array could be completed in 4 years. Later, the collaboration hopes to build a second array of a similar size in Utah. Cronin says the recent AGASA results “show that what we started 6 years ago was really on the right track.”


    IBM Puts Fast Chips on a New Footing

    1. Robert F. Service

    IBM announced last week that it will soon begin producing microprocessor chips embodying a technology that it says could boost operating speeds by as much as 35%. The new chips are also expected to use about a third less electricity than today's microprocessors, extending battery life for portable devices such as cellular phones and handheld computers. Competitors say IBM is betting on the wrong horse. But if the gamble pays off, the new chips could help extend Moore's law, the famous trend of performance improvements that has driven advances in microelectronics for decades but is expected to begin flattening out soon.

    The key change comes in the base on which transistors and other chip-based circuitry sit. For transistors to switch on, they must electrically “charge up” the silicon beneath them. In conventional microprocessors, built atop a slab of crystalline silicon, that's a time-consuming and energy-draining process. But in the new chips, IBM engineers embed an insulating layer just below the surface, leaving an ultrathin silicon film on top. This thin silicon layer allows silicon-on-insulator (SOI) chips to charge up much more quickly and efficiently.

    “I believe IBM is right on the money on this one,” says Dimitri Antoniadis, an electrical engineer at the Massachusetts Institute of Technology, who adds that the high-end workstations that are widely used by scientists will likely be early beneficiaries of the new chips. But other industry observers say SOI isn't all it's cracked up to be. Mark Bohr, an electrical engineer with Intel in Hillsboro, Oregon, says his company has looked closely at the new technology and decided that it's not ready for prime time.

    SOI is hardly new. The technology has been around for 30 years and is already used in chips for niche applications, such as those aboard satellites, as well as some types of computer memory. But persistent problems have stood in the way of broader use. For one, the top silicon film often ends up riddled with performance-lowering defects, because creating the underlying insulating layer requires injecting ions into the silicon at great speeds, disrupting the top surface's perfect crystalline order. The insulating layer can also cause transistors on the chip to misfire: Because it electrically isolates the top silicon film, stray charge can build up there, allowing a transistor to conduct even when it is switched off.

    Bijan Davari, IBM's head of advanced logic technology development, says it took company researchers 15 years to get around these problems. To make the new wafers, IBM researchers use a machine called an ion implanter to inject oxygen ions just beneath the silicon surface, forming an insulating layer of silicon dioxide. The IBM team then uses a proprietary recipe for processing the wafers—including baking them at about 1400° Celsius for nearly 12 hours—to anneal the damage this causes to the silicon surface, creating a defect-free film atop the insulating layer. Finally, they alter the doping of the semiconductors to minimize misfiring of the transistors, says Davari.

    The changes result in SOI chips that achieve 35% gains in speed and efficiency without the drawbacks of earlier SOI devices. Davari argues that this will give IBM about a 2-year lead in the race to pack more computing power into less real estate, as it allows the same-sized transistors to operate at faster speeds. But not everyone agrees. “I think that's optimistic,” says Simon Wong, a Stanford University electrical engineer. “The bulk [silicon] technology is very good” and improving rapidly, says Wong. Davari counters that IBM leads in improving bulk silicon chips as well, and their work convinced them that improvements in these chips would soon begin leveling off.

    Bohr argues, however, that because the extra processing steps mean higher costs, “a lot of companies have decided that [SOI] will never be the way to go.” Perhaps, says Antoniadis. “But it's possible that when they see gigahertz processors coming along on SOI, they will have to pay attention.”


    Asteroid Searchers Streak Ahead

    1. Robert Irion*
    1. Robert Irion is a science writer in Santa Cruz, California.

    With comets and asteroids menacing Earth in movie theaters around the world this summer, the once-arcane field of tracking potential threats from near-Earth objects, or NEOs, is suddenly in the limelight. As it happens, it's also making rapid progress. NASA's budget for hunting space rocks doubled to $3 million this fiscal year, and the U.S. Air Force recently unveiled a search system that is bagging NEOs at an unprecedented clip. This week, Air Force and NASA scientists will convene to consider boosting the Air Force's role still further, and other search efforts are also steaming ahead. “The scientists doing the work are really making strides,” says Tom Morgan, discipline scientist for planetary astronomy at NASA headquarters in Washington, D.C. “One can afford to be reasonably optimistic about this whole process.”

    Space spy.

    Astronomers track potentially dangerous asteroids with this 1-meter telescope at the White Sands Missile Range in New Mexico.


    But even with their new search power, astronomers are settling in for a long hunt. They have so far spotted less than 10% of the estimated 2000 large NEOs, notes Brian Marsden, director of the Minor Planet Center at the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts. Even after researchers locate the rest, they'll need at least 20 years to track their motions to determine whether any endanger the planet, says Marsden. And for now, no one knows how much technological might the Air Force will ante up—or how long asteroids will remain a high priority at NASA.

    Four years ago, as Comet Shoemaker-Levy 9 blasted into Jupiter, the U.S. Congress asked NASA to devise a 10-year plan to catalog 90% of all asteroids a kilometer across or larger that have orbits approaching Earth's. A team led by the late planetary scientist Eugene Shoemaker said that such a goal would require a $4 million per year search program—quadruple NASA's asteroid survey budget at the time—as well as help from Air Force space-surveillance experts. Now both elements are falling into place.

    In particular, Air Force-sponsored research at the Massachusetts Institute of Technology's Lincoln Laboratory in Lexington has yielded an ultrasensitive and fast charge-coupled device (CCD), an electronic imaging chip similar to those in video cameras. The CCD was built for a suite of 1-meter Air Force telescopes that track satellites and artificial debris orbiting Earth. But after NASA started pushing its asteroid goal, scientists realized the system was ideal for finding NEOs, too. “No commercially available CCD comes close to having these capabilities,” says space-surveillance physicist Grant Stokes, manager of the Lincoln Near-Earth Asteroid Research (LINEAR) program.

    The chip records more light than any other CCD and reads out 5 million pixels of data in hundredths of a second. When combined with fast Air Force telescopes, says Stokes, LINEAR can scour huge swaths of the heavens for faint, moving blips: “We're just about capable of covering the entire visible sky from a single site during 1 month.”

    LINEAR came online in late 1997 at Lincoln Lab's Experimental Test Site in Socorro, New Mexico, and hunts NEOs 10 nights per month. To date, it has unveiled 64 of them, more than all other search programs combined during the same period. A NASA effort at the Jet Propulsion Lab (JPL) in Pasadena, California, called NEAT (Near-Earth Asteroid Tracking), finds almost as many kilometer-sized NEOs, but LINEAR detects many more of the smaller objects, any of which could still wreak havoc if it struck Earth. LINEAR also has spied 10 comets and seven objects called “unusual” by Marsden's clearinghouse. “Whichever category you look at, the rate [from all search programs] has increased five- to 10-fold,” says Gareth Williams, the Minor Planet Center's associate director.

    So astronomers would like more of a good thing, and they're hoping the Air Force will consider asteroid hunting part of its “planetary defense” mission—and pay for it. The Air Force seems willing to contribute. It may build at least one more LINEAR system dedicated to NEO research, says senior scientist John Darrah of the Air Force Space Command in Colorado Springs.

    Meanwhile, other veteran asteroid programs are forging ahead. Spacewatch, at the University of Arizona, will open an additional, larger telescope in 2 years. NASA's NEAT search has new computer equipment and 6 nights per month on an Air Force telescope in Hawaii. Researchers expect that the Air Force will agree to triple NEAT's telescope access at the next meeting between the Air Force and NASA, set for 19 August at JPL, says NEAT principal investigator Eleanor Helin. NASA also anticipates more contributions from NEO observers in the Czech Republic, Italy, Japan, and France.

    This meteoric increase in detection rates will force scientists to collaborate more closely, says Donald Yeomans, director of NASA's new NEO Program Office at JPL: “We will want an efficient overall system rather than a group of individuals all vying for the same prize.” But principal investigator Robert McMillan of Spacewatch notes that NASA's competitive grants program has thus far not fostered cooperation among the groups. He also wonders whether today's asteroid fad will fall to Earth. “Ten years crosses three Administrations,” sighs McMillan. “I'm somewhat skeptical that NASA's enthusiasm will last.”


    Tobacco Consultants Find Letters Lucrative

    1. Jocelyn Kaiser

    Scientists who consult for industry get a lot of grief for being “hired guns.” Now, some critics of the practice are squeezing off a few rounds of their own in response to revelations that surfaced last week. The St. Paul Pioneer Press reported that several scientists received payments from the Tobacco Institute—the industry's public relations arm—in 1992 and 1993 for writing letters to journal and newspaper editors criticizing studies on the health effects of secondhand tobacco smoke. The information, mined from a mountain of documents assembled in Minnesota's lawsuit against the industry, indicates that nine individuals received as much as $10,000 for a letter and that the letters were often reviewed by lawyers before being sent to publications such as the Journal of the National Cancer Institute (JNCI) and The Lancet.

    To some industry critics, this is yet more evidence that tobacco companies tried to twist science to further their aims. “It's an even bigger perversion of the scientific process than I thought it was,” fumes cardiologist Stanton Glantz of the University of California, San Francisco. He and others argue that such letters, which undergo less stringent review than journal articles, may have helped persuade a district judge last month to throw out a 1993 Environmental Protection Agency (EPA) report finding that environmental tobacco smoke (ETS) causes about 3000 cases of lung cancer per year. “They're basically building up a record they could use for political and legal purposes,” says Glantz, whose own work has shown how the tobacco industry has funded research to try to debunk the scientific evidence against tobacco (Science, 26 April 1996, p. 494).

    But authors contacted by Science defend their work, arguing that the letters, based on time-consuming analyses, constitute valid scientific communications. And editors have few qualms about publishing them, noting that in most cases the authors disclosed their industry ties. “This is a tempest in an ink pot,” says George Lundberg, editor of the Journal of the American Medical Association (JAMA), which published two letters. Other observers say what's really eye-catching is the fees the authors fetched. “Anybody who thinks the amount doesn't matter has been holed up in the ivory tower for a very long time,” says bioethicist Arthur Caplan of the University of Pennsylvania. “They would have been hard pressed to take the time to write if there hadn't been a $10,000 prize out there,” he asserts.

    It's clear that EPA was the target of the letter campaign. Apparently in response to the EPA's secondhand smoke report, the Tobacco Institute set up the “ETS Consultant Program Project,” according to documents from a Web archive* created by tobacco companies last February as a result of the Minnesota lawsuit. Documents marked “attorney-client communication” describe letters written by the consultants, including one submitted to Science that was never published. A few sentences in one document describe what the Tobacco Institute was looking for: “Senior cardiologists being contacted to determine interest in a review of relevant literature. … Ideal are people at or near retirement with no dependence on grant-dispensing bureaucracies.” The scientists who wrote letters included several private consultants and some academic researchers, such as Paul Switzer, a statistician at Stanford University.

    Several of the published letters did, to varying degrees, tip readers to their sponsorship. For example, a 17 March 1993 JAMA letter from Chris Collett of Theodor D. Sterling and Associates Ltd. in Vancouver stated that “This comment was supported by the Tobacco Institute.” Others were less forthcoming. In a 6 July 1993 letter to JNCI by statistician Gio Batta Gori, a statement notes that Gori is a former deputy director of NCI's Division of Cancer Causes and Prevention, then adds: “On occasion, Dr. Gori has consulted for the Tobacco Institute.”

    Gori, a consultant in Bethesda, Maryland, says he spent “several hours” on the letters and defends the payments, which included $3555 for the JNCI letter and $6000 for a letter in The Wall Street Journal that apparently wasn't published. “Do you think scientists live out of air? Everybody gets paid by somebody,” he says.

    Although editors contacted by Science were unfazed by the payments, at least one journal is quietly changing its disclosure rules. JNCI Editor-in-Chief Barnett Kramer says his journal plans to modify its rules to make ties more explicit. But “as long as it meets the criteria for publication, we may still publish it,” he says. Lundberg agrees: “If the content is solid, that's what matters.”


    Albanians Vie for Control of Site

    1. Richard Stone

    About 2000 years ago, the Romans built a stunning theater in the coastal Albanian town of Butrint during their military conquests of the Balkans. Today the ruins are witness to another battle, this one between opposing political factions in Albania, for control of this internationally recognized archaeological site. Caught in the crossfire is a British foundation that hopes to protect the theater and other structures spanning nearly 3 millennia in a country trying to emerge from decades of global isolation.

    The battle over Butrint pits Albania's Ministry of Culture against the Institute of Archaeology and the Institute of Monuments, both based in Tirana, the capital. Last month, Culture Minister Edi Rama, appointed in April, stripped both institutes of authority at Butrint after an international body criticized them for failing to work together. The ministry itself took control, and Rama asked the London-based Butrint Foundation to help manage the site. Rama says that foreign support is critical to saving the deteriorating site: “The [Butrint] Foundation will help us develop research and excavations.”

    But on 28 July, archaeologists at the two institutes petitioned Albanian President Rexhep Meidani to block the collaboration. The opponents, who include former Monuments Director Reshad Gega, charge that the Butrint Foundation intends to profit from rising tourism at the site and to control revenue from future exhibitions of Butrint artifacts. Legislator Limoz Dizdari, head of the Albanian parliament's Culture Commission, told a Tirana newspaper that Albania “risks losing its national culture” if the U.K. foundation, which has collaborated with the archaeological institute on research projects at Butrint since 1994, is given a management role.

    However, Rama and others say the petitioners' real beef is with the new government. “They are hostile to reforms being taken in the country” to strengthen ties with the West, says Rama. “We'll show with concrete results that our decision is not against Albania's interests.”

    Archaeologists have traced Butrint's history to the 8th century B.C., when traders from Corfu are thought to have settled the site, on the tip of the Hexamil peninsula in southwest Albania. The Romans took control of Butrint around the 2nd century B.C., and the remains of Butrint's theater—along with a row of remarkable statues that includes the beautiful “goddess of Butrint”—were unearthed by Italian archaeologist Luigi Ugolini in the 1920s. His team later excavated the Temple of Asclepius, an Early Byzantine palace, and a baptistery with an exquisite mosaic floor featuring images of animals (see inset illustrations).

    During the Cold War, when Albania was shuttered from the outside world, Butrint was developed into a tourist attraction for Albanians. In 1992, after Albania opened its borders, the United Nations Educational, Scientific, and Cultural Organization (UNESCO) added Butrint to the list of World Heritage sites, making it eligible for U.N. assistance and a measure of oversight to help ensure its preservation. But during fighting early last year, thieves made off with artifacts from the Butrint museum, pumps for draining the water-logged site, and even interpretive signs. “We were lucky: Nothing supervaluable was stolen,” says Auron Tare, Albanian representative of the Butrint Foundation. Last April a UNESCO-supported conference in Saranda, Albania, convened to devise a plan to save Butrint, decried a recent “history of rivalry and a conspicuous lack of cooperation” between the two institutes, and urged that one entity be chosen to run the site.

    Despite the petition to the president, the Butrint Foundation hopes next year to launch a 5-year research and renovation program at the site. “Rama is under considerable fire for going forward, not backward,” says Butrint Foundation scientific director Richard Hodges, head of the Institute of World Archaeology at the University of East Anglia in Norwich, U.K. “The dispute is the political consequence of an effort to make Butrint a major national asset.”


    The (Political) Science of Salt

    1. Gary Taubes


    “Science … warns me to be careful how I adopt a view which jumps with my preconceptions, and to require stronger evidence for such belief than for one to which I was previously hostile. My business is to teach my aspirations to conform themselves to fact, not to try and make facts harmonize with my aspirations.”

    —Thomas Huxley, 1860

    In an era when dietary advice is dispensed freely by virtually everyone from public health officials to personal trainers, well-meaning relatives, and strangers on check-out lines, one recommendation has rung through 3 decades with the indisputable force of gospel: Eat less salt and you will lower your blood pressure and live a longer, healthier life. This has been the message promoted by both the National Heart, Lung, and Blood Institute (NHLBI) and the National High Blood Pressure Education Program (NHBPEP), a coalition of 36 medical organizations and six federal agencies. Everyone, not just the tens of millions of Americans who suffer from hypertension, could reduce their risk of heart disease and stroke by eating less salt. The official guidelines recommend a daily allowance of 6 grams (2400 milligrams of sodium), which is 4 grams less than our current average. This “modest reduction,” says NHBPEP director Ed Roccella, “can shift some arterial pressures down and prevent some strokes.” Roccella's message is clear: “All I'm trying to do is save some lives.”

    So what's the problem? For starters, salt is a primary determinant of taste in food—fat, of course, is the other—and 80% of the salt we consume comes from processed foods, making it difficult to avoid. Then there's the kicker: While the government has been denouncing salt as a health hazard for decades, no amount of scientific effort has been able to dispel the suspicion that it is not one. Indeed, the controversy over the benefits, if any, of salt reduction now constitutes one of the longest running, most vitriolic, and surreal disputes in all of medicine.

    On the one side are those experts—primarily physicians turned epidemiologists, and administrators such as Roccella and Claude Lenfant, head of NHLBI—who insist that the evidence that salt raises blood pressure is effectively irrefutable. They have an obligation, they say, to push for universal salt reduction, because people are dying and will continue to die if they wait for further research to bring scientific certainty. On the other side are those researchers—primarily physicians turned epidemiologists, including former presidents of the American Heart Association, the American Society of Hypertension, and the European and international societies of hypertension—who argue that the data supporting universal salt reduction have never been compelling, nor has it ever been demonstrated that such a program would not have unforeseen negative side effects. This was the verdict, for instance, of a review published last May in the Journal of the American Medical Association (JAMA). University of Copenhagen researchers analyzed 114 randomized trials of sodium reduction, concluding that the benefit for hypertensives was significantly smaller than could be achieved by antihypertensive drugs, and that a “measurable” benefit in individuals with normal blood pressure (normotensives) of even a single millimeter of mercury could only be achieved with an “extreme” reduction in salt intake. “You can say without any shadow of a doubt,” says Drummond Rennie, a JAMA editor and a physiologist at the University of California (UC), San Francisco, “that the [NHLBI] has made a commitment to salt education that goes way beyond the scientific facts.”

    At its core, the salt controversy is a philosophical clash between the requirements of public health policy and the requirements of good science, between the need to act and the institutionalized skepticism required to develop a body of reliable knowledge. This is the conflict that fuels many of today's public health controversies: “We're all being pushed by people who say, ‘Give me the simple answer. Is it or isn't it?’” says Bill Harlan, director of the office of disease prevention at the National Institutes of Health (NIH). “They don't want the answer after we finish a study in 5 years. They want it now. No equivocation. … [And so] we constantly get pushed into positions we may not want to be in and cannot justify scientifically.”

    The dispute over salt, however, is an idiosyncratic one, remarkable in several fundamental aspects. Foremost, many who advocate salt reduction insist publicly that the controversy is either a) nonexistent or b) due solely to the influence of the salt lobby and its paid consultant-scientists. Jeremiah Stamler, for instance, a cardiologist at Northwestern University Medical School in Chicago who has led the charge against salt for 2 decades, insists that the controversy has “no genuine scientific basis in reproducible fact.” He attributes the appearance of controversy to the orchestrated resistance of the food processing industry, which he likens to the tobacco industry in the fight over cigarettes, always eager to obfuscate the facts. “My considerable experience indicates that there is no scientific interest on the part of any of these people to tell the truth,” he says.

    While Stamler's position may seem extreme, it is shared by administrators at the NHBPEP and the NHLBI, which funds all relevant research in this country. Jeff Cutler, director of the division of clinical applications and interventions at NIH and an advocate of salt restriction for over a decade, told Science that even to publish an article such as this one acknowledging the existence of the controversy is to play into the hands of the salt lobby. “As long as there are things in the media that say the salt controversy continues,” Cutler says, “they win.” Roccella concurs: To publicize the controversy, he told Science, serves only to undermine the public health of the nation.

    After interviews with some 80 researchers, clinicians, and administrators throughout the world, however, it is safe to say that if ever there were a controversy over the interpretation of scientific data, this is it. In fact, the salt controversy may be what Sanford Miller calls the “number one perfect example of why science is a destabilizing force in public policy.” Now a dean at the University of Texas Health Sciences Center, Miller helped shape salt policy 20 years ago as director of the Center for Food Safety and Applied Nutrition at the Food and Drug Administration. Then, he says, the data were bad, but they arguably supported the benefits of salt reduction. Now, both the data and the science are much improved, but they no longer provide forceful support for the recommendations.


    That raises the second noteworthy aspect of the controversy: After decades of intensive research, the apparent benefits of avoiding salt have only diminished. This suggests either that the true benefit has now been revealed and is indeed small, or that it is nonexistent, and researchers believing they have detected such benefits have been deluded by the confounding influences of other variables. (These might include genetic variability; socioeconomic status; obesity; level of physical exercise; intake of alcohol, fruits and vegetables, or dairy products; or any number of other factors.)

    The controversy itself remains potent because even a small benefit—one clinically meaningless to any single patient—might have a major public health impact. This is a principal tenet of public health: Small effects can have important consequences over entire populations. If by eating less salt, the world's population reduced its average blood pressure by a single millimeter of mercury, says Oxford University epidemiologist Richard Peto, that would prevent several hundred thousand deaths a year: “It would do more for worldwide deaths than the abolition of breast cancer.” But even that presupposes the 1-millimeter drop can be achieved by avoiding salt. “We have to be sure that 1- or 2-millimeter effect is real,” says John Swales, former director of research and development for Britain's National Health Service and a clinician at the Leicester Royal Infirmary. “And we have to be sure we won't have equal and opposite harmful effects.”

    Decades have passed without a resolution because the epidemiologic tools are incapable of distinguishing a small benefit from no benefit or even from a small adverse effect. This has led to a literature so enormous and conflicting that it is easy to amass a body of evidence—what Stamler calls a “totality of data”—that appears to support a particular conviction definitively, unless one is aware of the other totality of data that doesn't.

    Over the years, advocates of salt reduction have often wielded variations on the “totality of data” defense to reject any finding that doesn't fit the orthodox wisdom. In 1984, for instance, David McCarron and colleagues from the Oregon Health Sciences University in Portland published in Science an analysis of a national health and nutrition database suggesting that salt was harmless. They were taken to task in these pages by Sanford Miller, Claude Lenfant, director of NHLBI, and Manning Feinleib, then head of the National Center for Health Statistics. Among their criticisms was that McCarron and colleagues had not “attempt[ed] to square their conclusions with the abundance of population-based and experimental data suggesting that dietary sodium indeed plays an important role in hypertension.” At the time of the letter, however, Lenfant's NHLBI was about to fund perhaps the largest international study ever done, known as Intersalt, precisely to determine whether salt did play such a role. And even Stamler, the motivating force behind Intersalt, was describing the literature on salt and blood pressure at the time as “replete with inconsistent and contradictory reports.”

    One-sided interpretations of the data have always been endemic to the controversy. As early as 1979, for instance, Olaf Simpson, a clinician at New Zealand's University of Otago Medical School, described it as “a situation where the most slender piece of evidence in favor of [a salt-blood pressure link] is welcomed as further proof of the link, while failure to find such evidence is explained away by one means or another.” University of Glasgow clinician Graham Watt calls it the “Bing Crosby approach to epidemiological reasoning”—in other words, “accentuate the positive, eliminate the negative.” Bing Crosby epidemiology allows researchers to find the effect they're looking for in a swamp of contradictory data but does little to establish whether it is real.

    This situation is exacerbated by a remarkable inability of researchers in this polarized field to agree on whether any particular study is believable. Instead, it is common for studies to be considered reliable because they get the desired result. In 1991, for instance, the British Medical Journal (BMJ) published a 14-page, three-part “meta-analysis” by epidemiologists Malcolm Law, Christopher Frost, and Nicholas Wald of the Medical College of St. Bartholomew's Hospital in London. Their conclusion: The salt-blood pressure association was “substantially larger” than previously appreciated. That same year, Swales deconstructed the analysis, which he describes as “deeply flawed,” at the annual meeting of the European Society of Hypertension in Milan. “There was not a single person in the room who felt the [BMJ] analysis was worth anything after that,” says clinician Lennart Hansson of the University of Uppsala in Sweden, who attended the meeting and is a former president of both the international and European societies of hypertension. Swales's critique was then published in the Journal of Hypertension.

    Just 2 years later, however, the NHBPEP released a landmark report on the primary prevention of hypertension, in which the government first recommended universal salt reduction. The BMJ meta-analysis was cited repeatedly as “compelling evidence of the value of reducing sodium intake.” This spring, however, it was still possible to get opinions about the BMJ review from equally respected researchers ranging from “reads like a New Yorker comedy piece” and the “worst example of a meta-analysis in print by a long shot” to “competently done and competently analyzed and interpreted” and a seminal paper in the field.

    Crystallizing a debate

    The case against salt begins with physiological plausibility. Eat more salt, and your body will maintain its sodium concentration by retaining more water. “If you go on a salt binge,” says Harvard Medical School nephrologist Frank Epstein, “you will retain salt and with it a proportionate amount of water until your kidneys respond and excrete more salt. In most people, you will detect a slight increase in blood pressure when body fluids are swollen like this, although there is a very broad spectrum of responses.”

    Behind this spectrum is a homeostatic mechanism that has been compared to a Russian novel in its complexity. The cast of characters includes some 50 different nutrients, growth factors, and hormones. Sodium, for instance, is important for maintaining blood volume; potassium for vasodilation or constriction; and calcium for vascular smooth muscle tone. Increase your caloric intake, and your sympathetic nervous system responds to constrict your blood vessels, thus raising your blood pressure. Decrease your calories, and your blood pressure falls. To make matters still more complicated, the interplay of these variables differs with age, sex, and even race. Most researchers believe that a condition known as salt sensitivity explains why the blood pressure of some individuals rises with increased salt but not others, but even that is controversial, says Harlan. No diagnostic test exists for salt sensitivity other than giving someone salt and seeing what happens, which still won't predict whether the sensitivity is lifelong or transitory. Despite this complexity, most researchers still believe it makes physiological sense that populations with high-salt diets would have more individuals with high blood pressure than those with low salt diets, and that lowering salt intake would lower blood pressure.


    By the 1970s, when the government began recommending salt reduction to treat hypertension—defined as systolic blood pressure higher than 140 mmHg and diastolic higher than 90 mmHg (140/90 mmHg)—the physiological plausibility had been supplemented by a grab bag of not particularly definitive studies and clinical lore. In the 1940s, for instance, Duke University clinician Wallace Kempner demonstrated that he could successfully treat hypertensive patients with a low-salt, rice-and-peaches diet. For years Kempner's regimen was the only nonsurgical treatment for severe hypertension, a fact that may have done more than anything to convince an entire generation of clinicians of the value of salt reduction. In a seminal 1972 paper, Lewis Dahl, a physician at Brookhaven National Laboratory in Upton, New York, and the primary champion of salt reduction in this country until his death in 1975, claimed it was proven that a low-salt diet reduced blood pressure in hypertensives. When it didn't, he said, that only proved that the patient had fallen off the diet, “all protestation to the contrary, notwithstanding.” Whether it was low salt that explained the diet's effect is still debatable, however. Kempner's regimen was also extraordinarily low in calories and fat and high in potassium, factors that themselves are now known to lower blood pressure.

    Dahl furthered the case for a salt-blood pressure link by breeding a strain of salt-sensitive hypertensive rats. Researchers still cite this work as compelling evidence for the role of salt in human hypertension. As Simpson pointed out in 1979, however, Dahl's rats became hypertensive only if fed an amount of salt equivalent to more than 500 grams a day for an adult human—“probably outside the area of relevance,” Simpson noted. Lately, researchers have been touting a 1995 study of chimps fed a high-salt diet. But Harlan notes that “it's unlikely” that any existing animal models of hypertension are particularly relevant to humans.

    Throughout the early years of the controversy, the most compelling evidence against salt came from a type of epidemiologic study known as an “ecologic” study, in which researchers compared the salt intake of indigenous populations—the Yanomamo Indians of Brazil, for instance—that had little or no hypertension and cardiovascular disease to that of industrialized societies. Inevitably the indigenous populations ate little or no salt; the industrialized societies ate a lot. While the Yanomamo ate less than a gram of salt daily, for instance, the northern Japanese ate 20 to 30 grams—the highest salt intake in the world—and had the highest stroke rates. Such findings were reinforced by migration studies, in which researchers tracked down members of low-salt communities who had moved to industrialized areas only to see both their salt intake and blood pressure rise.

    The findings led researchers to postulate an intuitive Darwinian argument for salt reduction: Humans evolved in an environment where salt was scarce, and so those who survived were those best adapted to retaining salt. This trait, so the argument goes, would have been preserved even though we now live in an environment of salt abundance. By this logic, the appropriate intake of salt is that of the primitive societies—a few grams a day—and all industrialized societies eat far too much and pay for it in heart disease and stroke.

    The catch to this accumulation of data and hypotheses was that it included only half the data. The other half was the half that didn't fit—in particular, data from the epidemiologic studies known as intrapopulation studies. These compared salt intake and blood pressure in individuals within a population—males in Chicago, for instance—and invariably found no evidence that those who ate a lot of salt had higher blood pressure than those who ate little. Among the intrapopulation studies that came up negative was an analysis of 20,000 Americans conducted by the National Center for Health Statistics around 1980.


    Neither kind of study was capable of giving a definitive answer, however. The ecologic studies were certainly the least sound scientifically, and epidemiologists today put little stock in them. The potentially fatal flaw in ecologic studies is always the number of variables other than the one at issue that might differ between the populations and explain the relevant effect. Populations that eat little salt, for instance, also consume fewer calories; eat more fruits, vegetables, and dairy products; are leaner and more physically active; drink less alcohol; and are less industrialized. Any one of these differences or some combination of them might be responsible for the lower blood pressure. Indigenous people also tend to die young from infectious diseases or trauma, notes Epstein, while industrialized societies live long enough to die of heart disease.

    Both ecologic and intrapopulation studies also suffer from the remarkable difficulty of accurately assessing average blood pressure—which can vary greatly from day to day—or a lifetime intake of salt. Most of the early ecologic studies based their assessments of salt intake on guesses rather than measurements. In 1973, when University of Michigan anthropologist Lillian Gleibermann published what's still considered a seminal paper linking salt and blood pressure, she based her conclusions on 27 ecologic studies, only 11 of which actually tried to measure sodium intake. A 24-hour collection of urine is considered to be the best assessment of salt intake, because we quickly excrete in our urine all the salt we consume. But even that will only reflect the salt intake of those 24 hours, not necessarily of an entire month, year, or lifetime. “You need at least five to 10 measures of sodium in urine collected on different days to get a measure of habitual intake,” says Daan Kromhout, a nutritional epidemiologist at the National Institute of Public Health and the Environment in the Netherlands. “You can't do that in an epidemiologic field situation.”

    To researchers who accept the salt-blood pressure hypothesis, these measurement problems served to explain why intrapopulation studies wouldn't see an association even if one existed. Quite simply, the link between salt and blood pressure, however potent, would likely be washed out by the measurement errors. Moreover, any experiment large enough to have the statistical power to overcome these errors would be prohibitively expensive.

    In the early 1980s, London School of Hygiene and Tropical Medicine epidemiologist Geoffrey Rose suggested another reason why the intrapopulation studies might fail to detect benefits of salt reduction that could still have a significant public health impact. Rose speculated that if the entire developed world consumed too much salt, as ecologic studies suggested, then epidemiology would never be able to link salt to hypertension, regardless of how causal the relationship. Imagine, he wrote, if everyone smoked a pack of cigarettes daily; then any intrapopulation study “would lead us to conclude that lung cancer was a genetic disease … since if everyone is exposed to the necessary agent, then the distribution of cases is wholly determined by individual susceptibility.” Thus, as with salt and high blood pressure, the clues would have to be “sought from differences between populations or from changes within populations over time.” By the same logic, cutting salt consumption a small amount might have little effect on a single individual—just as going from 20 cigarettes to 19 would—but a major impact on mortality across an entire population.

    Although Rose's proposition made intuitive sense, it still rested on the unproven conjecture that avoiding salt could reduce blood pressure, a conjecture that was beginning to seem extraordinarily resistant to any findings that might negate it. In 1979, for instance, Stamler and his Northwestern colleagues tested the hypothesis in an intrapopulation study of Chicago schoolchildren. They compared blood pressure in 72 children to salt intake, estimated from seven consecutive 24-hour urine samples, enough to reliably reflect habitual sodium intake. They reported a “clear-cut” relationship between sodium and blood pressure in the children but then tried twice to reproduce the result and failed twice.


    “A variety of potential explanations of this phenomenon could be advanced,” the authors wrote, one of which was the obvious: “No relationship in fact exists between sodium and [blood pressure]. …” They then listed five reasons why they might have missed the expected relationship—insensitive measurement techniques, for instance, or genetic variability obscuring the role of sodium, or the possibility that “the true relationship is not yet evident in children.” Because the first of the three studies was positive, Stamler and his colleagues concluded that their data were “not wholly negative” and “do in fact suggest a weak and inconsistent relationship.”

    This logic served to manifest what Simpson called “the resilience and virtual indestructibility of the salt-hypertension hypothesis. Negative data can always be explained away.”

    “Another thing I must point out is that you cannot prove a vague theory wrong. … Also, if the process of computing the consequences is indefinite, then with a little skill any experimental results can be made to look like the expected consequences.”

    —Richard Feynman, 1964

    Through the early 1980s, the scientific discord over salt reduction was buried beneath the public attention given to the benefits of avoiding salt. The NHBPEP had decreed since its inception in 1972 that salt was an unnecessary evil, a conclusion reached as well by a host of medical organizations, not to mention the National Academy of Sciences and the Surgeon General. By 1978, the Center for Science in the Public Interest, a consumer advocacy group, was describing salt as “the deadly white powder you already snort” and lobbying Congress to require food labeling on high-salt foods. In 1981, the FDA launched a series of “sodium initiatives” aimed at reducing the nation's salt intake.

    Not until after these campaigns were well under way, however, did researchers set out to do studies that might be powerful enough to resolve the underlying controversy. The first was the Scottish Heart Health Study, launched in 1984 by epidemiologist Hugh Tunstall-Pedoe and colleagues at the Ninewells Hospital and Medical School in Dundee, Scotland. The researchers used questionnaires, physical exams, and 24-hour urine samples to establish the risk factors for cardiovascular disease in 7300 Scottish men. This was an order of magnitude larger than any intrapopulation study ever done with 24-hour urine samples. The BMJ published the results in 1988: Potassium, which is in fruits and vegetables, seemed to have a beneficial effect on blood pressure. Sodium had no effect.

    With this result, the Scottish study vanished from the debate. Advocates of salt reduction argued that the negative result was no surprise because the study, despite its size, was still not large enough to overcome the measurement problems that beset all other intrapopulation studies. When the NHBPEP recommended universal salt reduction in its landmark 1993 report, it cited 327 different journal articles in support of its recommendations. The Scottish study was not among them. (In 1998, Tunstall-Pedoe and his collaborators published a 10-year follow-up: Sodium intake now showed no relationship to either coronary heart disease or death.)

    The second collaboration was Intersalt, led by Stamler and Rose. Unlike the relentlessly negative Scottish Heart Health Study, Intersalt would become the most influential and controversial study in the salt debate. Intersalt was designed specifically to resolve the contradiction between ecologic and intrapopulation studies. It would compare blood pressure and salt consumption, as measured by 24-hour urine samples, from 52 communities around the globe, from the highest to the lowest extremes of salt intake. Two hundred individuals—half males, half females, 50 from each decade of life between 20 and 60—were chosen at random from each population. In effect, Intersalt would be 52 small but identical intrapopulation studies combined into a single huge ecologic study.

    After years of work by nearly 150 researchers, the results appeared in the same 1988 BMJ issue that included the Scottish Heart Health Study. Intersalt had failed to confirm its primary hypothesis, which was the existence of a linear relationship between salt intake and blood pressure. Of the 52 populations, four were primitive societies like the Yanomamo with low blood pressure and daily salt intake below 3.5 grams. They also differed, however, in virtually every other imaginable way from the 48 industrialized societies that had higher blood pressure. The remaining 48 revealed no relationship between sodium intake and blood pressure. The population with the highest salt intake, for instance—in Tianjin, China, consuming roughly 14 grams a day—had a median blood pressure of 119/70 mmHg, while the one with the lowest salt intake—a Chicago African-American population at 6 grams a day—had a median blood pressure of 119/76 mmHg. Only body mass and alcohol intake correlated with blood pressure in this comparison.

    The Intersalt researchers did derive two positive correlations between salt and blood pressure. One weak association appeared when they treated the 10,000-plus subjects as a single large population rather than 52 distinct populations. It implied that cutting salt intake from 10 grams a day to four would reduce blood pressure by 2.2/0.1 mmHg. The more potent association was between salt intake and the rise in blood pressure with age: Populations that ate less salt experienced a smaller rise than did populations that ate more salt. If this relationship was causal, Intersalt estimated, then cutting salt intake by 6 grams a day would reduce the average rise in blood pressure between the ages of 25 and 55 by 9/4.5 mmHg.

    These findings made Intersalt Rorschach-like in its ability to generate conflicting interpretations. John Swales wrote off the results in an accompanying BMJ editorial, saying the potential benefit, if any, was so small it “would hardly seem likely to take nutritionists to the barricades (except perhaps the ones already there).” Today, the majority of the researchers interviewed by Science, including Intersalt members such as Daan Kromhout and Lennart Hansson, see it as a negative study. Says Hansson, “It did not show blood pressure increases if you eat a lot of salt.”

    Stamler and other Intersalt leaders vehemently disagree. When the results were published, Stamler described them as “abundant, rich, and precise confirmation” of the sodium-blood pressure association and used them to advocate a 6-gram “reduction in salt intake for everyone.” In this view, the definitive positive finding was the correlation between salt consumption and rising blood pressure with age. Intersalt's Hugo Kesteloot, for instance, an epidemiologist at the Catholic University of Leuven in Belgium, says this was “the most interesting finding” and “confirmatory.” Officials at the NHBPEP and NHLBI sided with this interpretation. In 1993, the NHBPEP report on primary prevention of hypertension cited Intersalt for confirming the “strong positive relationship” between sodium intake and blood pressure reported by Dahl in 1972, which was precisely what it did not do. NHLBI's Cutler still describes the results as “overwhelmingly positive.”



    Critics, however, noted that the association Stamler and his colleagues found so telling—between salt intake and blood pressure rise with age—was not included among the hypotheses that Intersalt had clearly delineated in prestudy publications describing its methodology. This made the finding appear to be a post hoc analysis, a practice known pejoratively as “data dredging.” In such situations, the researchers are no longer testing hypotheses, as the scientific method requires, but are finding hypotheses that fit data already accumulated. Although this doesn't mean the new hypotheses are not true, it does mean they have not been properly tested.
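    The hazard the critics describe can be seen in a toy simulation (illustrative only; the variables and cutoff below are invented, not drawn from Intersalt): screen enough unrelated variables against the same outcome, and some will look significant by chance alone.

```python
import random

# Illustrative sketch (invented numbers, not Intersalt data): why post hoc
# "data dredging" is distrusted. Screen enough pure-noise variables against
# the same outcome and some will look "significant" by chance alone.
random.seed(0)

n_subjects = 50
n_hypotheses = 20

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

outcome = [random.gauss(0, 1) for _ in range(n_subjects)]

hits = 0
for _ in range(n_hypotheses):
    exposure = [random.gauss(0, 1) for _ in range(n_subjects)]
    # |r| > 0.28 is roughly the two-sided 5% significance cutoff for n = 50
    if abs(correlation(exposure, outcome)) > 0.28:
        hits += 1

print(f"chance 'findings' among {n_hypotheses} noise variables: {hits}")
```

    A hypothesis chosen after such a screen may still be true, but it has to be confirmed on fresh data before it counts as tested.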

    Because Intersalt wasn't designed to test a link between salt and a rise in blood pressure with age, explains NIH's Bill Harlan, the association reported later could be treated as no more than an inference: “If you [were going] in with that as a specific hypothesis, you would have set the study up differently,” for example, by including a wider range of ages and a larger sample of each population. David Freedman, a UC Berkeley statistician, puts it more bluntly, saying that the conclusion about salt and rising blood pressure with age looked like “something they dragged in when the primary analyses didn't go their way.”

    Although Intersalt members agree that testing a hypothesized link between salt and rising blood pressure with age was not in their proposals, they insist it was always part of the plan. “It just wasn't in by omission. Stupidly,” says Intersalt's Paul Elliot, an epidemiologist at London's Imperial College School of Medicine. Alan Dyer of Northwestern University, the collaboration's biostatistician, says, “It just was one of those things that didn't get written down.” Stamler insists it was recorded in the minutes of a meeting and in an early publication, and that the accusations of “retrospective data-dredging” are “factually wrong” and should be retracted.

    Far from delivering the last word on salt, Intersalt had dissolved in ambiguous data and contradictory interpretations. And that was just round one.

    Intersalt tries again

    In 1993, after the NHBPEP cited Intersalt as supporting a recommendation of universal sodium reduction, the Salt Institute, a Washington-based trade organization of salt producers, began a concerted effort to obtain Intersalt's raw data. The institute's director, Richard Hanneman, says he wanted to examine the reported association between salt intake and rise in blood pressure with age. He and some of the researchers who consult for the institute for $3000 a year—McCarron; University of Alabama, Birmingham, cardiologist Suzanne Oparil; University of Toronto epidemiologist Alexander Logan; and UC Davis nutritionist Judy Stern—were puzzled by what they saw as a contradiction in the data. If higher salt intake resulted in a greater increase in blood pressure as the population aged, they reasoned, the centers with high salt intakes should have had higher median blood pressures, which wasn't the case. Only if the Intersalt centers with high salt intake had lower blood pressure to start with could their median blood pressures have come out roughly equal, as Intersalt reported. While this seemed counterintuitive, Intersalt had not published the data—the blood pressure of the 20- to 29-year-olds—that would allow the hypothesis to be checked independently.

    Hanneman failed to get Intersalt's raw data, but he did obtain enough secondary data to publish a paper in May 1996, in an issue of the BMJ dedicated to Intersalt. Hanneman claimed to confirm that Intersalt centers with higher salt intake did indeed have lower systolic blood pressures in their youngest cohorts. Accompanying editorials, all written by outspoken advocates of salt reduction, harshly rejected the analysis. Malcolm Law, for instance, dismissed Hanneman's ideas as a “bizarre hypothesis” and an example of “the lengths to which a commercial group will go to protect its market when presented with clear evidence detrimental to its interests.” But none of these commentators addressed the apparent contradiction in Intersalt's claims. Other researchers who read the paper—Intersalt collaborator Friedrich Luft, for instance, a nephrologist at Berlin's Humboldt University, and Freedman, who read it at Science's request—noted flaws in Hanneman's reanalysis but also agreed that the Intersalt findings seemed inexplicable.

    Dueling trends.

    The relation of salt and blood pressure for all 52 Intersalt populations (red) and for the 48 industrialized populations without very low salt consumption (brown).


    This particular dispute turned out to be moot, however, given the controversy ignited by another paper in the same issue: Intersalt's own reanalysis of its data. Under the title Intersalt Revisited, Stamler and his colleagues addressed what they considered a problem in their original publication: that they may have underestimated the true association between salt and blood pressure.

    Their reanalysis stepped into one of the most controversial areas in epidemiology, known as regression dilution bias. The gist is that if an association between two variables—such as salt and blood pressure—is real, any errors in measuring exposure to either variable will only serve to “dilute” the apparent cause and effect. In this case, because both 24-hour urine samples and single blood pressure readings are likely to stray from the long-term averages, Intersalt's analysis would have underestimated the true strength of the effect of salt on blood pressure. “If [the association] is real,” says Elliot, “it is biased toward the null, and so you have to accept the reality that it must be larger than measured.” Statistical techniques could then be used to correct it upward to its proper size. The catch, of course, is that such corrections would inflate a spurious association as well.
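    A minimal simulation, using invented numbers rather than Intersalt's data, shows both halves of the argument: measurement noise flattens a real slope toward the null, and the classical correction restores it only when the assumed error variance is right; the same arithmetic would equally inflate a spurious slope.

```python
import random

# Illustrative sketch with invented numbers (not Intersalt's actual method):
# regression dilution. Noise in the measured exposure attenuates the fitted
# slope; dividing by the reliability ratio scales it back up, but would
# inflate a spurious association just as readily.
random.seed(1)

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

n = 5000
true_salt = [random.gauss(9, 2) for _ in range(n)]            # long-term intake, g/day
bp = [120 + 0.5 * s + random.gauss(0, 5) for s in true_salt]  # true slope: 0.5

# A single 24-hour urine sample strays from the long-term average:
measured = [s + random.gauss(0, 2) for s in true_salt]

b_true = slope(true_salt, bp)    # close to the true 0.5
b_diluted = slope(measured, bp)  # attenuated toward zero

# Classical correction: divide by reliability = var(true) / var(measured),
# here 2**2 / (2**2 + 2**2) = 0.5 -- assuming the error variance is known.
b_corrected = b_diluted / 0.5

print(f"true {b_true:.2f}, diluted {b_diluted:.2f}, corrected {b_corrected:.2f}")
```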

    Stamler and colleagues, certain of the reality of the salt-blood pressure link, now corrected their 1988 estimates for regression dilution bias. With a few other corrections, the net effect was to enhance the apparent benefits of salt reduction from something ambiguous in 1988 to consistent, “strong, positive” associations in 1996. Cutting daily salt intake by 6 grams, they now concluded, would drop blood pressure by 4.3/1.8 mmHg, a benefit three times larger than originally estimated. “Now the position has been clarified,” wrote Law. “All the Intersalt analyses confirm salt as an important determinant of blood pressure.”

    But the position had not been clarified. The BMJ editors had initially commissioned a commentary to run with Intersalt's reanalysis from epidemiologists George Davey Smith of the University of Bristol in the United Kingdom and Andrew Phillips of the Royal Free Hospital School of Medicine in London. The critique they submitted was so damning of Intersalt Revisited, however, that the BMJ editors felt compelled to reveal it to the Intersalt authors before publication. According to BMJ editor Richard Smith, Stamler and his colleagues objected so strongly to the commentary that the BMJ agreed to run it 6 weeks later, disassociated, at least in time, from the work it called into question.

    Positive finding?

    Intersalt data show a correlation between salt consumption and the rise in blood pressure with age.

    As Davey Smith explained to Science, their commentary identified a litany of problems with Intersalt Revisited, from “O-level mathematical mistakes” to basing their statistical corrections on assumptions unsupported by data. For instance, in order to correct for regression dilution bias, Stamler and his colleagues assumed that changes in sodium intake and blood pressure in any individual were independent of each other over periods of a few weeks. But if blood pressure and salt intake did fluctuate together, Davey Smith and Phillips noted, then the Intersalt corrections would result in “an inappropriately inflated estimate.” The two epidemiologists cited studies concluding that blood pressure and salt intake are related in the short term and pointed out that “the very hypothesis under test—that sodium intake … is related to blood pressure—would predict [these] associations.”

    In their response, published in the same issue, Stamler and his colleagues insisted that their corrections were legitimate because the “totality of the evidence—the only sound basis for judgment on this matter—supports the conclusion that this association is causal.” They cited the “independent expert groups, national and international,” that had concluded habitual high salt intake was a causal factor of high blood pressure, although they neglected to mention that those groups had all relied on Intersalt circa 1988 to reach their conclusions. Intersalt also listed seven reasons why their original estimate was “probably underestimated” but seemed to make no attempt to find reasons why it might have been overestimated. “It was embarrassing to read,” Harvard School of Public Health epidemiologist Jamie Robins told Science, while describing Intersalt's arguments as “arcane, bizarre, and special pleading.”

    The commentary and response led to yet more letters in the BMJ the following August. Now Davey Smith and Phillips were joined by a half-dozen other researchers criticizing Intersalt Revisited, such as Nick Day, head of the biostatistics unit of the British Medical Research Council (MRC) in Cambridge. “As soon as you start making big corrections [to your original findings],” says Day, “people begin to get suspicious.”

    Day describes the problem with Intersalt Revisited as one of “garbage in, garbage out” and believed it had implications well beyond the salt controversy: Stamler and his colleagues, like many epidemiologists, assumed they could correct for underlying uncertainties in their data with statistical methods. “It doesn't work,” he says. “There will always be uncertainty surrounding what you've done, and if what you've done makes quite a serious difference to the crude observed relationships, then it puts a great haze of doubt over the whole thing. If you have an underlying uncertainty—that is, ‘garbage in’—it is never going to be refined into gold.”

    This assessment is rejected by Stamler and most of his Intersalt Revisited co-authors, although not all of them. Michael Marmot, for instance, an epidemiologist at the University College London Medical School and a signatory of Intersalt Revisited, told Science that, in retrospect, the reanalysis was not compelling. “Somebody looking at this from the outside,” he says, “could well take the view that [the corrections] were done for one reason alone, which was to increase the size of the associations. They would not be crazy for taking such a view just based on reading the paper.”

    Trials and tribulations

    In the grand scheme of the salt controversy, a study such as Intersalt, revisited or not, should have been irrelevant. After all, as researchers on both sides agree, Intersalt was an observational study showing at best weak associations in a field of research where randomized, controlled clinical trials—the “gold standard” of epidemiology—should be able to establish a cause and effect, if any exists. “You kind of can't believe it's an issue,” says Robins, for instance. “They can actually run randomized experiments [on salt reduction], and they've run lots of them.” All a researcher needs is to randomize subjects into two groups, one reducing salt intake, one eating normally, and then see what happens.

    But the results were as ambiguous as anything else in the salt dispute. Doing the trials correctly turned out to be surprisingly difficult. Choosing low-salt foods, for instance, inevitably leads to changing other nutrients, as well, such as potassium, fiber, and calories. Placebo effects and subtle medical intervention effects have to be avoided carefully. “If you just study people for 10 weeks, you will detect some changes over time which have nothing to do with the experiment you're carrying out,” says Graham Watt, who in the mid-1980s ran three of the first double-blind, placebo-controlled trials on salt reduction.

    A technique known as meta-analysis has lately become the route to clarity in such situations. The idea is that if a host of clinical trials gives ambiguous results, the true size of the effect might be assessed by pooling the data from all the studies in such a way as to gain statistical power. But meta-analysis is controversial in its own right. It might have been the ideal solution to the salt controversy had not the salt controversy turned out to be the ideal situation to demonstrate the questionable nature of meta-analysis. As Harvard School of Public Health epidemiologist Charles Hennekens puts it: “It's all so arbitrary, and you'd like to believe it's arbitrary in a random way, but it turns out to be arbitrary in the way the investigators want it to be.”
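    The pooling arithmetic itself is simple; the controversy lies in which trials get included. A sketch with invented trial numbers (not the actual salt-trial data) shows the standard inverse-variance weighting, in which precise trials count more and the pooled estimate gains statistical power:

```python
# Illustrative sketch of the pooling idea behind meta-analysis
# (hypothetical trial results, not the actual salt-trial data).

# (effect on systolic blood pressure in mmHg, standard error) per trial
trials = [(-2.1, 1.5), (-0.4, 0.8), (-3.0, 2.2), (0.6, 1.1), (-1.2, 0.9)]

# Weight each trial by the inverse of its variance (1 / SE^2):
weights = [1 / se ** 2 for _, se in trials]
pooled = sum(w * eff for (eff, _), w in zip(trials, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.2f} mmHg (SE {pooled_se:.2f})")
# -> pooled effect: -0.74 mmHg (SE 0.48)
```

    The output depends entirely on which tuples go into `trials`, which is Hennekens's point: the selection, not the arithmetic, is where the arbitrariness enters.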

    In 1991, Cutler, Elliot, and collaborators generated the first meta-analysis of randomized clinical trials on the salt question. They found 21 trials in hypertensive subjects, although only six were placebo-controlled, and six in normotensives, of which only those done by Watt were double-blind and placebo-controlled, and those showed zero benefit from salt reduction. By pooling these trials, however, controlled and uncontrolled alike, Cutler and Elliot deduced that a 3- to 6-gram reduction in daily salt consumption would drop blood pressure by 5/3 mmHg in hypertensives and 2/1 mmHg in normotensives. This relationship was “likely to be causal,” they then concluded, because “the results are consistent with a large body of epidemiological, physiological, and animal experimental evidence.” This, of course, was exactly the point of contention.

    Cutler's meta-analysis was promptly overshadowed by the three-part extravaganza published in the BMJ in April 1991 by Malcolm Law and his colleagues. Their conclusions were unprecedented: They deduced that salt reduction has an effect on blood pressure nearly double that found by Cutler and Elliot. Law and his colleagues predicted that “moderate” universal salt reduction—cutting daily intake by only 3 grams—would benefit the population more than treating all hypertensives with drugs, while cutting intake by 6 grams a day would prevent 75,000 deaths a year in Britain alone.

    They derived these conclusions in three steps. First, they analyzed the ecologic studies to estimate the average apparent effect of salt on blood pressure. They then “quantitatively reconciled” this estimate with the numbers derived from the intrapopulation studies after suitably correcting those upward for regression dilution bias. Having demonstrated that the ecologic and intrapopulation studies were not in fact contradictory, as had been believed for 20 years, they then proceeded to determine whether this reconciled estimate was consistent with all the relevant clinical trials. These, says Law, turned out to be dead on, thus demonstrating that all studies were in agreement about the considerable benefits of salt reduction.

    Although this “quantitative review,” as Law calls it, has its supporters, they are in a minority. Its critics—including epidemiologists and statisticians who read the paper at the request of Science—insist the work is so flawed as to be effectively meaningless. Take the selection of which studies to include and which to discard: In the analysis of the ecologic studies, Law and his colleagues chose 23 studies done between 1960 and 1984, and one from Szechuan, China, published in 1937. They then excluded Intersalt, the mother of all ecologic studies, from the analysis because its well-calibrated, standardized blood pressure measurements often yielded numbers 15 mmHg lower than those made in comparable communities by the older, uncalibrated, nonstandardized studies. Critics likened this decision to tossing the baby and keeping the bath water. Law told Science that they excluded Intersalt because the original results were “inadequate” and “too low,” but that this was not the case with “Intersalt Revisited,” a study he would have included had it been available.

    As for the analysis of clinical trials, noted Swales, Law and his colleagues synthesized the results of 78 trials, of which only 10 were actually randomized. One study even predated the era of modern clinical research. The fall in blood pressure that Law and his colleagues attributed to sodium, says Swales, was likely due to the “impact of poor controls.” Even Richard Smith, the BMJ editor who published the research, described it to Science as “not the best we've ever done.”





    Law, however, says the study has stood up well, noting that its findings agree with those of Intersalt Revisited. And despite the critiques, Law's meta-analysis is still one of the most highly cited papers in the salt literature and was one of the bedrocks—along with Intersalt, the study Law considered inadequate—of the 1993 NHBPEP primary prevention report.

    Poles apart

    Over the past 5 years, two conspicuous trends have characterized the salt dispute: On the one hand, the data are becoming increasingly consistent—suggesting at most a small benefit from salt reduction—while on the other, the interpretations of the data, and the field itself, have remained polarized. This was vividly demonstrated by two more salt-blood pressure meta-analyses. In 1993, with the appearance of the NHBPEP primary prevention report, the Campbell's Soup Co. enlisted the University of Toronto's Logan to do the first of them. Logan had studied salt reduction in the early 1980s and found it to be of “very little” use. With funding from Campbell's, he now identified 28 randomized trials in normotensives and 28 in hypertensives. Meanwhile, Cutler learned of Logan's new analysis and countered by updating his own.

    The results of the two studies were virtually identical—or at least, “more similar than they are different,” says Cutler, who based his new meta-analysis on 32 relevant studies. For a reduction of roughly 6 grams of salt, Cutler claimed the trials demonstrated a blood pressure benefit of 5.8/2.5 mmHg in hypertensives and 2.0/1.4 mmHg in normotensives. Logan claimed a benefit of 3.7/0.9 mmHg in hypertensives and 1.0/0.1 in normotensives. Considering the possible errors, says Robins, “those are the same data. The rest is smoke and mirrors.”

    Logan and Cutler then went about interpreting the data in opposite ways that happened to coincide with their established opinions. Logan and his collaborators noted that these estimates were probably biased upward by negative publication bias—in which studies finding no effect are not published—and by a placebo effect. They said there was some evidence suggesting that sodium restriction might be harmful and concluded that “dietary sodium restriction for older hypertensive individuals might be considered, but the evidence in the normotensive population does not support current recommendations for universal dietary sodium restriction.” Cutler and his colleagues claimed that the numbers did not appear to be biased upward by either a placebo effect or a negative publication bias. They said there was no evidence suggesting that salt reduction can be harmful and concluded that the data supported a recommendation of sodium restriction for both normotensives and hypertensives.

    Logan's paper got the better press, because it contradicted the established wisdom and was published in JAMA in 1996, a year before Cutler's paper appeared in the American Journal of Clinical Nutrition. But advocates of salt reduction—notably Graham MacGregor of St. George's Hospital Medical School in London, author of two popular cookbooks on low-salt and no-salt diets—suggested to reporters that Logan's meta-analysis could not be trusted because of a conflict of interest from the Campbell's funding. In a JAMA editorial accompanying Logan's meta-analysis, NHLBI director Claude Lenfant recommended that the study be ignored, in any case, on the familiar grounds that “the preponderance of evidence continues to indicate that modest reduction of sodium … would improve public health.”

    Despite Lenfant's assessment, the latest salt studies seem to agree with the negligible benefit of salt reduction suggested by Logan's interpretation. That was the bottom line of the University of Copenhagen meta-analysis, published in JAMA in May, and also of the NHLBI-funded Trials of Hypertension Prevention Phase II (TOHP II) published in March 1997. TOHP II, a 3-year clinical trial of 2400 people with “high normal” blood pressure, coordinated by Hennekens at Harvard Medical School, found that a 4-gram reduction in daily salt intake correlated with a 2.9/1.6-mmHg drop in blood pressure after 6 months. That benefit, however, had mostly vanished by 36 months, and Hennekens agrees that it could have been due to a medical intervention effect.

    Of all these studies, the one that may finally change the tenor of the salt debate was not actually about salt. Called DASH, for Dietary Approaches to Stop Hypertension, it was published in April 1997 in The New England Journal of Medicine. DASH suggested that although diet can strongly influence blood pressure, salt may not be a player. In DASH, individuals were fed a diet rich in fruits, vegetables, and low-fat dairy products. In 3 weeks, the diet reduced blood pressure by 5.5/3.0 mmHg in subjects with mild hypertension and 11.4/5.5 mmHg in hypertensives—a benefit surpassing what could be achieved by medication. Yet salt content was kept constant in the DASH diets, which meant salt had nothing to do with the blood pressure reductions.

    Adding up the evidence.

    In a meta-analysis of 56 clinical trials done since 1980 in people with normal blood pressure, extreme salt reduction offered little benefit.


    Indeed, if the DASH results stand up, says Day, they suggest that fruits and vegetables may be the true cause of the effects attributed to salt in the old ecologic studies. Societies that have high salt intakes tend to consume highly salted preserved foods simply because they do not have year-round access to fruits and vegetables. Now the DASH collaboration has embarked on a follow-up to differentiate the effects of salt from those of the DASH diet. The researchers are working with 400 subjects, randomized to either a control diet or the DASH diet and to three different levels of salt intake—3, 6, or 9 grams daily. Results are expected in 2 years.



    Picking your battles

    In 1976, when the salt controversy was new, Jean Mayer, then president of Tufts University, called salt “the most dangerous food additive of all.” Today the debate has devolved into an argument over whether extreme reductions in salt intake, perhaps impossible to achieve in the general population, can drop blood pressure by as much as 1 or 2 millimeters of mercury, and if so, whether anyone should do anything about it. For people with normal blood pressure, such a benefit is meaningless; for hypertensives, clinicians say that medications have a much greater effect at a cost of a few cents a day. But what works for the individual and what works for public health are still two different things. To Stamler, for instance, or Cutler, there is no question that a population that avoids salt will have less heart disease and fewer strokes. And salt intake, they argue, is far easier to change than, say, smoking or inactivity, because much can be accomplished by convincing industry to put less salt in processed foods.



    Whether it's worth it is the question. For the agencies involved to induce the public to avoid salt, they must convince individuals that it's bad for their individual health, which, for those with normal blood pressure, it almost assuredly isn't. Although this explains the single-mindedness of the promotional message out of the NHLBI and NHBPEP, it can also make the agencies and administrators look disingenuous. Moreover, public health experts firmly believe that the public can only be sold so many health recommendations. “How much of the government's moral weight do you expend on this particular issue?” says University of Toronto epidemiologist David Naylor. “You have to pick your battles. Is this a battle worth fighting?” Hammering on the benefits of salt reduction, say Naylor, Hennekens, and others, may come at the expense of advocating weight loss, healthy diets in general, and other steps that are significantly more beneficial.

    The argument that salt reduction is a painless route to lower blood pressure also assumes that there is no downside to this kind of social engineering. Social interventions can have unintended consequences, notes NIH's Harlan, which seemed to be the case, for instance, with the recommendation that the public consume less dietary fat. “It was a startling change to a lot of us,” Harlan says, “to see the proportion of fat in the diet go down and weight go up. Obviously it's not as simple as it once seemed.”

    The last 5 years have also seen two studies published—the latest this past March in The Lancet—suggesting that low-salt diets can increase mortality. Both studies were done by Michael Alderman, a hypertension specialist at New York City's Albert Einstein College of Medicine and president of the American Society of Hypertension. Epidemiologists—and Alderman himself—caution against putting too much stock in the studies. “They are yet more association studies,” says Swales. “Any insult you make of Intersalt you can make of those as well.” But Alderman also notes that only a handful of such studies comparing salt intake to mortality have ever been done, and none have come out definitively negative. “People just rely upon statements that [salt reduction] can't really do any harm,” says Swales. “It may or may not be true. Individual harmful effects can be as small as beneficial effects, and you can't detect those in clinical trials either.”

    After publication of his second study, Alderman recruited past and present presidents of hypertension societies and the American Heart Association and wrote to Lenfant at the NHLBI “urging prompt appointment of an independent panel of qualified medical and public health scientists to review existing recommendations [on salt consumption] in light of all available data.” In April Lenfant told Science that he had agreed to proceed with the review. If such a panel should convene, Hennekens has one observation worth keeping in mind: “The problem with this field is that people have chosen sides,” he says. “What we ought to do is let the science drive the system rather than the opinions.”


    Superstrong Nanotubes Show They Are Smart, Too

    1. Robert F. Service


    Just 2 years ago, chemist Richard Smalley of Rice University in Houston, Texas, shared the Nobel Prize in chemistry for his part in discovering an entirely new form of carbon: spherical molecules dubbed fullerenes that are radically different from the stacked sheets of graphite and the glittering pyramidal matrix of diamond. But ask Smalley what interests him today, and he doesn't mention the prizewinning spheres. What's captured his interest and that of hundreds of other scientists are nanotubes, the fullerenes' elongated cousins.

    First discovered 7 years ago by Japanese electron microscopist Sumio Iijima, nanotubes are essentially tiny strips of graphite sheet, rolled into tubes and capped with half a fullerene at each end. They are just a few billionths of a meter, or nanometers, across and up to 100 micrometers long. And although they may resemble nothing more glamorous than microscopic rolls of chicken wire, nanotubes have emerged as stars of the chemistry world. They're stronger than steel, lightweight, and able to withstand repeated bending, buckling, and twisting; they can conduct electricity as well as copper or semiconduct like silicon; and they transport heat better than any other known material.

    With this roster of qualities, nanotubes “have to be good for something,” Smalley is fond of saying. Indeed, there's no shortage of ideas. The current list of possible uses includes: superstrong cables, wires for nanosized electronic devices in futuristic computers, charge-storage devices in batteries, and tiny electron guns for flat-screen televisions. “I think it's inevitable that there will be a whole field of organic chemistry devoted to analyzing, [functionalizing], and separating these things,” says Smalley.

    Cast your mind back 13 years ago to the discovery of fullerenes, and the hype may sound familiar. But this time around, researchers are confident that the promise will become reality—if they can solve some nagging fabrication problems. At least one Massachusetts materials company is convinced: Hyperion Catalysis International is already manufacturing nanotubes in bulk for use in plastics for the automotive and computer industries. The tubes help the normally insulating material conduct electrical charges away and prevent the buildup of static electricity. And researchers are already beavering away at more challenging tasks, such as turning nanotubes into electronic devices, battery components, and display elements. In comparing the likely utility of nanotubes and fullerenes, nanotube researcher Alex Zettl of the University of California, Berkeley, says: “If I were to write down all the different applications, I'd have a sheet with fullerene applications and a book for nanotubes. There's orders-of-magnitude difference in the potential.”

    In the pipeline

    The key to this potential lies in the unique structure of nanotubes and the defects that can form in their network of carbon bonds. Nanotubes come in two classes: One, called single-wall nanotubes (SWNTs), is made up of just a single layer of carbon atoms; the other, multiwalled nanotubes (MWNTs), consists of up to dozens of concentric tubes wrapped one inside another like a coaxial cable. MWNTs—the type Hyperion is making—are typically riddled with defects, because as the tubes take shape, defects that form get trapped by overlying tubes. Single-wall tubes, by contrast, are often defect free. The defects, or lack of them, “are terribly important,” says Smalley. “Virtually all of the special properties of nanotubes derive from their perfect graphitic structure.”

    That structure depends on the unique properties of its building material—carbon. Carbon is the elemental equivalent of the perfect neighbor, friendly and easygoing. Under intense pressure, carbon atoms form bonds with four neighboring carbons, creating the pyramidal arrangement of diamond. But carbon regularly forgoes that fourth bond and links up with just three neighbors, creating graphite's network of hexagonal rings. This arrangement leaves graphite with a host of unpaired electrons, which essentially float above or below the plane of carbon rings. In this arrangement, the electrons are more or less free to buzz around graphite's surface, which makes the material a good electrical conductor.

    But graphite has one weak point—its edges. Carbon atoms at the border of a graphite sheet are out on a limb, with additional unattached bonds looking for something with which to react. That's what makes nanotubes possible. When carbon vapor is heated to some 1200 degrees Celsius, carbon rings assemble, creating a small graphite sheet. But the edges are so energetically unstable that the graphite sheet begins to curl until two edges knit themselves together. Additional carbon rings come along to form the end caps, creating the quintessential nanotube. Because it lacks any edges at all, the tube is very stable.

    Like graphite, nanotubes have roving electrons that can move freely among the carbon rings. “The tubes inherit this same property,” says Smalley. Theorists suggested early on that nanotubes should be good conductors, but it was not until the past year that teams confirmed this with the help of scanning tunneling microscopes (STMs) that can pin down individual tubes and measure their conducting abilities. It is this area—using nanotubes as electronic materials—that is currently exciting the most interest among nanotube researchers. Several reports have recently demonstrated that SWNTs can function not only as conductors but also as semiconductors that can carry an electrical current under some conditions but not others. The demonstration created a buzz because semiconductor switches form the heart of computers.

    Geometry appears to be the key to this dual behavior. In January, teams led by Cees Dekker at Delft University of Technology in the Netherlands, as well as Charles Lieber at Harvard, confirmed theoretical predictions that the conducting properties of nanotubes are linked to the arrangement of hexagonal carbon rings around the tube. When those hexagons line up along the long axis of a nanotube, the tube conducts as easily as a metal. But twist the hexagons so that they spiral around the tube, and it will conduct like a semiconductor, carrying current only after a certain threshold voltage is applied to push it through.
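    The geometric rule at work here is usually stated in terms of a tube's chiral indices (n, m), which count how the hexagonal lattice wraps around the cylinder. The index notation and the divisible-by-three condition below come from the band theory behind these experiments, not from the article itself; a minimal sketch:

```python
# Standard (n, m) chirality rule for single-walled nanotubes.
# A tube is (approximately) metallic when n - m is divisible by 3;
# otherwise it has a band gap and behaves as a semiconductor.

def nanotube_character(n: int, m: int) -> str:
    """Classify an (n, m) nanotube as 'metallic' or 'semiconducting'."""
    return "metallic" if (n - m) % 3 == 0 else "semiconducting"

# "Armchair" tubes (n == m) are always metallic; "zigzag" tubes (m == 0)
# can go either way depending on n.
print(nanotube_character(10, 10))  # armchair: metallic
print(nanotube_character(10, 0))   # zigzag, 10 not divisible by 3: semiconducting
```

Roughly one tube in three satisfies the metallic condition, which is why a typical synthesis batch contains a mix of both characters.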

    If you can make nanotubes with two different characters, why not a single tube with a split personality? Last fall, Zettl's group isolated a single nanotube that had one metallic region and another semiconducting region (Science, 3 October 1997, p. 100). The hybrid tube is the result of a specific defect: Adjacent six-carbon hexagons are replaced by a five-carbon pentagon linked to a seven-carbon heptagon. This defect changes the way the rest of the hexagons wrap around the tube. One end of the tube, made up of spiraling hexagons, behaves like a semiconductor, while the other end—composed of hexagons in straight lines—is a metallic conductor. Such tubes, Zettl's group showed, can act as a molecular diode, a device that allows electrical current to flow in one direction, from a semiconductor to a metal, but not in reverse.

    And researchers keep coaxing their tiny tubes to display new electronic talents. In the 12 June issue of Science (p. 1744), Walt A. de Heer, a physicist at the Georgia Institute of Technology in Atlanta, and his colleagues confirmed another theoretical prediction: that nanotubes can carry current at room temperature with essentially no resistance, thanks to a phenomenon known as ballistic transport. De Heer likens the normal flow of electrons to water coursing down a river, slowed by the constant friction with the riverbed and collisions with rocks. Ballistic transport is more akin to water after it has spilled over the lip of a waterfall: It simply flies through space, unimpeded by friction or collisions.

    De Heer's group showed, by measuring the electrical transport down individual MWNTs, that if a tube has even one defect-free layer, electrons can stream through the tube without scattering off defects or atoms. “The upshot is that they move freely without losing energy,” says de Heer. Researchers have achieved ballistic transport in other types of electronic devices, but usually over very short distances and at temperatures just a whisker above absolute zero. Ballistic transport in nanotubes is akin to the way photons fly down optical fibers without losing energy. “We're coming into an area where electronics starts to resemble optics,” says de Heer. “It gives you all sorts of options for making low-loss circuitry,” says Smalley. “And mind you, that's at room temperature. Room temperature!”
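    The payoff of ballistic transport can be put in numbers. Even a perfect ballistic wire has a resistance floor set by the conductance quantum; for a metallic nanotube, which carries two conducting channels, that floor works out to roughly 6.5 kilohms. These quantized-conductance figures are standard mesoscopic-physics values, not numbers reported in the article:

```python
# Ideal resistance of a two-channel ballistic conductor, such as a
# defect-free metallic nanotube. The resistance comes entirely from the
# contacts; the tube itself adds essentially nothing.

h = 6.626e-34   # Planck constant, J*s
e = 1.602e-19   # elementary charge, C

G0 = 2 * e**2 / h           # conductance quantum per channel, in siemens
R_ballistic = 1 / (2 * G0)  # two channels conduct in parallel

print(f"{R_ballistic / 1e3:.2f} kOhm")  # prints about 6.45 kOhm
```

That a macroscopic-feeling resistance of kilohms counts as "no resistance" reflects the contact-dominated physics: no matter how long the defect-free tube, its resistance does not grow with length.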

    Although these demonstrations have come just in the past year, several groups have already begun to move on to making ultrasmall electronic devices. In the 7 May issue of Nature, Dekker's team reported the creation of the first room-temperature, nanotube-based transistor. To construct the device, Dekker and his colleagues simply laid a semiconducting nanotube across a pair of tiny gold pads grown atop an insulating layer of silicon dioxide. The silicon dioxide itself sits on a layer of silicon that serves as the “gate” to switch conduction in the nanotube on and off. When the researchers applied no voltage to the gate, current was unable to pass through the nanotube from one gold pad to the other. But when they applied a voltage to the silicon layer, this created charge carriers in the nanotube, allowing current to flow between the two gold pads.

    The nanotube device didn't make a particularly great transistor, for it made poor electrical contact with the gold pads. But de Heer calls it a landmark achievement nonetheless, because “it shows that these are leading to real applications.” Smalley echoes the sentiment, again underscoring the point that these nanodevices work at room temperature. “It means the old dreams of molecular electronics can be redreamed,” he says. And none too soon. Smalley points out that the semiconductor industry's own timeline shows that by 2006 it will hit the physical limits of current techniques for shrinking silicon-based transistors. Carbon nanotubes, say Smalley and others, may be a strong contender for silicon's replacement.

    Working stiff

    Even without those dazzling electronic possibilities, nanotubes' unmatched mechanical properties would still make them candidates for stardom. Nanometer for nanometer, nanotubes are up to 100 times stronger than steel. When a group led by Harvard's Lieber tested the bending strength of tubes on the end of an atomic force microscope (AFM), they found that they easily beat the stiffest silicon-carbide nanorods that are used to make high-strength composites. “This is the stiffest stuff you can make out of anything,” says Smalley. Try to have a nanosized tug-of-war by pulling on opposite ends of a nanotube, and you won't find much give. “It doesn't pull apart,” says Zettl.

    Down the road, the stiffness of nanotubes could make them great for building strong, lightweight composites. However, they're already coming in handy for improving the capabilities of the sharp, needlelike tips of atomic imaging machines. Such imagers scan their pointed silicon tips over a surface and map its contours by either letting the tip touch the surface—in an AFM—or holding it close and passing a current from tip to surface—in an STM. When it comes to seeing fine contours, however, the normally squat tips have trouble reaching down inside atomic-scale crevices.

    But in 1996 Smalley and his group showed that they could image the bottom of these trenches by attaching a carbon nanotube to the end of a silicon STM tip. Lieber and his colleagues quickly picked up on the idea as a way to obtain better images of biological molecules, which, because of their pliability, posed even more of a problem for standard STM tips. Earlier this year, they reported in the Journal of the American Chemical Society that a nanotube-based tip did indeed allow them to create images of biological molecules with better resolution than that of conventional tips. Lieber and his team extended this feat with a report in the 2 July issue of Nature: By attaching different chemical groups to the end of a multiwalled nanotube, they created a nanotube-based atomic imager capable of recognizing specific chemical groups on a surface, one that records not just the surface contours but the identity of the molecules as well.

    “It's great work,” says Chunming Niu, a nanotube researcher at Hyperion Catalysis. An array of nanotube probe tips, each outfitted with different functional groups, could offer researchers an entirely new way to map surfaces. That could be particularly useful to biotech researchers interested in mapping the structure of cell membranes and other cellular structures, says Niu. Indeed, Lieber says that he and his Harvard colleagues are already hard at work on such applications.

    Gears to guns

    Compared to some potential applications of nanotubes, improving the performance of STMs may even seem unimaginative. Take the case of the nanotube gear. Researchers at NASA's Ames Research Center in Mountain View, California, have recently developed computer models of nanotube gears that have benzene groups arrayed around the tube to act as cogs. As one of these nanocylinders rolls, its tiny teeth turn a neighboring nanotube like a microscopic drive shaft.

    Nanogears are likely to remain simulations for some time, however, as there's no obvious way to build them. But other seemingly exotic applications are nearing reality. In 1995, de Heer—then at Ecole Polytechnique Fédérale de Lausanne in Switzerland—and colleagues wired up an array of nanotubes to act as tiny electron guns, similar to the large-scale devices that scan the back of TV screens and generate color images. At the time, de Heer suggested that the tiny electron guns could someday be used to create flat-screen televisions. That day is fast approaching. At the Second International Vacuum Electron Sources Conference last month in Tsukuba, Japan, Yahachi Saito and his colleagues from Mie University and Ise Electronics Corp. unveiled the first nanotube-based display lighting elements (Science, 31 July, p. 632). Saito says he hopes to commercialize flat-panel displays using the new nanotube electron guns by the year 2000.

    Other researchers are also looking into using nanotubes as a storage medium for both hydrogen gas to power fuel cells and liquid electrolytes for batteries. Hydrogen-burning fuel cells, for example, are widely considered to offer a promising clean technology for powering cars. The technology's Achilles' heel, however, is the gas tank, because high pressures are needed to store enough hydrogen to drive very far. But open-ended nanotubes can, at just below room temperature, suck up and hold onto large amounts of hydrogen with little added pressure. Raising the temperature slightly shakes the adsorbed hydrogen loose and releases the gas, making nanotubes a near-ideal hydrogen storage medium, easily beating out the current competition. “The attractive interaction between hydrogen and carbon allows you to reduce the amount of pressure you need” to store hydrogen, says Donald Bethune, a physicist at IBM's Almaden Research Center in San Jose, California, who has pioneered research in the area.

    Mass production

    Before these potential applications can move out of the lab, however, researchers must overcome a very large hurdle: producing enough of the stuff. Hyperion uses a catalytic process to produce hundreds of kilograms of MWNTs every day. But any industrial process is going to require tons. The situation is even worse for the more highly sought-after SWNTs, which today can be made only by the handful. A start-up company in Lexington, Kentucky, called CarboLex has recently begun selling SWNTs for about $200 a gram. At those prices, nanotubes won't be replacing graphite in tennis rackets anytime soon.

    “We're still sort of stuck” in efforts to produce the tubes in bulk, says Smalley. But he is optimistic that new synthetic schemes will solve this problem. “I believe that within 10 years, some smart aleck will find a way to grow single-wall nanotubes from the gas phase off of little catalyst particles in ton amounts, just like polypropylene,” says Smalley. If so, the tiny tubes face a big future.


    Nanotubes: The Next Asbestos?

    1. Robert F. Service

    The unrivaled mechanical properties of nanotubes, and their potential for molecular-scale electronic devices, are firing the imaginations of chemists across the world. But there is potentially one very big fly in the ointment: toxicity. Nanotubes are rigid graphite cylinders, each just a nanometer or more wide and up to 100 micrometers long. It just so happens that this resembles the shape of the asbestos fibers that have been linked to cancer. Could nanotubes be toxic? “Certainly it's a concern,” says Chunming Niu, a chemist with Hyperion Catalysis International, a company based in Cambridge, Massachusetts, that produces carbon nanotubes.

    The dangers of asbestos first came to light in the early 1960s, when studies linked exposure to these silicate fibers with mesothelioma—a rare cancer of the lining of the chest or abdomen that's commonly fatal. Asbestos fibers were found to be so small that they could be inhaled into the deep lung, where they could stick around for decades. Once there, metals in the silicate fibers could act as catalysts to create reactive oxygen compounds that go on to damage DNA and other vital cellular components.

    Whether nanotubes could reproduce this behavior is unknown: Their toxicity has yet to be tested. But already views on their safety differ sharply. “[Nanotubes] may be wonderful materials,” says Art Langer, an asbestos expert at the City University of New York's Brooklyn College. “But they reproduce properties [in asbestos] that we consider to be biologically relevant. There is a caution light that goes on.” Most notably, says Langer, nanotubes are the right size to be inhaled, their chemical stability means that they are unlikely to be broken down quickly by cells and so could persist in the body, and their needlelike shape could damage tissue.

    For those reasons, Niu says that Hyperion is careful about how it handles the material: “We treat our nanotubes as highly toxic material.” The company produces about 300 kilograms of multiwalled nanotubes every day and ships them to clients for use in electrically conducting plastics. But rather than shipping the nanotubes as a powder, Niu says Hyperion first incorporates the tubes into a plastic composite so that they cannot be inhaled.

    Researchers such as Brooke Mossman, however, doubt that nanotubes will turn out to be dangerous even if they find their way into the body. Mossman, a pathologist and asbestos expert at the University of Vermont College of Medicine, notes that it is the ability of asbestos to generate reactive oxygen compounds that makes it carcinogenic. But the graphitic carbon structure of nanotubes is not likely to react with cellular components to produce damaging byproducts. “We've worked with a lot of carbon-based fibers and powders and not seen any problems,” says Mossman.

    Richard Smalley, a nanotube chemist at Rice University in Houston, agrees. In addition to being unreactive, most nanotubes when synthesized come out as a tangled mass of fibers rather than individual spears, he says. Ironically, his team recently developed the first technique to cut individual nanotubes into short, spearlike segments (Science, 22 May, p. 1253). But for now only a few grams of those tubes exist. Until more research determines whether nanotubes are dangerous, researchers are treating them with caution.


    Cracks: More Than Just a Clean Break

    1. Alexander Hellemans
    1. Alexander Hellemans is a science writer in Naples, Italy.


    Fracture is ubiquitous in our daily lives: Cups break, handles snap off, windows shatter. It is also one of the most important causes of economic loss in industrialized society, as bridges fail, ships break up, and buildings collapse—all because of cracks. According to a 1983 study by the U.S. Department of Commerce, materials failure costs Americans roughly 4% of the gross national product. But despite the scale of the problem, what makes some cracks remain small and benign while others rip through a material at high speed is still a puzzle. James Langer of the University of California, Santa Barbara, who has studied cracks for the last 10 years, says that he once thought he understood how they work. “I am now pretty sure I don't,” he notes.

    Crack research is finally on the move, however. The catalyst for this change is a more realistic view of the materials that cracks propagate through. Far from being the fault-free, regular crystals beloved by computer modelers, they are full of defects—microscopic impurities and dislocations in their crystal structure—that play a key role in the evolution of cracks. Scientists have found that defects are not only pivotal to the path and speed of the propagating crack, but that the crack tip itself transforms the material by creating a cloud of microcracks and dislocations around itself.

    Computer modelers have been having trouble keeping up with this new trend. Simulating a crack that spawns a complex, three-dimensional network of defects around itself requires them to model hundreds of millions of interacting atoms, many more than are necessary for a crack that cleaves a material cleanly, and so they are limited by available computer power. Experimentalists, on the other hand, are racing ahead and probing the role of defects with advanced microscopy and other new experimental techniques. Although many researchers who spoke with Science believe it will take years to understand cracks well enough to predict where and when they will occur, they hope that the two branches of the field will converge and produce a better understanding of the basics of crack behavior. “We have learned a lot about the interaction of the defects,” says Peter Gumbsch of the Max Planck Institute for Metals Research in Stuttgart, Germany.

    Wolfgang Knauss of the California Institute of Technology (Caltech) in Pasadena says that scientists began studying cracks in earnest during the 1940s and '50s. They were prompted by a number of perplexing catastrophes, such as the failures of the wartime Liberty freighters, the first all-welded ships—of the 4700 ships built, more than 200 developed catastrophic fractures, some simply splitting in two while in port—and the loss of two Comet airliners, the pioneering, British-designed commercial jets. Early studies focused on why some materials behave in a ductile fashion, blunting cracks and stopping them easily, while others are brittle, allowing cracks to propagate in a flash over great distances. Researchers also hoped to understand why materials that are ductile in some conditions are brittle in others, like the steel hull of the Titanic. Ductile at room temperature, in the freezing waters of the North Atlantic and under heavy impact, it responded as a perfectly brittle material, instantly splitting open after the ship struck the iceberg.

    Models lag. The experimental and modeling branches of crack research have always worked side by side. Following the disasters 4 and 5 decades ago, engineers started testing materials by stretching and bending them to breaking point under controlled conditions. Theorists fed these data into macroscopic mathematical models that generally simulated materials as perfectly homogeneous substances characterized by general properties, such as elastic constants. Such models are still far from producing reliable predictions of the failure of materials, says Ladislas Kubin of ONERA, France's national aerospace research center in Châtillon, near Paris: “If the model works for one material, you have no certainty it will work for another one.”

    Basic researchers similarly began to investigate how materials fail. Although they could not at that time examine samples at the atomic level, they knew that their models would have to describe how individual atoms or molecules interact with each other around the propagating crack tip. But the models had a similar lack of success in predicting how real materials behave. These microscopic models generated “purely mathematically sharp cracks that existed in ideally brittle materials,” says Knauss, adding: “The answers don't help you really understand what goes on in the laboratory.” For example, such models invariably yielded a maximum propagation speed for the crack equal to the speed of sound over the surface of the material—known as the Rayleigh speed. But in experiments, such high speeds have never been encountered. “It is typically one-third,” says Knauss. “If you get half of it, you are doing very well.”
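    The Rayleigh ceiling those models predicted can be estimated from bulk material properties. A common closed-form approximation (Viktorov's formula, not given in the article) expresses it in terms of the shear-wave speed and Poisson's ratio; the material numbers below are illustrative placeholders, not values from the article:

```python
# Approximate Rayleigh (surface-wave) speed, the theoretical ceiling for
# crack propagation, via Viktorov's approximation.
import math

def rayleigh_speed(shear_modulus: float, density: float, poisson: float) -> float:
    """Rayleigh wave speed in m/s for an isotropic solid (approximate)."""
    c_shear = math.sqrt(shear_modulus / density)  # shear-wave speed, m/s
    return c_shear * (0.862 + 1.14 * poisson) / (1.0 + poisson)

# Hypothetical glassy polymer: G = 1.1 GPa, rho = 1200 kg/m^3, nu = 0.35
c_R = rayleigh_speed(1.1e9, 1200.0, 0.35)
print(f"Rayleigh speed ~ {c_R:.0f} m/s")  # roughly 894 m/s for these inputs
```

Real cracks in such a material would typically run at only a fraction of that figure, which is the discrepancy the microcrack picture described later in the article is meant to explain.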

    In many cases, a simple lack of number-crunching power has hampered simulation efforts. Many rely on a technique called molecular modeling, in which an artificial array of atoms is created and the forces between them specified. Because of the complexity of accounting for all the forces between atoms, current simulations run on supercomputers can handle a maximum of about 100 million atoms or molecules. Although this sounds like a lot, it still represents a very tiny bit of material. “Even if they can deal with 1 billion atoms, you still only have a micron cube. This is not enough; a real plastic zone [formed at the tip of a crack] extends over millimeters,” says Gumbsch.
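    The ingredients of molecular modeling (an array of atoms, specified interatomic forces, and a time-stepping loop) can be shown in miniature. The one-dimensional chain below, with harmonic bonds that snap past a critical stretch, is a toy illustration of the method, not any group's actual simulation; every parameter value is invented for the example:

```python
# Toy molecular-dynamics fracture model: a 1-D chain of atoms joined by
# springs that break when overstretched. Real simulations apply the same
# ideas in 3-D to tens of millions of atoms.

def simulate_chain(n=20, k=1.0, mass=1.0, dt=0.01, steps=10000,
                   pull=0.002, breaking_stretch=0.3):
    """Pull the last atom outward at constant speed; return the index of
    the first bond to break, or None if the chain survives."""
    x = [float(i) for i in range(n)]   # rest spacing = 1.0
    v = [0.0] * n
    intact = [True] * (n - 1)
    for _ in range(steps):
        x[-1] += pull                  # drive the end atom outward
        f = [0.0] * n
        for b in range(n - 1):
            if not intact[b]:
                continue
            stretch = (x[b + 1] - x[b]) - 1.0
            if stretch > breaking_stretch:
                intact[b] = False      # bond snaps
                continue
            f[b] += k * stretch        # Hooke's law acts on both atoms
            f[b + 1] -= k * stretch
        for i in range(1, n - 1):      # first atom fixed, last atom driven
            v[i] += f[i] / mass * dt
            x[i] += v[i] * dt
        if not all(intact):
            return intact.index(False)
    return None

print("first broken bond:", simulate_chain())
```

Even this toy shows why the problem is expensive: the force loop visits every bond at every time step, so cost grows with both system size and simulated time.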

    Over the past few years, however, several groups have reported computer simulations that come closer to mimicking physical reality. Brad Lee Holian at the Los Alamos National Laboratory in New Mexico says that his group and others have done molecular dynamics simulations that show a “range of behaviors, ranging from brittle to ductile behavior.” Holian thinks a good choice of parameters describing the forces between atoms and defects, and a simulation large enough to encompass several tens of millions of atoms, are the key to success. Farid Abraham of the IBM Almaden Research Center in San Jose, California, says his team has improved on its simulation results by combining macroscopic continuum models with atomistic ones. “We now see a general convergence of the numerical methods, where all of the computational disciplines for the various size scales, the macroscopic to the microscopic, are coupled,” he says.

    Nevertheless, modelers are still struggling to keep up with the phenomenon that experimental studies suggest may be the key to crack behavior: the microcracks and dislocations that are formed around the crack tip. “That is something the modelers, including us, are still working on,” says Gumbsch. There have been a few attempts at simulating dislocations near the crack tip in three dimensions, Gumbsch says, but modelers usually try to simulate just one dislocation at a time, looking at how it nucleates from a step in the crack front. “We still don't have reliable simulations,” says Gumbsch.

    Get to the point. With computer models still falling well short of a realistic depiction of cracking, researchers have been devising new kinds of experiments to study fracture in real materials. The key numbers describing crack formation are the speed of the crack tip and its path. Although cracks travel at less than the speed of surface waves on a material, their pace is still daunting. Typically, cracks in silicon crystals can travel at speeds of 4 to 5 kilometers per second.

    To measure such speeds, scientists first resorted to high-speed photography, applying stress to materials and snapping up to 2 million frames per second to follow the evolution of cracks. But a method developed in 1992 by a team led by Michael Marder of the University of Texas, Austin, called the “potential drop” experiment, now gives the most accurate results. Researchers first coat a Plexiglas or glass sample with a thin layer of aluminum, then put the sample under stress and pass an electric current through the aluminum. As the crack forms, the aluminum layer is ruptured and the researchers measure the current drop to determine the speed of the crack with an accuracy of 20 meters per second.
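    The arithmetic behind the potential-drop method is straightforward: if the crack cuts across a uniform conducting strip, the current falls in proportion to the remaining uncracked width, so a current-vs-time trace can be converted into crack length and differenced to get speed. The strip geometry assumed below is a simplification for illustration, not the exact experimental setup used by Marder's team:

```python
# Recover crack-tip speed from a current-vs-time trace, assuming the
# current is proportional to the uncracked width of a uniform strip:
# crack length a(t) = W * (1 - I(t) / I0).

def crack_speed_from_current(times, currents, strip_width, i0):
    """Return crack-tip speeds (m/s) between successive samples."""
    lengths = [strip_width * (1 - i / i0) for i in currents]
    return [
        (lengths[k + 1] - lengths[k]) / (times[k + 1] - times[k])
        for k in range(len(times) - 1)
    ]

# Synthetic trace: 0.1 m wide strip, crack running at a constant 800 m/s
# (roughly the limiting speed quoted for Plexiglas), sampled every 10 ns.
W, I0, v = 0.1, 1.0, 800.0
ts = [k * 1e-8 for k in range(6)]
Is = [I0 * (1 - v * t / W) for t in ts]
speeds = crack_speed_from_current(ts, Is, W, I0)
print(speeds)  # each entry is ~800 m/s
```

The quoted accuracy of about 20 m/s then comes down to how finely the current and time can be resolved between samples.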

    These experiments raised the puzzle that researchers are now starting to solve: the lower-than-expected speed of cracks. Although speed does increase with increasing load on the material, it levels off far short of the Rayleigh speed. No matter how hard you pull apart a sample of Plexiglas, a crack will not travel faster than about 800 meters per second. Researchers are now starting to understand this discrepancy, as they realize that cracks are far more complex phenomena than previously thought.

    The source of the speed limit seems to be the “cloud” of tiny cracks that forms around the crack tip. These microcracks, ignored in early models, are caused by dislocations spreading out from the crack tip, says Holian. At each point the crack tip has to “choose” which particular microcrack to follow. This explains why the direction of a crack can change suddenly. “It is possible that the crack will follow the dislocation along a [plane of atoms] that is at an angle to the crack tip,” Holian says. The surface left by a fracture records what happened during crack propagation, including the dislocations and microcracks generated by the crack tip, and researchers have tried to characterize fracture surfaces mathematically. The surface exhibits a roughness that can be described as a fractal surface—similar shapes can be found over a wide range of different size scales, says Kubin of ONERA, from nanometers to hundreds of micrometers: “There is something universal in the geometry of the crack surface.”

    Researchers now believe that the processes responsible for this fractal geometry—the constant creation of microcracks and the shifts of direction—are also what slow the crack down. “This process takes a substantial amount of time away from the idealized crack propagation process,” says Caltech's Knauss. He and his colleagues tested this hypothesis by artificially creating very weak joints in materials, which prevent microcracks from forming. In such materials, they observed crack speeds of up to 95% of the Rayleigh wave speed. “If you get rid of this multiple flaw generation, you deal with idealized materials, and the idealized high-speed crack model is applicable,” he says.

    As well as simply slowing down the crack, the creation of defects ahead of the tip actually changes the mechanical properties of some materials locally, in a process called the brittle-to-ductile transition. “At a certain temperature and certain strain rate, the crack starts emitting dislocations” that make the material ductile, says Kubin. Dislocations are shifts of layers of atoms in the crystal lattice, and because they can move through the material, they render it ductile. This transition can bring the crack to a halt in some materials.

    Studying the brittle-to-ductile transition is very difficult in anything but the simplest of materials, says Steve Roberts of Oxford University. Recent, as-yet-unpublished research with tungsten samples has convinced Gumbsch and his colleagues that what determines whether the transition takes place is the ability of the dislocations to spread out from the crack tip rapidly, making the area around it ductile. “If dislocations can be generated from the crack, that is, injected into the matrix [of the material], and can move away, the material responds in a ductile way. If the dislocations cannot move away from the crack fast enough, then it responds in a brittle way,” says Gumbsch.

    Gumbsch and his colleagues have recently performed experiments in which they investigated tungsten “post-mortem” with scanning electron microscopes to see the details of how a ductile response can slow down or stop a crack. The microscopic observation of dislocations created at different temperatures in the samples shows that a dislocation moving away from a crack “intercepts” part of the external force field. “It prevents the crack from seeing the outside load,” says Gumbsch. If the crack tip creates enough dislocations, they will shield the crack entirely and it will slow down or even stop. “If you can create sufficiently many dislocations, and they move away fast enough, then the crack tip will deform and the material response will be ductile,” he adds.

    These brittle-to-ductile transitions occur in a range of materials, such as silicon and metals with a crystal structure known as body-centered cubic, but the temperatures at which the transition takes place are sometimes difficult to pin down because there is no precise way of identifying ductile behavior. It is a “gray area,” says Holian, but he adds that the transition is more likely to occur at higher temperatures.

    Although researchers are learning a huge amount through experimental studies of cracks, Kubin is convinced that computer modelers will have to incorporate this new knowledge if crack researchers are going to achieve a true understanding of material failure. “In this domain, you can make experiments and experiments and experiments, but it will not advance your understanding. We should not look at complicated situations and materials; we should try to understand basic things.”
