News this Week

Science  15 Jun 2007:
Vol. 316, Issue 5831, pp. 1550
  1. DRUG SAFETY

    Heart Attack Risk Overshadows a Popular Diabetes Therapy

    1. Jennifer Couzin
    Metadata moment.

    The Cleveland Clinic's Steven Nissen testified before Congress about potential links between heart attacks and use of the diabetes drug Avandia.

    CREDITS: JOE MARQUETTE/BLOOMBERG NEWS/LANDOV; (INSET) JB REED/BLOOMBERG NEWS/LANDOV

    Regulators and physicians are once again on the defensive, nervously struggling to interpret revelations about a popular drug. This time the worry focuses on a treatment for diabetes that two separate analyses have linked to heart attacks. The drug, Avandia, has been on the market for 8 years and has been taken by millions of diabetes patients worldwide. In the 3 weeks since a physician at the Cleveland Clinic in Ohio warned about a heart attack hazard, the furor has prompted congressional hearings, patient anxiety, and demands that the U.S. Food and Drug Administration (FDA) explain why potentially severe problems with an approved drug have gone undetected until now.

    The Avandia case shares eerie parallels with that of the anti-inflammatory drug Vioxx—except that Vioxx was clearly linked to heart attacks in a single, massive clinical trial and was quickly pulled off the market in 2004 by its maker, Merck. The Avandia case is murkier. Steven Nissen, chair of cardiovascular medicine at the Cleveland Clinic, found an alarming signal in a meta-analysis of 42 Avandia trials, and experts are still debating its implications. Nissen began investigating after noting an increase in heart attacks in two trials published last year. There, the differences were not statistically significant, but “I sat up and took notice,” says Nissen, who also spoke out against Vioxx.

    To bring together a much broader swath of Avandia studies, Nissen used data released under a legal settlement by Avandia's maker, GlaxoSmithKline. Sued in 2004 by New York Attorney General Eliot Spitzer on charges of concealing data on the antidepressant Paxil, the company had agreed to post online trial results. During a frenetic week in late April, Nissen and Cleveland Clinic statistician Kathy Wolski melded data from dozens of Avandia trials, including results of 27 still unpublished. They found that patients on Avandia were 43% more likely to have heart attacks than those in a comparison group.
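    For readers curious about the mechanics of melding many trials, the sketch below is a minimal, illustrative Python example of combining per-trial results with a fixed-effects Mantel-Haenszel pooled odds ratio. The per-trial counts are entirely hypothetical, and Nissen and Wolski actually used a Peto fixed-effects method; the point is only that each comparison is made within a trial and then weighted, rather than by lumping all patients into one table.

    ```python
    # Illustrative sketch only: hypothetical per-trial 2x2 counts, not the actual
    # Avandia trial data. A meta-analysis combines within-trial comparisons; the
    # Mantel-Haenszel pooled odds ratio below is one standard fixed-effects way to
    # do that (the published analysis used a Peto fixed-effects method instead).

    # Each tuple: (events_drug, n_drug, events_control, n_control) for one trial.
    trials = [
        (5, 800, 3, 790),    # hypothetical trial 1
        (2, 350, 1, 360),    # hypothetical trial 2
        (9, 1500, 6, 1480),  # hypothetical trial 3
    ]

    numerator = 0.0    # running numerator of the Mantel-Haenszel pooled odds ratio
    denominator = 0.0  # running denominator
    for events_drug, n_drug, events_control, n_control in trials:
        non_events_drug = n_drug - events_drug
        non_events_control = n_control - events_control
        n_total = n_drug + n_control
        numerator += events_drug * non_events_control / n_total
        denominator += non_events_drug * events_control / n_total

    print(f"Mantel-Haenszel pooled odds ratio = {numerator / denominator:.2f}")
    ```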

    After Nissen and Wolski's results appeared online on 21 May in the New England Journal of Medicine (NEJM), Glaxo and FDA revealed that Glaxo had performed a similar meta-analysis last year and found an increased heart attack risk of 31%. Glaxo had given the information to FDA and posted it quietly on its Web site—so quietly that Nissen didn't discover it until 2 days before submitting his own meta-analysis for publication.

    But what, exactly, have the new data revealed? Even Nissen agrees that meta-analyses, including his and Wolski's, have drawbacks. The number of heart attacks identified in more than 27,000 people was small: 86 in the Avandia group and 72 in the comparison group, in trials that lasted at least 24 weeks. And because the individual studies were not looking for heart attacks, they did not use a uniform definition.

    Furthermore, meta-analyses combine trials that are different lengths and based on different types of comparisons. Because of the statistical challenges, a meta-analysis is often “an absolutely imprecise measure of risk,” says Darren McGuire, a cardiologist at the University of Texas Southwestern Medical Center in Dallas. But in the case of Avandia, “it's the best we have,” says McGuire, and “that's the problem. It's uncertain, … but you can't sweep it under the carpet.”

    Researchers note that Nissen's work is strengthened by Glaxo's own meta-analysis. The company, however, is playing down the results. Meta-analyses are “ways of asking questions,” not answering them, says Anne Phillips, Glaxo's clinical vice president for the cardiovascular and metabolic medicines development center. In a report published last week in Pharmacoepidemiology and Drug Safety, the company followed up with an observational study of 33,000 patients in a health insurer's database and found no increase in heart attack risk for those on Avandia.

    Outside Glaxo, McGuire and many others are concerned enough to be asking how Avandia might cause heart attacks. The drug belongs to a class called thiazolidinediones; it reduces blood sugar mainly by making many of the body's tissues more sensitive to insulin. This allows those tissues to better respond to the hormone and keeps glucose levels healthy.

    Thiazolidinediones, however, have broad effects, turning off or on dozens of genes. The class has a troubled history. One member, the diabetes drug Rezulin, was yanked off the market in 2000 after being linked to liver failure. Thiazolidinediones, including the two currently on the market, are known to increase the risk of fluid buildup and heart failure and carry warnings to that effect. Yet the heart attack signal has come as a surprise to many, in part because a study published in 2005 suggested that the other available thiazolidinedione—a drug called Actos from Takeda Pharmaceuticals and Eli Lilly—protects against heart attacks.

    The heart attack risk, if confirmed, may be specific to Avandia, and some observers suspect that the cause may be the drug's effect on lipids. A study published in 2005, led by Ronald Goldberg of the University of Miami, reported that whereas Actos lowers triglycerides, Avandia raises them. Avandia also raises LDL, or “bad” cholesterol, more so than Actos, and it raises HDL, or “good” cholesterol, less. David Nathan, director of the diabetes center at Massachusetts General Hospital in Boston, says this difference has made him reluctant to prescribe Avandia. On the other hand, says Jorge Plutzky, a preventive cardiologist at Harvard Medical School in Boston, the fluid buildup associated with both Actos and Avandia may increase heart attack risk.

    Many researchers were banking on a definitive answer from a large clinical trial called RECORD, run in Europe, Australia, and New Zealand and funded by Glaxo, which is examining the cardiovascular effects of Avandia. In an interim analysis in NEJM last week (the trial is to end in late 2008), the study leaders found no heart attack signal associated with Avandia, but they could not rule out the risk, either. And because the study reported a much lower rate of heart problems than predicted, some are concerned that it may not be powerful enough to answer this question.
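    The power concern can be made concrete with a back-of-the-envelope calculation. The sketch below is a rough, illustrative two-proportion power computation in Python; the background event rates, the assumed 30% excess risk, and the arm sizes are hypothetical placeholders, not RECORD's actual design, and the trial's real analyses are time-to-event rather than simple proportions.

    ```python
    from math import sqrt
    from statistics import NormalDist

    # Illustrative only: hypothetical event rates, excess risk, and arm sizes.
    # Rough normal-approximation power for detecting a difference between two
    # proportions, showing how a lower-than-expected event rate erodes power.
    def power_two_proportions(p_control, relative_risk, n_per_arm, alpha=0.05):
        p_drug = p_control * relative_risk
        p_pooled = (p_control + p_drug) / 2
        se_null = sqrt(2 * p_pooled * (1 - p_pooled) / n_per_arm)  # SE if no true difference
        se_alt = sqrt((p_control * (1 - p_control) + p_drug * (1 - p_drug)) / n_per_arm)
        z_crit = NormalDist().inv_cdf(1 - alpha / 2)
        z = (abs(p_drug - p_control) - z_crit * se_null) / se_alt
        return NormalDist().cdf(z)

    # Same assumed 30% relative excess, two different background event rates.
    for p_control in (0.03, 0.01):  # predicted rate vs. a lower-than-predicted rate
        print(p_control, round(power_two_proportions(p_control, 1.30, 2200), 2))
    ```

    Under these made-up numbers, dropping the background event rate from 3% to 1% cuts the power by more than half, which is the essence of the worry about RECORD's lower-than-expected rate of heart problems.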

    “The sin of it is that we're still not sure” what's going on, says Nathan. Observers hope to learn more from an FDA advisory committee meeting slated for 30 July. For now, doctors and their patients are left to sift through data that are far from complete.

  2. PARTICLE PHYSICS

    Delay in Europe Could Mean Extra Year for U.S. Collider

    1. Adrian Cho

    Physicists were hardly surprised when officials at the European lab CERN announced last week that the world's new highest-energy atom smasher, the Large Hadron Collider (LHC), will not start up in November as planned. Assembly of the $3.8 billion accelerator near Geneva, Switzerland, was running more than a month behind schedule, leaving no time for the planned month-long “engineering run” before power becomes prohibitively expensive in the winter (Science, 6 April, p. 31).

    But with the LHC's start-up delayed until April 2008, physicists in the United States are mulling another possibility: At a meeting last week at Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, they began discussing whether to run the lab's venerable Tevatron collider for another year, through 2010.

    Just a year ago, Fermilab physicists worried that the Tevatron would shut down before 2009 (Science, 2 June 2006, p. 1302). The 2009 run now seems likely, and researchers are speculating about running longer. “I would not want to see anything done that would preclude running in 2010,” says Terry Wyatt of Manchester University in the U.K., who is co-spokesperson for the team working with D0, one of two particle detectors fed by the Tevatron. “Signing contracts to start dismantling the machine in October 2009 would be crazy.”

    The decision will begin with deliberations by the Particle Physics Project Prioritization Panel (P5), which advises the U.S. Department of Energy (DOE) and the National Science Foundation. At the meeting, P5 heard updates from researchers working with D0 and the rival CDF detector. The panel will make its official recommendation for 2009 next month. Because the Tevatron is cranking out copious data while the LHC is delayed, “one can anticipate that we will recommend running in 2009,” says P5 chair Abraham Seiden of the University of California, Santa Cruz.

    Making the call for 2010 will be tougher. “It would take some unusual circumstances to justify running beyond 2009,” Seiden says. The Tevatron might get a further lease on life if experimenters see signs of the long-sought Higgs boson or other particles, or if the LHC comes on slowly.

    The timing will be tricky. To help DOE set its 2010 science budget, P5 must weigh in by the end of next summer. But the LHC won't start smashing particles until next July. “The problem is that by next summer the LHC is barely turning on, so we won't know how things are going to go,” says Pier Oddone, director of Fermilab.

    Overtime?

    Fermilab's Tevatron collider could keep cranking through 2010.

    CREDIT: FERMILAB

    Fermilab officials must also determine whether enough people will stick around to keep the Tevatron going. Researchers are flocking to the LHC, and neither D0 nor CDF has a formal agreement to continue beyond 2009. However, Fermilab's Robert Roser, co-spokesperson for CDF, says that if researchers see solid evidence of the Higgs or something else, many will stay to cinch the discovery: “That would more or less guarantee we'll have enough people.”

    Running the Tevatron another year would cost about $30 million, Oddone says. He hopes DOE could squeeze that amount from its $750 million particle-physics budget without affecting Fermilab's neutrino experiments, work on the proposed International Linear Collider, and other projects.

    No one expects the Tevatron to run beyond 2010. To make progress, experimenters must continually improve the collider to double and redouble the size of their data set. By running through 2010, the Tevatron should double the data it will have produced by the end of next year. But with no upgrades in the works, doubling it again would take 4 more years. By then, the LHC should have long since buried the Tevatron.

  3. GLOBAL HEALTH

    Bush Boosts AIDS Relief: Cause for Applause and Pause

    1. Jon Cohen

    When President George W. Bush announced on 30 May that he wants to double the budget for the AIDS program he initiated to help financially strapped countries battle their epidemics, Democrats and Republicans alike applauded the move. Reflecting the sentiment on both sides of the aisle, Representative Tom Lantos (D-CA), who chairs the House Committee on Foreign Affairs that first authorized the President's Emergency Plan for AIDS Relief (PEPFAR) in 2003, said it was “music to my ears, and I will do all I can to ensure harmonious support for it.” Many AIDS researchers immediately started to think about how PEPFAR II, as some are calling it, could reach more people with its prevention, care, and treatment efforts.

    But HIV/AIDS advocates who closely track PEPFAR's every move had a less charitable reaction to the proposed $30 billion plan, asserting that it's misleading to call it a doubling. Congress, they note, has steadily appropriated more money to PEPFAR, with the fiscal year 2008 payout totaling $5.4 billion—close to the $6 billion a year Bush wants for PEPFAR II. “To call the $30 billion a doubling is really disingenuous,” contends Asia Russell, who heads international advocacy for the Health Global Access Project in New York City. “Bush really announced a maintenance spending level for the next 5 years.” Even Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases in Bethesda, Maryland, and an architect of the original PEPFAR, says $30 billion represents a “slight to modest increase.”

    Making the grade.

    If PEPFAR stays on track, it will support the treatment of 2 million HIV-infected people in poor countries by the end of 2008.

    CREDITS: (PHOTO) BRENT STIRTON/GETTY IMAGES; DATA SOURCE: PEPFAR

    On top of these monetary concerns, advocates and scientists alike want Congress to remove constraints on how PEPFAR money can be spent. When Bush first proposed PEPFAR as a 5-year, $15 billion program, he insisted on specific provisions in the legislation, the most controversial of which requires that one-third of the money spent on prevention go toward promoting abstinence before marriage. A March Institute of Medicine (IOM) report strongly urged Congress to remove these earmarks.

    Several scientists who were involved with the IOM evaluation, although praising Bush's leadership and ongoing commitment, see ample room for improvement. One of those, James Curran, former HIV/AIDS chief at the Centers for Disease Control and Prevention in Atlanta, Georgia, and now dean of the Rollins School of Public Health at Emory University, also in Atlanta, says that earmarking funds for abstinence programs often pits PEPFAR against other donors and country plans. A study done by the Government Accountability Office reported in April 2006 that “meeting the spending requirement can undermine the integration of prevention programs” by, for example, forcing countries to cut funds for thwarting mother-to-child transmission or for education campaigns that target groups at high risk of becoming infected.

    PEPFAR, which is authorized to run through 2008, to date has supported the anti-HIV treatment of more than 1.1 million people in 12 sub-Saharan African countries, Haiti, Guyana, and Vietnam. (Support can include anything from providing drugs to training health-care workers, funding labs, or helping these 15 “focus” countries develop policy.) PEPFAR officials say that by the end of September 2006, the program had supplied anti-HIV drugs to more than 500,000 infected pregnant women to prevent transmission of the virus to their babies and had provided care to 2 million AIDS orphans. Another 18.7 million people received HIV tests and counseling. “Lots of people's lives are being saved,” says Mark Dybul, the Global AIDS Coordinator who heads PEPFAR.

    Bush's latest proposal is “unquestionably a doubling of resources,” says Dybul. He says PEPFAR's critics highlight “relatively minor issues” such as the size of the increase and the earmarks, ignoring the fact that many other developed countries contribute relatively little to the global HIV/AIDS efforts. “If I were an activist, I wouldn't be focusing on the $30 billion but asking where's the rest of the world so that we can expand service?” (The Group of Eight industrialized nations, which includes the United States, last week announced that it would spend $60 billion on HIV/AIDS “over the coming years,” half of which would come from PEPFAR II.)

    In March, Representative Barbara Lee (D-CA) introduced a bill that would strike the abstinence-only earmark from the original PEPFAR authorization; the bill is currently under consideration. Barry Bloom, dean of Harvard School of Public Health in Boston, says he hopes Congress excises all of the earmarks when it writes the legislation reauthorizing PEPFAR, probably in early 2008. “That kind of blanket micromanagement ends up wasting the public's money,” says Bloom.

    Other critical issues are how many people PEPFAR will treat and whether it will expand into more countries. Dybul says PEPFAR will hit its target of having 2 million on treatment next year. PEPFAR II calls for supporting the treatment of another half-million people, bringing the total to 2.5 million. The relatively small increase reflects the fact that PEPFAR has slowly ramped up whereas PEPFAR II will start off with at least 2 million on the treatment rolls and millions more in prevention programs. And rather than adding more focus countries, PEPFAR II calls for establishing “partnership compacts” that would require new countries to increase their own spending on HIV/AIDS and health-care systems in order to receive PEPFAR funds.

    Michael Merson, who heads the Duke Global Health Institute in Durham, North Carolina, and once ran the Global AIDS Program for the World Health Organization, says he'd be “delighted” if Congress supported the president's request: “Whether it's enough or not enough, it's a lot more than we're doing for other global health needs.”

  4. MEDICINE

    Initiative Aims to Merge Animal and Human Health Science to Benefit Both

    1. Martin Enserink

    Medical and veterinary science are like siblings who have grown apart. But now, there's a flurry of efforts to reunite them. Proponents of this idea, called “one medicine” or “one health,” say that breaking down the walls between the two fields will help fight diseases that jump from animals to humans, such as SARS and avian influenza, and advance both human and animal health.

    In April, the American Veterinary Medical Association (AVMA) decided to establish a 12-member task force to recommend ways in which vets can collaborate with colleagues in human medicine. In late June, the house of delegates of the American Medical Association (AMA) will vote on a resolution in support of strengthened ties between schools of medicine and veterinary science, increased collaboration in surveillance and the development of diagnostics, drugs, and vaccines across species barriers, and a “dialogue” with AVMA. The theme is also on the program at infectious-disease meetings in Europe and the United States this year.

    Although still closely connected in the 19th century, human and animal medicine became increasingly disconnected in the 20th. Recent health emergencies have occasionally driven them back into each other's arms. The global outbreak of the H5N1 avian influenza strain, for instance, has led to closer ties between the human and animal health agencies at the global and country levels.

    But such collaborations are the exception when they should be the norm, especially now, with dangerous new zoonoses popping up, says Laura Kahn, an internist at Princeton University. During the 1999 West Nile outbreak in the United States, vets saw birds dying while doctors noticed an uptick in patients with neurological symptoms, but it took a while before someone made the connection. Concrete plans still need to be fleshed out, Kahn says, and obstacles abound: For instance, educational collaborations could be tough in the United States, where there are only 28 vet schools, often in rural areas, versus more than 140, mostly urban-based, schools of medicine.

    The benefits of collaboration could go beyond zoonoses, says Jakob Zinsstag of the Swiss Tropical Institute in Basel. For instance, in Chad, Zinsstag has helped introduce joint vaccination campaigns for livestock and humans, which has helped raise vaccination rates of hard-to-reach nomadic populations. In the United Kingdom, the Comparative Clinical Science Foundation has announced plans to fund cross-species studies in areas as diverse as cancer, aging, and genetic disorders—which will yield different insights than the use of animals as models for human disease, Kahn says.

    It's all connected.

    Human and animal medicine should grow closer together, One Health supporters say.

    CREDIT: TEXAS DSHS

    The term “one medicine” was coined in the 1960s by Calvin Schwabe, a veterinary scientist and epidemiologist at the University of California, Davis, who died last year. The push to put his ideas into practice originates from a fairly small number of people. Kahn, who became a convert from studying emerging diseases and bioterrorism, got the ball rolling with an article in Emerging Infectious Diseases last year. She also wrote a “vision statement”—together with Florida veterinarian Bruce Kaplan and former government virologist and biotech executive Thomas Monath, now at the investment firm Kleiner Perkins Caufield & Byers in Menlo Park, California—supported by dozens of prominent researchers. They found an enthusiastic champion in AVMA President Roger Mahr.

    In a way, the movement is also a struggle by veterinarians for a place at the table in public health, says Joan Hendricks, dean of the University of Pennsylvania's vet school. “We have been knocking politely at the door for a while,” she says, but human medicine has been slow to respond. But if the AMA resolution gets passed next week, she adds, “it would be a phenomenal support.”

  5. HIV/AIDS

    AIDSTruth.org Web Site Takes Aim at 'Denialists'

    1. Jon Cohen
    Truth be told.

    Protesters around the world assailed South Africa's health minister for questioning the worth of anti-HIV drugs.

    CREDIT: ALEX WONG/GETTY IMAGES

    For 20 years, a small but vocal group of AIDS “dissenters” has attracted international attention by questioning whether HIV causes the disease. Many AIDS researchers from the outset thought it best to ignore these challenges. But last year, another small and equally vocal group decided to counter the dissenters—whom they call “denialists”—with a feisty Web site, AIDSTruth.org. It has started to attract international attention itself. “It's great,” says Mark Wainberg, head of the McGill AIDS Centre in Montreal, Canada. “We really need to get more people to understand that HIV denialism does serious harm. And we were in denial about denialism for a long time.”

    Launched by AIDS researchers, clinicians, and activists from several countries, AIDSTruth.org offers more than 100 links to scientific reports to “debunk denialist myths” and “expose the denialist propaganda campaign for what it is … to prevent further harm being done to individual and public health.” The site also has a section that names denialists and unsparingly critiques their writings, variously accusing them of homophobia, “scientific ignorance of truly staggering proportions,” conspiracy theories, “the dogmatic repetition of the misunderstanding, misrepresentation, or mischaracterization of certain scientific studies,” and flat-out lies. “There was a perceived need to take these people on in cyberspace, because that's where they operate mostly, and that's where the most vulnerable people go for their information,” says immunologist John Moore, an AIDS researcher at the Weill Medical College of Cornell University in New York City.

    Peter Duesberg, a prominent cancer researcher at the University of California, Berkeley, whom colleagues have pilloried ever since he first questioned the link between HIV and AIDS in 1987, remains unswayed by the Web site, which he derides in an e-mail interview as a “scientifically worthless mix of ad hominems, opinions, intolerance, and religious energy—instead of a theory and facts.” Duesberg maintains that “many essential questions” about what he calls the “HIV-AIDS hypothesis” remain unanswered.

    Two factors led Moore and like-minded thinkers (who now number 11) to take off the gloves and hit back with AIDSTruth.org, which went online in March 2006. One was an article in that month's issue of Harper's magazine, “Out of Control: AIDS and the Corruption of Medical Science,” which chronicled Duesberg's travails for challenging dogma and also questioned the safety and effectiveness of an anti-HIV drug that's widely used to prevent transmission from an infected mother to her baby. Moore and other Web site co-founders wrote a 35-page critique of the article. The second trigger was the situation in South Africa. “Many people who had fought denialism in the early 1990s had lost interest in the subject, but in South Africa, it was at its peak,” explains another founder of the Web site, Nathan Geffen of South Africa's Treatment Action Campaign. Geffen and others worried that his government might use the Harper's article to justify further inaction. “South Africa has more people living with HIV than any other country, and it's also been a place where AIDS denialism has had political support with terrible results.”

    The no-frills Web site receives no funding, doesn't pay contributors, and features no ads. It refuses to debate whether HIV causes AIDS, which it says “is as certain as the descent of humans from apes and the falling of dropped objects to the ground.” It has also posted articles by authors of peer-reviewed publications who believe their findings have been distorted by people trying to prove that HIV/AIDS is a ruse. “The denialists tend to be grotesquely inaccurate,” says Richard Jefferys, an activist with the Treatment Action Group in New York City who also helped start the site. “It's almost like the more outrageously inaccurate the claim is, the more they repeat it.”

    To the delight of Jefferys and others, a Supreme Court judge in Australia in April cited a debunking article on AIDSTruth.org in a closely followed case that involved a man convicted of endangering life for not revealing to his sexual partners that he was infected with HIV. The man appealed, claiming that no studies prove HIV causes AIDS. His defense consisted of two “expert” witnesses, one of whom was extensively questioned about allegations that she had misused a researcher's results on sexual transmission of HIV. The questions were inspired by an editorial posted on AIDSTruth.org. The judge concluded that neither defense witness—both of whom are branded as denialists on AIDSTruth.org—was qualified to express opinions on these questions. “There's a constant concern that by rebutting these things, you're giving them more credence—there's a thin line between slaying the monster and feeding it,” says Jefferys. “The judge's decision made the Web site seem really worthwhile.”

    AIDSTruth.org has seen its popularity rise from about 60 unique visits a day to 150. But as Moore notes, “we're certainly not high up in the Google rankings.” Then again, he argues, any effective rebuke to the “anti-scientific” opinions that attract so much attention is worth the effort. “If you ignore the denialists, they're not going to disappear,” says Moore. “And they don't like the fact that we can get in their faces. They're used to being unchallenged.”

  6. HYDROLOGY

    River-Level Forecasting Shows No Detectable Progress in 2 Decades

    1. Richard A. Kerr

    And you thought weather forecasters had it tough. Hydrologists looking to forecast the next flood or dangerously low river flow must start with what weather forecasters give them—predictions of rain and snow, heat and cold—and fold that into myriad predictive models. Then those models must in turn forecast how rain and any melted snow will flow from rivulet to river while liable to loss to evaporation, groundwater, reservoirs, and farmers' fields. During their century in the forecasting business, hydrologists have developed a modicum of skill, but a newly published study fails to find any improvement during the past 20 years in forecasting river levels out to 3 days.

    “It's a pretty shocking result,” says hydrologist Thomas Pagano of the U.S. Department of Agriculture's Natural Resources Conservation Service in Portland, Oregon, who was not involved in the study. If the new results are widely applicable, “we're treading water in terms of skill.” The answer, Pagano and others say, is for hydrologic forecasters to evaluate their past performance much more rigorously.

    Grading past forecasts has long been standard practice in weather forecasting. Such forecast verification has shown that the introduction of Doppler radar in the early to mid-1990s really did lengthen warning times of tornadoes. Weather forecasters also compare proposed improvements in forecasting procedures against past performance before adopting them. Yet “little verification of hydrologic forecasts has been conducted to date,” says hydrologist Edwin Welles of the National Weather Service (NWS) in Silver Spring, Maryland.

    So Welles—who has worked at NWS since 1994—tackled hydrologic verification in his 2005 dissertation for the University of Arizona. He considered NWS forecasts and observations of river levels during 10 years at four locations in Oklahoma and during 20 years at 11 locations along the mainstem of the Missouri River. On the Missouri, a forecast location had 500 to 1000 upstream basins feeding water to it. Each basin required its own set of calibrated predictive models, each predicting a different step in water flow, such as how much water was added by melting snow versus how much soaked into the ground.

    In the April Bulletin of the American Meteorological Society (BAMS), Welles and colleagues report mixed results. Forecasters showed real skill in predicting river levels 1 and 2 days in advance compared with assuming that river levels would not change. But despite new models, more-powerful computers, better ways of displaying data and results, and even improved precipitation forecasts from NWS, the 1- and 2-day predictions didn't become more accurate over the 1 or 2 decades of the verification study, at least in the two areas studied.
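    As a concrete illustration of what such verification involves, here is a minimal Python sketch (with made-up river stages) that scores 1-day forecasts against the persistence baseline of assuming the river level will not change; the actual study applied comparisons of this kind to archived NWS forecasts and observations.

    ```python
    # Illustrative verification sketch with made-up river stages (in feet); the
    # actual study compared archived NWS river forecasts with observations.
    observed      = [10.2, 10.8, 11.5, 12.9, 12.4, 11.7]  # observed stage, day 0..5
    forecast_1day = [10.1, 10.9, 11.2, 12.1, 12.6, 11.9]  # 1-day-ahead forecast for each day

    def mean_abs_error(predictions, observations):
        return sum(abs(p - o) for p, o in zip(predictions, observations)) / len(observations)

    # Persistence baseline: tomorrow's level is predicted to equal today's,
    # so the baseline prediction for day i is the observation from day i - 1.
    persistence = observed[:-1]
    mae_forecast = mean_abs_error(forecast_1day[1:], observed[1:])
    mae_persistence = mean_abs_error(persistence, observed[1:])

    # Positive skill means the forecasts beat the "no change" assumption; the
    # BAMS finding was that this skill exists but has not improved over time.
    skill = 1 - mae_forecast / mae_persistence
    print(f"forecast MAE = {mae_forecast:.2f} ft, "
          f"persistence MAE = {mae_persistence:.2f} ft, skill = {skill:.2f}")
    ```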

    Flat-lining.

    Although errors in river-level forecasts (solid lines) can be smaller than a simple assumption of no change (dotted lines), errors have not declined with changes in forecasting procedures.

    CREDIT: ADAPTED FROM E. WELLES ET AL., BULLETIN OF THE AMERICAN METEOROLOGICAL SOCIETY (APRIL 2007)

    Troubleshooting hydrologic forecasting to understand why it's been resisting improvement will take “objective study and well-structured verification,” says Welles, “not expert opinion or ad hoc experience.” BAMS Editor-in-Chief Jeff Rosenfeld agrees. Writing in an accompanying editorial, he finds that the Welles paper makes the point that “forecasting must include verification if it is to be scientific. Every forecast is like a hypothesis, and in science every hypothesis must ultimately be tested.”

    NWS is taking Welles's research seriously. It began verifying river forecasts at all 4000 of its locations last year. And last fall, an NWS team produced a plan based on Welles's research that should lead to a single hydrologic verification system by 2011. By then, forecasters should be stroking against the current toward better forecasts.

  7. GENOMICS

    DNA Study Forces Rethink of What It Means to Be a Gene

    1. Elizabeth Pennisi

    Genes, move over. Ever since the early 1900s, biologists have thought about heredity primarily in terms of genes. Today, they often view genes as compact, information-laden gems hidden among billions of bases of junk DNA. But genes, it turns out, are neither compact nor uniquely important. According to a painstaking new analysis of 1% of the human genome, genes can be sprawling, with far-flung protein-coding and regulatory regions that overlap with other genes.

    DNA work.

    For Ewan Birney, coordinating 300 authors to analyze 1% of the human genome (graphic, blue bars) was a rewarding challenge.

    CREDITS (LEFT TO RIGHT): PHOTOLAB AT EMBL; GALT BARBER/UCSC

    As part of the Encyclopedia of DNA Elements (ENCODE) project, 35 research teams have analyzed 44 regions of the human genome covering 30 million bases and figured out how each base contributes to overall genome function. The results, compiled in a paper in the 14 June issue of Nature and 28 papers in the June issue of Genome Research, provide a litany of new insights and drive home how complex our genetic code really is. For example, protein-coding DNA makes up barely 2% of the overall genome, yet 80% of the bases studied showed signs of being expressed, says Ewan Birney of the European Molecular Biology Laboratory's European Bioinformatics Institute in Hinxton, U.K., who led the ENCODE analysis.

    Given the traditional gene-centric perspective, that finding “is going to be very disturbing to some people,” says John Greally, a molecular biologist at Albert Einstein College of Medicine in New York City. On the other hand, says Francis Collins, director of the National Human Genome Research Institute (NHGRI) in Bethesda, Maryland, “we're beginning to understand the ground rules by which the genome functions.”

    With the human genome sequence in hand by 2003, NHGRI set up ENCODE to learn what those 3 billion or so bases were all about. The initial 4-year, $42 million effort, which tackled 1% of the human genome, brought new and existing experimental and computational approaches to bear, mapping not just genes but also regulatory DNA and other important features such as gene start sites. NHGRI now plans to spend about $23 million annually over the next 4 years to perform a similar analysis of the whole genome, expecting that the lessons learned from the pilot ENCODE and new sequencing technologies will greatly reduce the costs of this extended project (Science, 25 May, p. 1120). “The goal is to measure all the different kinds of features across the human genome and ask which features go together to understand the whole package,” says George Weinstock, a geneticist at Baylor College of Medicine in Houston, Texas.

    A key component of the pilot ENCODE is an analysis of the “transcriptome,” the repertoire of RNA molecules that cells create by transcribing DNA. For protein-coding genes, most of their RNA transcripts—the messenger RNA (mRNA)—get translated into chains of amino acids by ribosomes. For other types of “genes,” RNA is the end product: Ribosomal RNAs become the backbone of the ribosome, for example.

    Researchers used to think very little RNA was produced beyond mRNA and a smattering of RNA end products. But about half the transcripts that molecular biologist Thomas Gingeras of Affymetrix Inc. in Santa Clara, California, discovered in his RNA survey 2 years ago didn't fit into these categories (Science, 20 May 2005, p. 1149), a finding ENCODE has now substantiated. The ENCODE researchers knew going in that the DNA they were studying produced about eight non-protein-coding RNAs, and they have now discovered thousands more. “A lot more of the DNA [is] turning up in RNA than most people would have predicted,” says Collins.

    ENCODE has produced few clues as to what these RNAs do—leaving some to wonder whether experimental artifacts inflated the percentage of DNA transcribed. Greally is satisfied that ENCODE used enough different techniques to show that the RNA transcripts are real, but he's not sure they're biologically important. “It's possible some of these transcripts are just the polymerase [enzyme] chugging along like an Energizer bunny” and transcribing extra DNA, he suggests.

    But in the 8 June issue of Science (p. 1484), Gingeras and his colleagues reported that many of the mysterious RNA transcripts found as part of ENCODE harbor short sequences, conserved across mice and humans, that are likely important in gene regulation. That these transcripts are “so diverse and prevalent across the genome just opens up the complexity of this whole system,” says Gingeras.

    The mRNA produced from protein-coding genes also held surprises. When Alexandre Reymond, a medical geneticist at the University of Lausanne, Switzerland, and his colleagues took a close look at the 400 protein-coding genes contained in ENCODE's target DNA, they found additional exons—the regions that code for amino acids—for more than 80% of them. Many of these newfound exons were located thousands of bases away from the gene's previously known exons, sometimes hidden in another gene. Moreover, some mRNAs were derived from exons belonging to two genes, a finding, says Reymond, that “underscores that we have still not truly answered the question, 'What is a gene?'” In addition, further extending and blurring gene boundaries, ENCODE uncovered a slew of novel “start sites” for genes—the DNA sequences where transcription begins—many located hundreds of thousands of bases away from the known start sites.

    Before ENCODE started, researchers knew of about 532 promoters, regulatory DNA that helps jump-start gene activity, in the human DNA chosen for analysis. Now they have 775 in hand, with more awaiting verification. Unexpectedly, about one-quarter of the promoters discovered were at the ends of the genes instead of at the beginning.

    The distributions of exons, promoters, gene start sites, and other DNA features and the existence of widespread transcription suggest that a multidimensional network regulates gene expression. Gingeras contends that because of this complexity, researchers should look at RNA transcripts and not genes as the fundamental functional units of genomes. But Collins is more circumspect. The gene “is a concept that's not going out of fashion,” he predicts. “It's just that we have to be more thoughtful about it.”

  8. SYNTHETIC BIOLOGY

    Attempt to Patent Artificial Organism Draws a Protest

    1. Jocelyn Kaiser

    An activist group's concern about maverick genome sequencer J. Craig Venter's intention to patent an entirely synthetic free-living organism has thrown a spotlight on the emerging intellectual-property landscape in this hot new field. The protesters claim that Venter wants his company to become the Microsoft of synthetic biology, dominating the industry.

    Venter hopes to use the artificial life form, which he says does not yet exist, as a carrier for genes that would enable the bug to crank out hydrogen or ethanol to produce cheap energy. Duke University law professor Arti Rai says the patent, if awarded, "could be problematic" only if Venter's product became the standard in the field. But Venter says this application is just the start: He plans to patent methods that would cover more than the single microbe described in the application. "We'd certainly like the freedom to operate on all synthetic organisms" that could serve as a chassis for swapping out genes, says Venter, whose research team is at the nonprofit J. Craig Venter Institute in Rockville, Maryland, but who recently started a company to commercialize the work.

    Filed last October and published by the U.S. Patent and Trademark Office on 31 May, the application describes "a minimal set of protein-coding genes which provides the information required for replication of a free-living organism in a rich bacterial culture medium." The application cites work by Hamilton Smith and others on Venter's team on a simple bacterium called Mycoplasma genitalium that they are using to determine the minimum number of genes for life. They want to synthesize this "minimal genome" from scratch, get it working inside a cell, then add genes to produce cheap fuels (Science, 14 February 2003, p. 1006).

    In a press release, the ETC Group, a technology watchdog in Ottawa, Canada, called Venter's "monopoly claims … the start of a high-stakes commercial race to synthesize and privatize synthetic life forms." ETC calls for the U.S. and international patent offices to reject the patent so that societal implications can be considered. ETC also cited a recent Newsweek interview in which the scientist says he wants to create "the first billion- or trillion-dollar organism."

    Venter says this is just one of several patent applications that would give his company, Synthetic Genomics Inc., exclusive rights to methods for making synthetic organisms. The artificial Mycoplasma "may or may not be" the one used to generate hydrogen or ethanol, he says; his team is working on several species. "We haven't given any thought to" the licensing conditions, but in any case, they would not impede work in academic labs, says Venter, adding, "This is a problem that we hope will have hundreds of solutions."

    Future monopoly?

    Craig Venter wants to patent methods for making synthetic organisms.

    CREDIT: MICHAEL NAGLE/GETTY IMAGES

    Rai says the notion that Venter's Mycoplasma strain will dominate the way Microsoft's Windows did is tenuous because "about 10 things would have to happen," among them that Venter would create the organism, get the patent, and others would adopt his technology as the standard. Even if that happened, Venter "could do well [financially] and do good," she says, by licensing the technology at low cost as a research tool, as happened with the original patents on recombinant DNA technology.

    Other synthetic biologists don't seem fazed. "He's shooting an arrow in the general direction that things are going," says Frederick Blattner of the University of Wisconsin, Madison, who has patented a stripped-down Escherichia coli and founded a company called Scarab Genomics that is commercializing the technology while disbursing it to academic researchers for a small cost. The more pertinent question, says Harvard's George Church, is whether the inventors' claims to have devised something useful will hold up, as there's no obvious reason why a completely synthetic Mycoplasma is needed rather than, say, modified E. coli to make hydrogen.

    Massachusetts Institute of Technology synthetic biologist Tom Knight, who has pointed out that anyone could get around the patent simply by adding more than the 450 genes stipulated, says his complaint is that the application doesn't explain how to build the artificial cell. "I think it's rather tasteless," Knight says.

  9. PALEOANTHROPOLOGY

    Food for Thought

    1. Ann Gibbons

    Did the first cooked meals help fuel the dramatic evolutionary expansion of the human brain?

    Joy of cooking.

    Cooked food provides easy calories, but did H. erectus have campfires?

    CREDIT: © JOHN GURCHE

    Richard Wrangham was lying beside a fire at home on a cold winter night in Boston 10 years ago when his mind wandered to the first hominids to cook food. He imagined a small group of Homo erectus huddled around a campfire in Africa, roasting a leg of wildebeest and sharing a morsel of singed potato or manioc.

    As a Harvard University primatologist who studies wild chimpanzees in Africa, Wrangham knew that cooking is one of the relatively few uniquely human abilities. He also knew that our habit of predigesting our food by heating it allows us to spend less energy on digestion. And he suddenly realized that cooking is not merely the basis of culinary culture. It would have given our ancestors a big evolutionary advantage. “With cooking, we should see major adaptive changes,” says Wrangham. He argues that cooking paved the way for the dramatic expansion of the human brain and eventually fueled cerebral accomplishments such as cave painting, writing symphonies, and inventing the Internet. In fact, Wrangham presents cooking as one of the answers to a long-standing riddle in human evolution: Where did humans get the extra energy to support their large brains?

    Expanding the brain demands a new supply of energy, because human brains are voracious. The brain consumes 60% of the energy expended by a resting newborn baby. And a resting adult's brain uses 25% of the body's energy, as opposed to the 8% used on average by ape brains. But humans consume about the same number of calories as smaller-brained mammals of similar body size—for example, small women have the same basal metabolic rate as large chimpanzees.

    One classic explanation for this phenomenon is that humans saved energy by shrinking their gastrointestinal organs, effectively trading brains for guts as they shifted to a higher quality diet of more meat. That theory is now gathering additional support (see sidebar, p. 1560).

    Wrangham thinks that in addition, our ancestors got cooking, giving them the same number of calories for less effort. He floated his hypothesis back in the late 1990s (Science, 26 March 1999, p. 2004), but now he's championing it with a slew of new data, some of which he presented at a recent symposium.* “Even small differences in diet can have big effects on survival and reproductive success,” he says.

    Other researchers are enthusiastic about the new results. They show “the fundamental importance of energy budgets in human evolution,” says paleoanthropologist Robert Foley of Cambridge University in the U.K. But many aren't convinced by Wrangham's arguments that the first cooked meal was prepared 1.9 million to 1.6 million years ago, when the brain began to expand dramatically in H. erectus. They think that although saving energy by shrinking the gut may have been important at this time, the culinary explosion came later, perhaps during the evolution of our own species less than half a million years ago. “What all these adaptations are about is increasing the bang for the buck nutritionally,” says William R. Leonard, a biological anthropologist at Northwestern University in Evanston, Illinois. “The challenge ultimately is to work out the exact timing of what led to what.”

    Boosting brainpower

    Even those unsure about the role of cooking in human evolution agree that something crucial must have happened to our ancestors' energy budget. Line up the skulls of early hominids and you'll see why: From 1.9 million to 200,000 years ago, our ancestors tripled their brain size.

    The earliest members of the human family, including the australopithecines that lived from 4 million to 1.2 million years ago (see timeline, below), had brains about the size of a chimpanzee's. The brain didn't expand dramatically until just after H. erectus appeared in Africa about 1.9 million years ago (where it is also known as H. ergaster), with a brain that eventually averaged 1000 cubic centimeters (cc), or about twice the size of a chimpanzee's. The next leap in brain capacity came 500,000 to 200,000 years ago with the evolution of our own species, whose brains average 1300 cc, and of Neandertals (1500 cc).

    CREDIT: J. NEWFIELD/SCIENCE

    What spurred this dramatic growth in the H. erectus skull? Meat, according to a longstanding body of evidence. The first stone tools appear at Gona in Ethiopia about 2.7 million years ago, along with evidence that hominids were using them to butcher scavenged carcasses and extract marrow from bones. But big changes don't appear in human anatomy until more than 1 million years later, when a 1.6-million-year-old skull of H. erectus shows it was twice the size of an australopithecine's skull, says paleoanthropologist Alan Walker of Pennsylvania State University in State College. At about that time, archaeological sites show that H. erectus was moving carcasses to campsites for further butchering and sharing; its teeth, jaws, and guts all got smaller. The traditional explanation is that H. erectus was a better hunter and scavenger and ate more raw meat than its small-brained ancestors.

    But a diet of wildebeest tartare and antelope sashimi alone isn't enough to account for these dramatic changes, says Wrangham. He notes that H. erectus had small teeth—smaller than those of its ancestors—unlike other carnivores that adapted to eating raw meat by increasing tooth size. He argues that whereas earlier ancestors ate raw meat, H. erectus must have been roasting it, with root vegetables on the side or as a fallback when hunters didn't bring home the bacon. “Cooking produces soft, energy-rich foods,” he says.

    To find support for his ideas, Wrangham went to the lab to quantify the nutritional impact of cooking. He found almost nothing in food science literature and began to collaborate with physiologist Stephen Secor of the University of Alabama, Tuscaloosa, who studies digestive physiology and metabolism in amphibians and reptiles. Secor's team fed 24 Burmese pythons one of four diets consisting of the same number of calories of beef: cooked ground beef, cooked intact beef, raw ground beef, or raw intact beef. Then they estimated the energy the snakes consumed before, during, and after they digested the meat, by measuring the declining oxygen content in their metabolic chambers. Pythons fed cooked beef spent 12.7% less energy digesting it and 23.4% less energy if the meat was both cooked and ground. “By eating cooked meat, less energy is expended on digestion; therefore, more energy can be used for other activities and growth,” says Secor.

    Secor also helped Wrangham and graduate student Rachel Carmody design a pilot study in which they found that mice raised on cooked meat gained 29% more weight than mice fed raw meat over 5 weeks. The mice eating cooked food were also 4% longer on average, according to preliminary results. Mice that ate raw chow weighed less even though they consumed more calories than those fed cooked food. “The energetic consequences of eating cooked meat are very high,” says Wrangham.

    The heat from cooking gelatinizes the matrix of collagen in animal flesh and opens up tightly woven carbohydrate molecules in plants to make them easier to absorb. This translates into less time spent chewing: Chimpanzees spend 5 hours a day, on average, chewing their food, whereas hunter-gatherers who cook spend just 1 hour. In fact, Western food is now so highly processed and easy to digest that Wrangham thinks food labels may underestimate net calorie counts and may be another cause of obesity.

    The immediate changes in body sizes in the mice also suggest that our ancestors would have been able to get rapid benefits out of cooking, says Wrangham. That's why he thinks there would be little lag time between learning to cook and seeing anatomical changes in humans—and why he thinks early H. erectus must have been cooking. Less chewing and gnawing would lead to smaller jaws and teeth, as well as to a reduction in gut and rib cage size—all changes seen in H. erectus. Those changes would be favored by selection: Dominant chimpanzees that nab the biggest fruit in a tree, for example, have more offspring, says Wrangham. “It seems to me that groups that cook would have much higher reproductive fitness,” he says.

    Under fire

    Wrangham's synthesis of nutritional, archaeological, and primatological data adds up to a provocative hypothesis that hot cuisine fueled the brain. “It's such a nice explanation,” says paleoanthropologist Leslie Aiello, president of the Wenner-Gren Foundation in New York City. She says the smaller teeth in H. erectus indicate to her that it wasn't chewing much tough raw food: “Something must be going on. If only there were evidence for fire.”

    And that's the stumbling block to Wrangham's theories: Cooking requires fire. Irrefutable evidence of habitual cooking requires stone hearths or even clay cooking vessels. Solid evidence for hearths, with stones or bones encircling patches of dark ground or ash, has been found no earlier than 250,000 years ago at several sites in southern Europe. Charred bones, stones, ash, and charcoal dating to 300,000 to 500,000 years ago at sites in Hungary, Germany, and France have also been attributed to hearths. And burned flints, seeds, and wood found in a hearthlike pattern have been cited as signs of controlled fire 790,000 years ago in Israel (Science, 30 April 2004, p. 725).

    But even the earliest of those dates are long after the dramatic anatomical changes seen in H. erectus, says Wrangham. He notes that evidence for fire is often ambiguous and argues that humans were roasting meat and tubers around the campfire as early as 1.9 million years ago.

    Indeed, there are a dozen claims for campfires almost that ancient. At the same meeting, paleoanthropologist Jack Harris of Rutgers University in New Brunswick, New Jersey, presented evidence of burned stone tools 1.5 million years ago at Olduvai Gorge in Tanzania and at Koobi Fora in Kenya, along with burned clay. H. erectus has been found at both sites. Claims by other researchers include animal bones burned at high temperatures 1.5 million years ago at Swartkrans, South Africa, and clay burnt at high temperatures 1.4 million years ago in the Baringo basin of Kenya.

    But where there is smoke there isn't necessarily cooking fire: None of these teams can rule out beyond a doubt that the charring comes from natural fires, although Harris argues that cooking fires burn hot at 600° to 800°C and leave a trail different from that of bush fires, which often burn as low as 100°C.

    All the same, those most familiar with H. erectus aren't convinced they were chefs. Walker says that if the species was cooking with fire, he and others should have found a trail of campfires associated with its bones and stone tools. Others agree: “I think Wrangham's timing is wrong; cooking is associated with the rapid expansion of the brain in Neandertals and modern humans in the past half-million years,” says neurobiologist John Allman of the California Institute of Technology in Pasadena. Paleoanthropologist C. Loring Brace of the University of Michigan, Ann Arbor, agrees. He notes that less than 200,000 years ago, the lower faces of Neandertals and modern humans became smaller, and this is about the same time evidence appears for earth-oven cookery: “While fire has been under control back near 800,000 years, its use in the systematic preparation of food has only been over the last 100,000-plus years.”

    Others, such as Carel van Schaik of the University of Zurich, think that cooking may have played an important role early on, along with other adaptations to expand human brainpower. As Aiello observes, the big brain was apparently the lucky accident of several converging factors that accentuate each other in a feedback loop. Critical sources of energy to fuel the brain came from several sources—more meat, reduced guts, cooking, and perhaps more efficient upright walking and running. The order in which our ancestors adopted these energy-saving adaptations is under hot debate, with the timing for cooking hardest to test. Regardless, “it's all beginning to come together,” says Aiello.

    • *“Primatology Meets Paleoanthropology Conference,” 17-19 April, University of Cambridge, United Kingdom.

  10. PALEOANTHROPOLOGY

    Swapping Guts for Brains

    1. Ann Gibbons

    Cooking is the latest theory to explain how humans can feed their voracious brains enough calories to survive (see main text), but there's another, classic explanation: As our ancestors began to eat more meat, they took in enough calories at each meal to permit their guts to shrink, saving energy from digestion that in turn helped fuel the brain. Called the expensive tissue hypothesis, this theory was proposed back in 1995, when paleoanthropologist Leslie Aiello, then of University College London, and physiologist Peter Wheeler of Liverpool John Moores University discovered the tradeoff between guts and brains in 18 species of primates. They found that our gastrointestinal tract is only 60% of the size expected for a primate of similar size (Science, 29 May 1998, p. 1345).

    Now, after a decade of stasis, the idea is getting new support from studies of birds, fish, and primates. Researchers are also expanding the theory by showing that these energetic tradeoffs only happen in animals that grow up slowly, suggesting that a slowdown in juvenile development set the stage for a large brain. “I'm extremely happy that people are taking this up,” says Aiello. “Now we're going to get somewhere with it.”

    Aiello's hypothesis languished in the late 1990s, after researchers sought an energetic tradeoff in other animals—and didn't find it. Small-brained birds such as chickens don't have big guts, for example. And pigs have small brains and small guts.

    But more nuanced animal studies are now shoring up the theory. Last year, Carel van Schaik and Karin Isler of the University of Zurich clarified the situation in birds. Most birds, streamlined for long flights, already possess relatively small guts, they explained in the Journal of Human Evolution. So the energetic tradeoff happens instead between brains and pectoral or breast muscles: Bigger birds such as turkeys need bigger pectoral muscles to get airborne, and they have smaller brains—these are “the dumb ones we like to eat,” says van Schaik. Those with bigger brains have smaller pecs.

    Big, bigger, biggest.

    The brain of Australopithecus (top) is half the size of that of Homo erectus (middle) and one-third the size of that of H. sapiens (bottom).

    CREDIT: JEFFERY H. SCHWARTZ/NATIONAL MUSEUM OF KENYA

    The hypothesis also “holds up very well” in six other species of primates, based on preliminary data on wild monkeys and apes, says van Schaik. Brainy capuchin monkeys, for example, eat a high-quality diet of insects and bird eggs and have tiny guts. Howler monkeys have tiny brains and big guts to digest bulky leaves and fruit. “The new data beautifully show the tradeoff, gram for gram, between the brain and gut,” says neurobiologist John Allman of the California Institute of Technology in Pasadena.

    In another paper in press in the Journal of Human Evolution, van Schaik and Duke University graduate student Nancy Barrickman confirm earlier reports that primates can grow a big brain only if they adopt a particular life history strategy. Van Schaik's team shows that 28 species of primates, from tiny mouse lemurs to great apes, have slowed their metabolic rates, thus prolonging their juvenile years, postponing the age at which their first offspring are born, and living longer. This kind of life history is thought to be an adaptation for decreasing infant mortality and boosting maternal health in animals in which the adults have good chances for survival, says paleoanthropologist Jay Kelley of the University of Illinois, Chicago. It also allows primate brains time to grow larger and more complex before adulthood. “Once they slowed down life history, the big brain was inevitable,” says Kelley.

    Preliminary evidence from the eruption of molars in three Homo erectus juveniles, including the Nariokotome skeleton from Kenya, fits with this idea: The teeth suggest that this species had just begun a particularly dramatic developmental slowdown during childhood, a slowdown that reached an extreme in modern humans, says Christopher Dean of University College London.

    In this scenario, our ancestors slowed down their development even more as their brains got larger, which required additional energy from a smaller gut and better diet. Thus, the expensive-tissue hypothesis works in primates and other mammals whose young grow up slowly, but not in animals that grow up fast and die young, such as pigs. The energetic tradeoff with the brain can only happen if brains have enough time during development to grow big. “It turns out Leslie [Aiello] was exactly right—with a footnote,” says van Schaik.

  11. MARINE BIOLOGY

    Florida Red Tide Brews Up Drug Lead for Cystic Fibrosis

    1. Carol Potera*
    1. Carol Potera is a science writer in Great Falls, Montana.

    Among the nasty compounds produced by the organism responsible for Florida's red tides is one with some surprising properties

    Split personality.

    Karenia brevis (inset), which causes Florida's red tides (above), produces an antidote to its own bronchoconstricting toxins.

    CREDITS (TOP TO BOTTOM): PAUL SCHMIDT/CHARLOTTE SUN HERALD; CARMELO TOMAS/UNCW CENTER FOR MARINE SCIENCE

    Karenia brevis packs a powerful punch for a tiny organism. The culprit behind Florida's notorious red tides, the dinoflagellate produces a dozen toxins. And when a red tide coincides with an onshore breeze, emergency rooms brace for an influx of patients: The organism's airborne poisons, collectively known as brevetoxins, constrict bronchioles and send asthmatics and others with breathing difficulties scrambling for treatment. So the last thing you might expect from this nasty organism is a compound that alleviates wheezing and shortness of breath and helps clear mucus from the lungs. Yet one oddball in K. brevis's armamentarium, a compound called brevenal, does just that, at least in sheep. It's being evaluated as a potential treatment for the debilitating lung disorder cystic fibrosis (CF), which afflicts 30,000 people in the United States, and researchers are poised to test it on Florida's endangered manatees next time some of the mammals are poisoned by a red tide.

    Brevenal's surprising properties have been under investigation since the compound was first discovered in 2004 at the Center for Marine Science (CMS) at the University of North Carolina, Wilmington. New findings reported at the Society of Toxicology meeting in Charlotte, North Carolina, in late March indicate that the compound binds to a novel receptor in the lung, and that a synthetic version seems to work as well as the natural compound in laboratory and animal tests. Yet to be determined, however, is just why K. brevis produces a compound that counteracts some of the effects of its own fearsome suite of toxins. But then again, it's not clear why it produces those toxins either, notes CMS Director Daniel Baden.

    From beach to bedside?

    The oceans have long been touted as a potential source of new drug candidates, and researchers have systematically scoured sponges, corals, and marine microorganisms for likely compounds. Brevenal wasn't found that way: A shortage of guppies for routine toxicology screening led to its serendipitous discovery.

    CMS pharmacologist Andrea Bourdelais was measuring the lethality of extracts isolated from brevetoxins by adding a tiny bit of test material to a beaker containing water and a guppy. If the fraction is toxic, the fish dies. Toxicologists usually retire fish that survive such tests to prevent subsequent chemical interactions, but the laboratory's supply of guppies was running low, so Bourdelais reused the survivors. When she added a known toxic fraction to beakers with leftover guppies, to her surprise, they did not die. “I had a spontaneous gut feeling—a gee-whiz moment—that the first material was an antidote to the second one,” Bourdelais recalls.

    Bourdelais subsequently showed that the mysterious extract (later named brevenal) protects guppies from death by brevetoxins in a dose-dependent fashion. The lab already had discovered that brevetoxins act on sodium channels, so Bourdelais used a standard lab test to check whether brevenal prevents the toxins from binding to the sodium channel receptor. It did. Bourdelais then sent the mysterious compound to William Abraham, research director at Mount Sinai Medical Center in Miami Beach, Florida, who had determined that all brevetoxins set off bronchoconstriction in a sheep model of asthma. Brevenal, he discovered, suppresses this effect.

    Defective sodium transport is a hallmark of CF; it draws water away from airway surfaces, making mucus drier and stickier. Sodium channels are therefore a primary target for CF drugs, so Abraham compared brevenal to the CF drug amiloride in the sheep model. In the January 2005 American Journal of Respiratory and Critical Care Medicine, he reported that brevenal not only blocks bronchoconstriction, but it also increases mucus clearance—and it does so at concentrations 1 million-fold lower than amiloride. “We were excited that brevenal may have potential as a CF drug,” says Abraham, based on its apparent potency compared to amiloride, which has a mediocre track record in the clinic. Also intrigued was AAI Pharma Inc., a company headquartered in Wilmington, North Carolina. It negotiated an exclusive license in 2004 to explore brevenal's potential as a treatment for CF.

    Since then, Baden, Bourdelais, Abraham, and their colleagues have continued to probe brevenal's modus operandi. At the toxicology meeting, they reported that it acts on a new drug target: It binds a novel receptor in lung tissue associated with voltage-gated sodium channels; amiloride binds a related receptor, the epithelial sodium channel receptor. Baden also reported that chemists in the laboratory of Makoto Sasaki at Tohoku University in Sendai, Japan, have synthesized brevenal from cheap starting materials. Dubbed ME-1, the synthetic agent performs as well as natural brevenal in receptor-binding assays and in preventing bronchoconstriction and clearing lung mucus in sheep, Baden reported.

    Promises of new therapies for CF surface regularly, but many fizzle out. And in spite of its early promise, brevenal still has a long way to go. Steve Fontana, vice president of legal affairs at AAI Pharma, says the company's scientists are evaluating brevenal and its derivatives for safety and biological activity. Once they find the best drug candidate, the company will file an Investigational New Drug application with the U.S. Food and Drug Administration (FDA), but clinical trials are several years out.

    In fact, humans may not be the first test subjects for brevenal's therapeutic potential. That honor may go to Florida's endangered manatees.

    “A red tide event spreads like a wildfire and poisons birds, fish, sea turtles, manatees, and dolphins,” says Andrew Stamper, a veterinarian at Disney's Animal Programs in Lake Buena Vista, Florida. In March and April of this year, about 30 manatees died following a red tide spike, and 150 died in 1996 from red tide poisoning. Only 3000 of the mammals are estimated to live along Florida's coast.

    In February, Stamper received a “compassionate use” permit from FDA to evaluate the safety and effectiveness of brevenal in manatees. Stamper's colleague, veterinarian David Murphy of Lowry Park Zoo in Tampa, Florida, will test brevenal on rescued manatees brought to the zoo's rehabilitation center. When poisoned by brevetoxins, manatees become paralyzed and drown because they cannot hold their head above water to breathe. Murphy straps lifejackets underneath rescued manatees and supports their half-ton bodies in shallow tanks. Normal breathing resumes in a few days, but full recovery takes months. Brevenal “will add a new weapon in our arsenal,” Murphy says. The next time a red tide hits, “we'll be ready to go,” says Stamper.

  12. GEOPHYSICS

    Stalking a Volcanic Torrent

    1. John Bohannon

    The setting of the climax of the Lord of the Rings, New Zealand's Mount Ruapehu is earning a second reputation as a laboratory for understanding killer mudslides

    MOUNT RUAPEHU, NEW ZEALAND—From a helicopter, the steaming lake nestled in the snowy crater below looks inviting, like a giant Jacuzzi for Maori gods. But taking a dip would be a bad idea: Mount Ruapehu's rocky chalice burbles with scalding sulfuric acid. The otherworldly volcano was used for scenes of the hobbit Frodo ascending perilous Mount Doom in the Lord of the Rings. But the real Mount Doom is a killer, too. Cradling a deep lake between its 2500-meter peaks, Ruapehu is prone to lahar flows, one of the most dangerous—and least understood—volcanic hazards. In 1953, a lahar (an Indonesian word meaning mudslide) here knocked out a train bridge; 5 minutes later, a passenger train plummeted into a gorge, killing 151 people.

    Earlier this year, Ruapehu's acidic lake was unleashed again. Noxious waters blasted down the slopes, picking up rocks as big as cars along the way. But this time, not a single person was hurt.

    Not only was the lahar predicted by an early warning system, but the event also generated “orders of magnitude more data than for any other lahar event anywhere in the world,” says Vernon Manville, a volcanologist at the Institute of Geological and Nuclear Sciences (GNS) in Taupo, New Zealand. “This has been a 10-year experiment in the making.” The information mother lode should help scientists better protect the millions of people who live in the path of lahar-generating volcanoes around the world.

    Wiring up Mount Doom

    In 1995, Ruapehu's roughly 50-year cycle of eruptions kicked in again. Gobs of lava burped up from the bottom of the crater, adding 7 meters of loose ash and stones, called tephra, to the rim. The deepened crater soon filled with snowmelt from above and sulfuric acid and other material from fumaroles below. One of Ruapehu's grumbles late last year triggered an earthquake that whipped the lake into a frenzy, slamming the crater walls with 6-meter waves. “It was clear that it was only a matter of time” before the tephra rim failed and caused a lahar, says Manville.

    Bracing for the big one.

    Readying for a world first: recording data from a volcanic lahar in action.

    CREDIT: GNS SCIENCES

    But exactly when the big one would strike was unknown. The inherent unpredictability of lahars is what makes them so dangerous, says Cynthia Gardner, a geologist at the Cascades Volcano Observatory in Vancouver, Washington. Most eruptions are preceded by a telltale increase in underground vibrations, swelling of the slopes, and changes in the composition of vented gases. But a lahar “can occur without warning,” Gardner says. Besides eruptions and earthquakes, even a heavy rain can be enough to loosen unstable material at the top of many steep volcanoes. Gravity does the rest.

    Another deadly aspect of lahars is the great distances they can travel down river valleys, sometimes hundreds of kilometers from a volcano. “Imagine,” says Gardner, “a rushing surge of water coming toward you that's tens of meters thick” and carrying boulders, trees, and even houses. The best chance of survival is to get out of the way, but lahars are too fast to outrun. The deadliest lahar in recent history occurred in 1985, when an eruption of the Nevado del Ruiz volcano in Colombia triggered mudslides 50 meters thick that buried a town 70 kilometers away, killing 23,000 people.

    To warn of an impending lahar at Ruapehu, a team led by Harry Keys, an engineer at the New Zealand Department of Conservation, wired up the mountain. His group installed underground microphones, called geophones, at the lake's rim to record vibrations. A buried wire was set to trip if the tephra dam broke, and a depth meter was deployed in the lake to record sudden drops. Geophones on the slopes listened for approaching flows. When any signal leaped above the background noise, an alarm message was sent automatically to scientists, police, and the highway authorities.
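    The article gives no implementation details for the alarm logic; the sketch below is a purely hypothetical illustration, in Python, of the kind of trigger it describes, in which an alert goes out when a sensor reading climbs well above the running background level. The class name, window length, threshold, and recipient list are all invented for this example and are not part of the Ruapehu system.

    ```python
    # Hypothetical sketch of a threshold-exceedance trigger, loosely modeled on the
    # behavior described for Ruapehu's geophone network. Not the actual system.
    from collections import deque
    from statistics import mean, stdev

    class ThresholdAlarm:
        def __init__(self, window=600, k=5.0):
            self.history = deque(maxlen=window)  # recent background readings
            self.k = k                           # sigmas above background that trip the alarm

        def update(self, reading):
            """Record a reading; return True if it should trigger an alert."""
            triggered = False
            if len(self.history) >= 30:  # need enough samples to estimate background
                bg, spread = mean(self.history), stdev(self.history)
                triggered = reading > bg + self.k * max(spread, 1e-9)
            self.history.append(reading)
            return triggered

    def notify(recipients, message):
        # Placeholder: the real system sent automatic messages to scientists,
        # police, and highway authorities.
        for r in recipients:
            print(f"ALERT to {r}: {message}")

    alarm = ThresholdAlarm()
    for sample in [1.0, 0.9, 1.1, 1.2, 0.8, 1.0] * 10 + [9.5]:
        if alarm.update(sample):
            notify(["duty scientist", "police", "highway authority"],
                   "geophone signal far above background")
    ```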

    Mount Doom, brooding.

    Data from Ruapehu's lahar flow will help better protect millions of people living near other volcanoes.

    CREDIT: GNS SCIENCES

    Scientists thought this would give a good idea of when a Ruapehu lahar would strike; where was another question. “What we're learning is that lahars evolve,” says Sarah Fagents, a geophysicist at the University of Hawaii, Manoa. Although a lahar may start as a liquid with the low viscosity of water, it quickly becomes as thick as wet concrete as it picks up fine sediment, then morphs back into a less-viscous flow as it sheds particulates down the slope. As if that weren't already hard enough to simulate on a computer, as a lahar rips up material here and dumps it there, the changing terrain steers the flow.

    “A whole lot of footwork” was required to record how Ruapehu's slopes looked before a lahar struck, says Alison Graettinger, a graduate student at the University of Waikato in nearby Hamilton. The grunt work included taking samples and Global Positioning System measurements to map out the composition and distribution of material on the slopes. The researchers also used light detection and ranging (LIDAR) technology to build three-dimensional models of land features by firing laser pulses and measuring the delay of the reflection.
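    As a point of reference (standard LIDAR geometry, not something stated in the article), the measured round-trip delay converts to range via

    \[ r = \frac{c\,\Delta t}{2}, \qquad \text{e.g. } \Delta t \approx 6.7\ \mu\text{s} \;\Rightarrow\; r \approx \frac{(3\times10^{8}\ \text{m/s})(6.7\times10^{-6}\ \text{s})}{2} \approx 1\ \text{km}. \]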

    By early 2007, all that remained on the scientists' to-do list was to install the last lahar-weighing sensor and a camera. Before the team could finish, nature intervened.

    Chronicling its wrath

    At 11:21 a.m. on 18 March, a 20-meter-wide section of the tephra dam crumbled. The breach trebled in size in 10 minutes. At its peak, the lake discharged the equivalent of an Olympic swimming pool of water every 4 seconds. By the time the flow ended a couple of hours later, 1.3 million cubic meters had drained.
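    A rough consistency check on those figures (assuming a nominal Olympic-pool volume of about 2500 cubic meters, which the article does not specify):

    \[ Q_{\text{peak}} \approx \frac{2500\ \text{m}^3}{4\ \text{s}} \approx 625\ \text{m}^3/\text{s}, \qquad \frac{1.3\times10^{6}\ \text{m}^3}{625\ \text{m}^3/\text{s}} \approx 2100\ \text{s} \approx 35\ \text{min}, \]

    so with the discharge tapering off from that peak, a total draining time of a couple of hours is plausible.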

    For the first time ever, researchers knew a lahar was on the way and watched the event unfold.

    The ultimate cause of the dam's failure was 5 days of heavy rain that nudged up the water level inside the crater by a half-meter. However, signs of trouble had begun appearing months earlier. In January, the team found that the dam was becoming more electrically conductive—an indicator that water was infiltrating the dam's porous matrix—and that tephra sediment was trickling down its outer surface. The major collapse was preceded by several smaller ones starting more than an hour before—all captured on camera. These early tremors were also picked up by the geophones and set off the alarm system.

    It was clear this was no false alarm when a “landslide-type failure” cut the trip wire. Layers began sloughing from the dam's outer surface inward until erosion reached “a critical point” and rapidly accelerated about 15 minutes later, says Chris Massey, a GNS geological engineer in Wellington. The lahar traveled 155 kilometers along river paths until it reached the ocean at 3 a.m. the next day. The damage was minor—a small bridge, a block of toilets, and a farmer's pump shed were destroyed—and the alarm system allowed authorities to close highways well in advance to clear the way.

    Since that day, a team led by Manville and Shane Cronin, a volcanologist at Massey University, has worked nonstop to gather data from Ruapehu before wind and rain erase the evidence. The techniques used to establish the “before” picture—such as LIDAR scans and field sampling—are being repeated to get the “after.” Data analysis is just beginning, but it is already providing “a unique insight into how natural dams fail,” Massey says. For example, he says, it is clear that the dam's stability “was highly sensitive to relatively minor changes in lake water level.”

    Down the mountain, other data revealed that as the lahar surged along a riverbed, it may have created a soliton, a solitary wave that can propagate over great distances without losing energy or changing shape. Photographs of the lahar's leading edge seem to show such a wave, and analysis of water samples indicates that the lahar was pushing a huge swell of fresh river water ahead of it. Solitons have been observed in canals, says GNS geophysicist Desmond Darby, but this is the first evidence of one generated by a volcano. Although the soliton didn't make the lahar any more vicious, it's “an interesting phenomenon,” Manville says. The Ruapehu researchers will present their findings next month at a meeting of the International Union of Geodesy and Geophysics in Perugia, Italy.
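    For readers unfamiliar with the term, the textbook example of a soliton (a generic illustration, not a model of the Ruapehu flow) is the single-soliton solution of the Korteweg-de Vries equation for shallow-water waves,

    \[ u_t + 6\,u\,u_x + u_{xxx} = 0, \qquad u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left(\frac{\sqrt{c}}{2}\,(x - ct)\right), \]

    a hump that travels at speed c without changing shape, which is the behavior the researchers describe.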

    The ongoing scrutiny of Ruapehu's latest lahar could save lives elsewhere. Probably the biggest threat is in Ecuador, where some 100,000 people live in the direct path of potential lahars from Mount Cotopaxi, says Patricia Mothes, a volcanologist at the Geophysical Institute in Quito. The data coming out of New Zealand get at “some of the questions that I always have” about how to assess lahar risk, she says, such as which conditions are more likely to spawn sediment-laden lahars capable of “transporting huge boulders very long distances.”

    Someday, Mount Doom may become more famous for saving lives than for menacing hobbits.
