News this Week

Science  08 Aug 2008:
Vol. 321, Issue 5890, pp. 754



    Scientists Seek Answers, Ponder Future After Anthrax Case Suicide

    1. Martin Enserink,
    2. Yudhijit Bhattacharjee

    Did he really do it?

    That's the main question on the minds of many scientists this week after an Army researcher apparently close to being indicted for the worst bioterror attack in U.S. history took his own life. As researchers tried to make sense of scraps of information filtering out in the media, many were hoping prosecutors would soon reveal their entire case against Bruce Ivins of the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) in Frederick, Maryland, so the country can scrutinize the evidence that led the Federal Bureau of Investigation (FBI) to believe he mailed the anthrax-laden letters in fall 2001. That evidence likely includes sophisticated and possibly debatable scientific analyses.

    For those who knew Ivins, the question is personal. “He was a nice guy, a sweet guy,” says fellow anthrax veteran Martin Hugh-Jones of Louisiana State University in Baton Rouge. “He wasn't on my shortlist of possible suspects.” In the wider field, the case is prompting other questions: If one of their own committed the crimes, will the biodefense budget, which ballooned after 2001, shrivel? Will public confidence in the field decline, and will rules for handling possible bioterror agents become draconian?

    Ivins, 62, died at a Frederick hospital on 29 July after taking an overdose of painkillers. An author on 54 PubMed-listed papers, he had spent most of his career developing drugs and vaccines against anthrax, studies for which mice, rabbits, and monkeys were frequently exposed to the deadly disease. As Science went to press, the FBI had not named Ivins as a suspect, saying more news would be forthcoming after survivors and victims' families had been notified. Ivins's lawyer, Paul Kemp of Rockville, Maryland, has declared his client's innocence, alleging in a statement that mounting FBI pressure had “led to his untimely death.”

    The case against Ivins is likely to rest on “a combination of investigations—including good old gumshoe work, science, and perhaps other sources of information and evidence,” says Randall Murch, a bioforensics expert who worked for the FBI and is now a Virginia Polytechnic Institute and State University administrator based at the university's office in Alexandria.

    Case shut?

    Researchers are clamoring for the FBI to release evidence implicating Bruce Ivins in the anthrax attacks.


    In a statement, the FBI credited “new and sophisticated scientific tools” for the “substantial progress” made recently. That, and a report in the Los Angeles Times that the bureau recruited Ibis Biosciences, a California company specializing in high-throughput genetic analyses, leads microbial genomicists such as J. Craig Venter of the Venter Institute (JCVI) in Rockville, Maryland, to speculate that an approach called metagenomics—which looks for the genomic makeup of an entire population of cells instead of a single one—may have been used to try to link the Bacillus anthracis spores mailed to two U.S. Senators and media outlets to those used in Ivins's lab.

    News reports early this week also said genomic analyses suggested that the anthrax powder was a mix of two strains, one obtained at Dugway Proving Ground, a testing facility in Utah, and the other from USAMRIID. Opinions differ sharply among experts about whether a so-called lyophilizer, which Ivins was reported to have used, would suffice to produce the extremely fine, floating powder found in the Senate envelopes.

    Whatever the scientific evidence, it would face stiff challenges in court, experts say; in contrast to human DNA traces, whose utility has become well-accepted, microbial forensic evidence is largely untested. Now that Ivins won't face trial, it's even more important that scientists be able to pore over all the evidence, says anthrax researcher R. John Collier of Harvard Medical School in Boston. “I would love to see what they have,” Collier says. Still, scientific scrutiny can't replace a court of law, some say. “What's the forum? Are we going to discuss genetic frequencies in a dark hall in a Marriott somewhere?” another anthrax scientist asks.

    The questions are critical because the FBI was wrong before. Just 6 weeks ago, the government agreed to pay $5.8 million to Steven Hatfill, a former colleague of Ivins's at USAMRIID whose life was turned upside down in 2002 after then-Attorney General John Ashcroft called him a “person of interest” in the anthrax attacks. Virologist Thomas Geisbert, associate director of Boston University's (BU's) emerging infectious diseases lab, says he can't help wondering whether Ivins's death could be the result of “another Hatfill situation”—except that Ivins was unable to handle the intense pressure. There were signs of mental instability; Ivins had recently been hospitalized for erratic behavior, and on 24 July, a Maryland court issued a restraining order against Ivins at the request of a therapist who said he had a history of making homicidal threats.

    The FBI, which had little microbial forensic experience back in 2001, relied on a network of labs—including Ivins's at USAMRIID—to aid its investigation. (The Institute for Genomic Research in Rockville, Maryland, not only sequenced many anthrax strains but worked on the case before it was integrated into JCVI, says Venter.) The bureau has ordered researchers not to discuss or publish that work. “As a scientist, I hope I'll be able to do that now,” says Geisbert, who in his previous job at USAMRIID produced electron micrographs of the spores used in the letters sent to the Senate.

    Many believe that the case is bound to have wider ramifications for the biodefense field. Before 2001, such research was largely confined to USAMRIID and the Centers for Disease Control and Prevention in Atlanta, Georgia. The anthrax letters, which plunged a nation reeling from 9/11 into further anxiety, helped spur a massive increase in the biodefense budget—now some $5.4 billion a year—and a construction boom in biosafety labs. “The entire rationale for that expansion was fraudulent,” says Richard Ebright, a prominent biodefense critic at Rutgers University in Piscataway, New Jersey, because it assumed a threat from outside the country. The boom has made the country less safe, Ebright maintains: “The spigot needs to be closed.”

    Others say the threat remains real. “It would be unfortunate if people take away the message that the only individuals we should be concerned about are deranged biodefense scientists,” says biosecurity expert Gerald Epstein of the Center for Strategic and International Studies in Washington, D.C. But he acknowledges that the debate about how much biodefense is enough will likely reignite.

    There may be other consequences, says Paul Keim, an expert in microbial fingerprinting at Northern Arizona University in Flagstaff who has also been recruited by the FBI. After the anthrax attacks, Congress passed legislation to limit access to bioterror agents to licensed researchers and imposed strict rules on where and how they can be used. Although researchers have decried them as overly cumbersome, the anthrax case may renew pressure to stiffen them further, Keim says. Additional measures could include cameras in virtually every lab, speculates Alan Pearson of the Center for Arms Control and Non-Proliferation in Washington, D.C. “The solution may well be if you work with pathogens like smallpox and anthrax, be prepared to be watched,” he says.

    The involvement of a U.S. scientist would also give new ammunition to local groups that have tried to stop construction of new biosafety labs. At BU, now a major academic biodefense hub, “we have had a lot of opposition—and this is not going to help,” Geisbert says.


    Scientists Targeted in California Firebombings

    1. Greg Miller

    Early Saturday morning, a Molotov-cocktail-like device set fire to the home of a developmental neurobiologist at the University of California, Santa Cruz (UCSC). His family fled down a fire escape from a second-story window. Around the same time, a similar device destroyed the car of another UCSC researcher. As Science went to press, no one had claimed responsibility for the attacks, but the university and police suspect they are the work of animal-rights extremists.

    In recent years, universities and law enforcement officials in the United States have had to grapple with increasingly personal threats, harassment, and attacks on animal researchers and their families (Science, 21 December 2007, p. 1856). California has been an epicenter: In the past few years, several biomedical researchers at UC Los Angeles have been targeted, and more recently, scientists at other UC campuses have endured harassment and had their homes vandalized. Twenty-four UC Berkeley researchers and seven staff members have been harassed in the past few months, according to a university spokesperson. In February, six masked intruders tried to force their way into the home of a UCSC researcher during a birthday party for her young daughter.

    Crime scene.

    Police suspect animal-rights extremists are behind the destruction of a UCSC researcher's car last weekend.


    Concerns were sparked again last week in Santa Cruz by pamphlets discovered in a downtown coffee shop and turned in to police. Titled “Murderers and Torturers Alive and Well in Santa Cruz,” they contained the photographs, home addresses, and phone numbers of 13 UCSC faculty members, along with “threat-laden language” condemning animal research, says Captain Steve Clark of the Santa Cruz police.

    David Feldheim, the neurobiologist whose house sustained substantial damage in the fire attack, was one of those listed. Feldheim uses mice in studies of how the brain's visual system develops. The researcher whose car was destroyed was not on the list, Clark says. UCSC spokesperson Jim Burns declined to name that researcher or say whether he or she uses animals for research. A third researcher, who was named in the pamphlet, lives “almost next door,” Clark says, raising the possibility that the culprits missed their intended target.

    On Saturday, UCSC Chancellor George Blumenthal condemned the attacks as “criminal acts of anti-science violence.” Several hundred people gathered on campus Monday for a rally in support of the researchers who'd been attacked, according to the Santa Cruz Sentinel, and officials announced a $30,000 reward for information leading to the arrest and prosecution of those responsible.

    The Santa Cruz police have handed over the investigation to the Federal Bureau of Investigation, which will investigate the incidents as acts of domestic terrorism. “We have some good leads and some helpful witnesses,” Clark says, but so far no suspects.


    Court Ruling Scrambles Clean Air Plans, Leaving a Vacuum

    1. Erik Stokstad

    The U.S. Environmental Protection Agency (EPA), state regulators, and the electric power industry are struggling to come to grips with the impact of a surprise court decision last month that dismantled a major air-pollution regulation. According to EPA estimates, the rules would have prevented more than 13,000 premature deaths by 2010, cut haze, and reduced acid rain, but a federal appeals court, citing fundamental flaws, ordered EPA to scrap the plan. “We're back to square one,” says John Walke of the Natural Resources Defense Council in Washington, D.C.

    EPA has until 25 August to appeal the decision. Officials there warn that without the new rule, air pollution could worsen, and power companies that proactively implemented pollution controls will in effect be penalized. In a Senate hearing last week, industry representatives and regulators called for Congress to come up with new legislation quickly. But it's unlikely to happen soon, leaving state regulators scrambling to figure out how they will meet air-quality standards.

    The regulation, called the Clean Air Interstate Rule (CAIR), was designed to fix a problem faced by many East Coast states: So much pollution blows in from other states that they can't meet EPA standards for air quality. CAIR would have capped the amount of pollution in the entire region and issued each state “allowances” representing units of pollution. Power companies had already started buying and selling these allowances, which provided a financial incentive to clean up their power plants. This “cap-and-trade” scheme was widely seen as a rapid and cost-effective way to clear the air in both upwind and downwind states, similar to the successful scheme enacted to control acid rain in 1990.

    CAIR was also based on a trading program for nitrogen oxides (NOx), which lead to smog. Since 2000, this program, which covers 20 eastern states and operates during the 5-month ozone season, has cut summer NOx emissions by 60%. It was to end in September as CAIR phased in, although EPA is now mulling whether to extend it.

    CAIR was designed to further cut both sulfur dioxide (SO2) and NOx emissions over the entire year in 28 eastern states. By 2010, it was projected to lower SO2 emissions by 45% from 2003 levels and NOx by 53%. With even tighter caps implemented in 2015, the system was predicted to save up to $100 billion in health and other costs, as well as end chronic acidity in Adirondack lakes by 2030. EPA considers it “the most significant action to protect public health and the environment” in nearly 20 years.

    Not everyone was happy, however. Environmentalists and some states complained that the rule was too lax; some sued. Several power companies and states sued EPA for other reasons, including how it distributed the allowances. On 11 July, the U.S. Court of Appeals for the District of Columbia Circuit decided that CAIR had “more than several fatal flaws.” Among them, it ruled that CAIR was not stringent enough and that 2015 was too late for tightening the caps.

    The loss of CAIR will likely slow efforts to improve visibility in national parks and set back international negotiations over long-range transport of air pollution, predicts Brian McLean, who directs EPA's Office of Atmospheric Programs. And regional pollution could well increase because without cap-and-trade incentives, power companies might not add or turn on pollution-control equipment, which is expensive to install and operate. State regulators hope to encourage companies to keep the equipment running, but “there is no guarantee that will happen,” Christopher Korleski, who heads Ohio's Environmental Protection Agency, told a Senate Environment and Public Works subcommittee.


    Smog and acid rain may increase if power plants turn off scrubbers.


    The court decision penalizes companies that have already spent billions on new equipment to prepare for CAIR, McLean told the Senate subcommittee. PPL Corp. in Allentown, Pennsylvania, for example, has invested nearly $1.5 billion in scrubbers, driven largely by the expected market value of pollution allowances. Those values collapsed after the ruling, and PPL lost roughly $70 million on the SO2 allowances it had banked. Companies also fear that states will force them to use their new equipment while dirtier competitors won't bear those costs. “That will have a chilling, long-term effect” on investment in pollution-control technology, predicts Eric Svenson of the Public Service Enterprise Group, an energy company in Newark, New Jersey.

    Everyone agrees that something needs to be done. Twelve states have asked EPA to repromulgate a CAIR-like rule acceptable to the courts. But PPL Executive Vice President William Spence predicts that any regulatory solution “will continue to be plagued by litigation.” EPA's McLean says the agency is evaluating its options. The Senate subcommittee chair, Tom Carper (D-DE), plans to have more hearings this fall, but Senator George Voinovich (R-OH) said he doubts the Senate will deal with the issue until late spring.


    Ethics Questions Add to Concerns About NIH Lines

    1. Gretchen Vogel,
    2. Constance Holden

    Some federally funded scientists are having second thoughts about working with the 21 human embryonic stem (ES) cell lines available to them under President George W. Bush's policy, following a report indicating that the cells are getting increasingly stale—not only scientifically but ethically as well.

    A recent article by Rick Weiss of the Center for American Progress in Washington, D.C., has drawn attention to a paper by bioethicist Robert Streiffer in the May issue of The Hastings Center Report. Streiffer, of the University of Wisconsin, Madison, says consent forms signed by embryo donors for the approved lines are inadequate by today's standards. “We know how to do things better now,” says Streiffer, who believes this is yet another reason why the Administration's policy, which limits federal funding to work with cell lines derived before August 2001, is untenable.

    Streiffer says most of the consent forms fall short of standards for informed consent in embryo research that were in place as early as 1994. And only one, from the University of California, San Francisco, comes close to meeting 2005 guidelines from the National Academy of Sciences. He singles out forms from two companies—BresaGen, now owned by Novocell, and Cellartis—as particularly inadequate. BresaGen's had only one sentence saying that defective embryos created from in vitro fertilization might be used in research. Cellartis told donors that cells would be destroyed after a research project. Other forms failed to mention that embryos would be destroyed and that cells derived from them could end up in experiments around the world.

    ES cell alternative?

    iPS cells have been used to create motor neurons (above).


    “I was shocked,” says Lorraine Iacovitti, a neurologist at the Thomas Jefferson University Medical College in Philadelphia, Pennsylvania, who has used one of the BresaGen cell lines. Most researchers “just assumed that the consent had been taken care of.” Now two universities, Stanford and Johns Hopkins, are considering prohibiting work with the companies' five cell lines, which are not widely used.

    Story Landis, chief of the stem cell task force at the National Institutes of Health (NIH), says no changes are planned in response to the report. Allan Robins of Novocell in Athens, Georgia, says NIH officials told BresaGen in 2001 that “they felt what we had done was reasonable.” He says that ideally, the company would ask the donors for retroactive consent, but it is impossible to trace them. A representative from Cellartis told Science the company is preparing a correction to Streiffer's article.

    Many scientists say they would prefer to work with new human ES cell lines rather than any of the aging lines on the presidentially approved list. In addition to the ethical concerns, the cells are problematic for scientific reasons—for one, they were grown on mouse “feeder” cells, which makes them unsuitable for use in human therapies. But some scientists have been constrained from switching to new lines because they would lose federal funding.

    Two pending developments may change that. Both senators Barack Obama and John McCain have said that they support congressional efforts to expand the number of cell lines available to federally funded researchers. If the new president doesn't act fast enough, Congress likely will; both houses have twice passed such legislation only to be thwarted by Bush vetoes.

    In addition, remarkable progress with a new type of cell—induced pluripotent stem (iPS) cells—promises an alternative to cell lines derived from embryos. When Japanese researcher Shinya Yamanaka announced 2 years ago that he had cultivated colonies of ES-like pluripotent cells by inserting just four genes into mouse skin cells, many thought it would be years before the same could be done with human cells. But last year, two groups pulled it off (Science, 1 February, p. 560).

    In the past 4 months, scientists have used iPS-derived cells to treat blood and neurological disorders in rats and mice, for instance. Two groups at Harvard University have developed colonies of iPS cells from patients with a variety of genetic diseases. Yet other work has shown that small molecules can be substituted for some potentially cancer-causing genes used to derive the original iPS cells.

    Major hurdles remain. It's not clear whether iPS cells will behave exactly like ES cells. And they can't be used therapeutically because the viral vectors scientists use to introduce genes could be hazardous. But given the speed of developments, at least one stem cell researcher, Rudolf Jaenisch of the Massachusetts Institute of Technology in Cambridge, believes “we will solve this much earlier than we think.”


    Phoenix's Water May Be Gumming Up the Works

    1. Richard A. Kerr

    Many media outlets hailed the Phoenix lander last week for confirming the presence of water on Mars. Actually, the two Viking spacecraft won that honor 3 decades ago when they confirmed that the northern polar cap contains water ice. Then rumors began flying that Phoenix has evidence of “potential habitability” on polar Mars, which no one was yet ready to discuss. But the mission's most dramatic achievement so far (see sidebar for others) has been touching martian water ice—which may also be creating the mission's biggest challenge.

    Out, damned dirt.

    The Phoenix robotic arm (above right) can scrape up enough dirty ice for a sample, but it won't fall out of the scoop (inset).


    The often icy soil that Phoenix was sent to analyze “has very interesting physical properties,” as Phoenix team member William Boynton put it last week—so interesting that team members spent the middle third of the mission trying in vain to get an ice-rich sample into one of the lander's two prime analytical instruments. Now Phoenix has moved on to less challenging, less icy samples while team members try to sort out the mysteries of alien dirt.

    This isn't Phoenix's first problem with martian soil. When the lander's robotic arm dumped a scoop of non-icy near-surface soil onto a screen leading to the Thermal and Evolved-Gas Analyzer (TEGA), the soil just sat there. After scientists vibrated the screen a half-dozen times over several days, the soil just as mysteriously relented and suddenly fell through the screen and filled the TEGA sample cell.

    “Right now, I can say none of us knows” what's going on with Phoenix soil, says team member Douglas Ming of NASA's Johnson Space Center in Houston, Texas. Ideas abound, though. The clumpiness—seen on lander and rover missions since the Viking days—might reflect either a buildup of electrostatic charge on the finest particles, a mechanical interlocking at particle edges, or the dampening effect of salts, says Ming. Whatever the cause, lander operators have sidestepped the soil clumpiness by having the arm slowly sprinkle soil from the scoop. In the meantime, however, the extended vibration of the screen apparently caused an electrical short circuit, which forced team scientists to consider that their next TEGA analysis might be their last.

    Under pressure to get results, team members went for the gold: the rock-hard dirty ice at the bottom of a 5-centimeter-deep trench dug through the soil. Only ice could yield the isotopic composition of martian water, and it might have preserved much-sought organic matter. Day after day, the Phoenix team practiced how best to scrape, rasp, and scoop up ice chips and deliver the sample to TEGA. Daily visual checks at each step dragged the process out to 30 days, one-third of the planned mission. On the first attempt to deliver a sample, the filled scoop was tilted over TEGA and vibrated, “and nothing came out,” says robotic arm co-investigator Raymond Arvidson of Washington University in St. Louis, Missouri. “We repeated the experiment, but with more vibration, and it still didn't come out,” even with the scoop turned upside down. “Of all the things that could go wrong, that was the least likely,” says Boynton.

    Why it went wrong remains a mystery. Planetary scientist David Paige of the University of California, Los Angeles, shares team members' suspicions that at least part of the problem is that Phoenix forcibly removed the ice from the coldest spot around using a relatively warm scoop in a relatively warm atmosphere. That could lead to melting and refreezing, says Ming, much as ice cubes fresh from the freezer can fuse into a single, pesky clump in your glass. Ming says successful tests of the scoop before launch didn't include such changes in temperature. “Maybe we needed to do more testing,” he says, but neither time nor funding would have allowed that.

    “It's unfortunate we spent 30 [days] working on delivering ice,” says Arvidson. That left less than 30 days in the planned mission with six TEGA sample cells remaining, each requiring 7 days to analyze. “I'm concerned but not panicked,” he adds. NASA has now extended the mission by 30 days, and because plenty of science remains to be done on dry soil, Arvidson says, “we have to get on with business” while they work the icy-soil problem.


    Successes, Past and Future

    1. Richard A. Kerr

    Now just past the halfway point of its 120-day mission, Phoenix has run into problems handling the martian soil it was sent to analyze (see main text). But it has had its accomplishments, including:

    • A successful landing—All the testing of hardware and software inherited from the ill-fated Mars Polar Lander (Science, 9 May, p. 738) paid off in a perfect arrival for Phoenix, in part because the landing site turned out to be as safe as scientists had predicted.

    • Ice in easy reach—Scientists had deduced from orbital observations and theory that ice would be found anywhere they looked beneath 2 to 6 centimeters of loose soil. Phoenix found it 5 centimeters down on its first try.

    • Instrumentation that works—Both major instrument packages yielded results on their first tries. This martian soil, at least, is alkaline, not acidic as expected, and contains the products of interaction with water, although when and where that interaction occurred remains unknown.

    Nevertheless, the high-profile mission goals remain elusive. Signs that life may have been possible when the ice melted some time in the geologic past would most likely come from the wet-chemistry analyzer. Rumors that such signs have in fact been detected were rife at press time. The “bake test” analyzer can detect organic matter—the remains of life or merely meteoritic debris—but results from the first sample are requiring weeks of analysis.


    Researchers Flock to View Fleeting Display of Solar Corona

    1. Richard Stone

    JINTA, CHINA—With anticipation growing by the second, teams of Chinese scientists in matching T-shirts bearing the logos of their institutions fussed over telescopes, cameras, spectrometers, and other instruments set along the rim of a lake. At 7:14 p.m. Beijing time on 1 August, about an hour after the moon began to slide across the sun's face, the blue sky above this town in western China darkened like a sunset in fast motion. Then totality: The moon blotted out the solar disk, leaving the wispy corona, along with Mercury and Venus, visible to the naked eye.

    For the 110 seconds of total solar eclipse over Jinta, some of the dozens of scientists gathered here snapped photos while others silently took in the ethereal scene. Then the sky brightened. “This is my first time. It was just fantastic,” said Yan Yihua, a solar physicist with National Astronomical Observatories, Chinese Academy of Sciences (NAOC), in Beijing who was recording broadband radio emissions from the corona.

    A total solar eclipse is not just an awesome spectacle—it's also a rare opportunity to observe the corona, a swirling halo of plasma that's a millionth as bright as the solar disk. Such studies have taken on added significance in the wake of recent finds from Hinode, a spacecraft that has brought researchers to the threshold of solving a pair of long-standing enigmas: what impels the solar wind, and what heats the corona to several million kelvin, enormously hotter than the sun's surface (Science, 7 December 2007, p. 1571). “Hinode has shown that the solar atmosphere is much more dynamic than we thought,” says Kazunari Shibata, a solar physicist at Kyoto University in Japan. But just how the corona and its magnetic field are energized is still largely a mystery—one that experiments during a total eclipse could help shed light on.

    China's record of observing solar eclipses dates to roughly 2000 B.C.E. “In ancient China, people venerated the sun. They thought the solar eclipse is unlucky,” says NAOC's Han Yanben. Eclipse observations were vital for checking the calendar, and rulers planned around the unsettling events. For certain officials, failure to observe an eclipse was a grave mistake. During the Xia Dynasty some 4 millennia ago, annals show, court astronomers Xi and He were drunk and missed an eclipse. By law in those times, says Han, they were executed.

    The scientists in Jinta were not under that kind of pressure, but the stakes were nonetheless high to get their measurements right. A group from Yunnan Astronomical Observatory in Kunming had set a 20-centimeter telescope hard up against the lake to minimize thermal noise. Minutes after the eclipse, team leader Liu Zhong was hunched over a laptop in a tent next to the telescope. “We can see fine structure here,” Liu said, pointing to grainy features just above the sun's limb. “I don't know what it is yet—but it's so good!” he exclaimed. Zhang Mei, an NAOC solar physicist, was impressed. “This is why we gave him the best location,” she said.

    Show time.

    Zhang Mei looks on as Liu Zhong (in straw hat) examines early data. Eclipse over Jiayuguan Fort in Gansu Province (top).


    Uphill, three teams were poring over spectral emissions. In a dark-green tent, Bao Xing-Ming and colleagues from NAOC were using a charge-coupled device (CCD) camera and spectrometer to zero in on the near-infrared. Features of the corona's magnetic field can be deduced from these spectra and their polarization, says Zhang. “We know the corona's magnetic field is important in space weather, but we can't really measure it,” she says—except during an eclipse or using a coronagraph that mimics an eclipse. In a nearby tent, Qu Zhongquan's group from Yunnan was analyzing calcium and magnesium spectra to derive coronal density and temperature. Data from dozens of wavelengths should help fill out sketchy processes in the corona and chromosphere, Qu says.

    While spectra were a sure bet, a team from Purple Mountain Observatory in Nanjing was chasing a long shot. Zhao Haibin and colleagues were hoping to be the first in the world to observe vulcanoids, a class of asteroids hypothesized to exist within Mercury's orbit. Zhao's group and a second stationed in Hami, 500 kilometers to the northwest, were each using a CCD camera attached to a 15-centimeter telescope with a large field of view to image space between Mercury and the sun. Spots observed to move on complementary sets of images would be candidate vulcanoids. “The chances are small. We're going to have to get lucky,” says Zhao.

    Alphonse Sterling had good fortune on a different quest. Outside the Chinese scientific compound, strictly off-limits to outsiders (Science was granted access), the NASA solar physicist and two colleagues from the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts—Samaiyah Farid and Antonia Savcheva—were snapping high-resolution photos of coronal plumes: plasma streams that can extend several solar widths from the sun. Hinode and other satellites have obtained sharp views of plumes in extreme ultraviolet and soft x-ray spectral bands. “We want to see what these bad boys look like in white light,” says Sterling, NASA's point man on Hinode. His images in Jinta were looking good, with several plumes clearly visible. Sterling's team plans to line these up with satellite data to look for differences in plumes perceived in other wavelengths.

    For those who weren't lucky this time, there's always next year. A corridor cutting across the heart of China will experience a much longer total eclipse than last week's—nearly 6 minutes in Shanghai—on 22 July 2009. Chinese solar physicists have already picked out a perch near Hangzhou.


    Deciphering the Genetics of Evolution

    1. Elizabeth Pennisi

    Powerful personalities in evolutionary biology have been tussling over how the genome changes to set the stage for evolution.


    Diversity of form.

    Changes in regulatory DNA are implicated, but not always proven, in the evolution of morphological traits from a variety of organisms.


    Sitting quietly in the back of the seminar room, Hopi Hoekstra doesn't stand out as a rabble-rouser. But last year, this young Harvard University evolutionary geneticist struck a nerve when she teamed up with evolutionary biologist Jerry Coyne of the University of Chicago in Illinois to challenge a fashionable idea about the molecular mechanisms that underlie evolutionary change. Egos were bruised. Tempers flared. Journal clubs, coffee breaks at meetings, and blogs are still all abuzz.

    For decades, the conventional wisdom has been that mutations in genes—in particular in their coding regions—provide the grist for natural selection. But some 30 years ago, a few mavericks suggested that shifts in how genes are regulated, rather than alterations in the genes themselves, were key to evolution. This idea has gained momentum in the past decade with the rise of “evo-devo” (Science, 4 July 1997, p. 34), a field born when developmental biologists began to take aim at evolutionary questions. They have proposed that mutations in regulatory DNA called cis elements underlie many morphological innovations—changes in body plans from bat wings to butterfly spots—that allow evolution to proceed. The idea has drawn support from evidence that DNA outside genes—at least some of it cis-regulatory elements—can be crucial to an organism's ability to survive and thrive over the long term.

    Urging caution.

    Harvard's Hopi Hoekstra argues that genetic changes must be adaptive to count as important in evolution.


    The zeal with which some biologists have embraced this so-called cis-regulatory hypothesis rubbed Hoekstra and Coyne the wrong way. In a 2007 commentary in Evolution, they urged caution, arguing that the idea was far from proven. The article sparked a sharp debate, with accusations from both sides that the other was misrepresenting and misinterpreting the literature. “What really got people upset is the tone of the paper,” says Gregory Wray, an evolutionary biologist at Duke University in Durham, North Carolina. A year later, fists are still flying—the latest skirmish took place in May on the Scientific American Web site—and several papers prompted by the debate have just been published.

    Although both sides would agree that cis-regulatory changes and mutations in coding regions of genes themselves probably both play a role in evolutionary change, the debate has become so intense that the middle ground is sometimes lost. Those on the sidelines are calling for patience. “There are strong winds from both directions,” says evolutionary biologist David Kingsley of Stanford University in Palo Alto, California. “There are a handful of tantalizing examples of both coding and regulatory change, but the solution will come when lots of examples are worked out and worked out fully.”

    The heat has fueled more careful looks at the evidence and a push to find more examples of cis-regulatory changes behind evolutionary modifications. It has also stimulated discussions of related ideas about how evolution proceeds in a genome: the role of transcription factors, for example, and whether evolution is predictable, with certain types of changes being caused by mutations within genes and others by alterations in nearby DNA. “I think we are on the threshold of a very exciting time,” says Wray.

    Regulation and evolution

    Early suggestions that gene regulation could be important to evolution came in the 1970s from work by bacterial geneticists showing a link between gene expression and enzyme activity in bacteria. About the same time, Allan Wilson and Mary-Claire King of the University of California, Berkeley, concluded that genes and proteins of chimps and humans are so similar that our bipedal, hairless existence must be the product of changes in when, where, and to what degree those genes and proteins come into play. They had drawn similar conclusions from studies of other mammals, as well as birds and frogs. But the tools to track down the molecular controls on gene expression and protein production didn't yet exist.

    More than 2 decades later, David Stern, a Princeton University evolutionary biologist, was probing the genetic changes that result in hairless fruit fly larvae. Typically, Drosophila melanogaster larvae are covered with microscopic cuticular hairs called trichomes, but not those of a relative called D. sechellia. In 2000, Stern found that mutations in genes were not involved and that changes in the regulation of a gene called shavenbaby were the cause. Sean Carroll of the University of Wisconsin (UW), Madison, saw a similar pattern in his group's studies of pigmentation patterns in fruit flies and in 2005 wrote an influential paper in PLoS Biology that helped convince the field that cis-regulatory changes were central to morphological evolution.

    Carroll argued that mutations in cis regions were a way to soft-pedal evolutionary change. Genes involved in establishing body plans and patterns have such a broad reach—affecting a variety of tissues at multiple stages of development—that mutations in their coding regions can be catastrophic. In contrast, changes in cis elements, several of which typically work in concert to control a particular gene's activity, are likely to have a much more limited effect. Each element serves as a docking site for a particular transcription factor, some of which stimulate gene expression while others inhibit it. This modularity makes possible a vast number of cis-element combinations that finely tune gene activity in time, space, and degree, and any one sequence change is unlikely to be broadly disruptive.

    Data have been accumulating that suggest such regulatory changes are important in evolution. Take sticklebacks. In this fish, marine species have body armor and spines, but freshwater species don't. Four years ago, researchers tracked some of the difference to altered expression patterns in a gene called Pitx1 but found no coding differences in the Pitx1 gene of the two species (Science, 18 June 2004, p. 1736). “There's no doubt there's been a regulatory change,” says Carroll.

    Carroll, his postdoc Benjamin Prud'homme, and their colleagues discovered that closely related fruit flies vary in the pattern of wing spots used in courtship, and they have traced these changes to the regulation of a gene called yellow at the sites of the spots. Multiple cis-element changes—adding a few bases or losing others—have caused spots to disappear and reappear as Drosophila evolved and diversified, they reported in the 20 April 2006 issue of Nature.

    Similarly, Carroll's group reported in the 7 March issue of Cell that various alterations in a cis element controlling a Drosophila gene called tan—which plays a role in pigmentation and vision—underlie the loss of abdominal stripes in a fruit fly called D. santomea. This species diverged from a dark sister species once it settled onto an island off the west coast of Africa less than 500,000 years ago.


    Bat wings, too, may have arisen in part from a change in a cis element regulating a gene, Prx1, involved in limb elongation. Chris Cretekos, now at Idaho State University, Pocatello, and Richard Behringer of the University of Texas M. D. Anderson Cancer Center in Houston isolated this cis element in the short-tailed fruit bat and then substituted it for the mouse version of this regulatory DNA in developing mice. The resulting mice had a different expression pattern of the gene and longer forelimbs than usual, Cretekos, Behringer, and their colleagues reported in January in Genes and Development. The mouse and bat Prx1 protein differs by just two amino acids, which don't seem to affect its function, they note.

    And there are several cases in plants where cis elements have proved important. Teosinte, the ancestor of domesticated corn, sends up multiple stalks, whereas corn grows via a single prominent one. In 2006, John Doebley and his colleagues at UW Madison linked this change to a difference in DNA several thousand bases from a gene called teosinte branched 1, indicating a role for noncoding cis elements in the evolution of corn.

    “When you think about the sort of evolution we're interested in—why is a dog different from a fish—that has to depend on changes in gene regulation,” insists Eric Davidson, a developmental biologist at the California Institute of Technology in Pasadena.

    Where's the beef?

    But Hoekstra and Coyne say this enthusiasm doesn't rest on solid evidence. In their Evolution article, they picked apart these examples and the rationale behind them. They pulled quotes from Carroll's work to criticize his fervor and berated the evo-devo community for charging full speed ahead with the cis-regulatory hypothesis. “Evo devo's enthusiasm for cis-regulatory changes is unfounded and premature,” they wrote. Changes in gene regulation are important, says Hoekstra, but they are not necessarily caused by mutations in cis elements. “They do not have one case where it's really nailed down,” she says.

    Fruit fly fashions.

    Mutations in regulatory DNA help explain species differences, such as abdominal stripes and no stripes (left) and wings with and without spots (above).


    Coyne and Hoekstra accept only cases in which a mutation in a cis element has been demonstrated to modify a particular trait, not just to be correlated with a difference. That's “the big challenge,” says Hoekstra. In the stickleback case, for example, the fact that the marine species expresses Pitx1 where spines develop and the lake species does not—although both have the same unmodified gene—doesn't prove that a cis element is responsible for the difference, Hoekstra and Coyne argue. Even Kingsley, who works on this gene in sticklebacks, agrees that the case isn't airtight. “We still need to find the particular sequence changes responsible for the loss of Pitx1 expression,” he says.

    Furthermore, Hoekstra and Coyne insist that the modified trait must be shown to be beneficial in the long run. Thus, they dismiss the shavenbaby example not only because causative changes in cis-regulatory elements haven't yet been identified but also because no one really knows whether the fine hairs on fruit fly larvae confer a selective advantage. “I'm distressed that Sean Carroll is preaching to the general public that we know how evolution works based on such thin evidence,” Coyne told Science.

    Coyne and Hoekstra also take issue with the notion that morphological changes are unlikely to be caused by mutations in the genes for body plans because those genes play such broad and crucial roles. Similar constraints apply across all genes, they argue. Processes such as gene and genome duplication and alternative splicing can provide room for evolutionary changes by enabling genes to take on new roles while still doing their original jobs, they note.

    They point instead to a large body of evidence indicating that so-called structural changes in protein-coding genes play a central role in evolution. They list 35 examples of such changes—including a mutation in a transcription factor—in a variety of species to bolster their case. They also point out that the small differences between the chimp and human genomes, which led Wilson and King to question whether mutations in coding regions can account for the differences between the species, still add up to plenty of meaningful gene changes—an estimated 60,000. “Adaptation and speciation probably proceed through a combination of cis-regulatory and structural mutations, with a substantial contribution of the latter,” they wrote.

    Mice camouflage.

    Changes in the coding regions of genes underlie the coat color differences between a light, beach-dwelling subspecies of mouse and the brown mainland one.


    Beyond the debate

    Almost as soon as their article appeared, lines were drawn and rebuttals planned. Carroll thought he was misrepresented. “I am not trying to say that regulatory sequence is the most important thing in evolution,” he told Science. But when it comes to what's known about the genetic underpinnings of morphological evolution, “it's a shutout” in favor of cis elements, he asserts. By not accepting that body-plan genes are a special case, Hoekstra and Coyne “muddied clear distinctions that are based on good and growing data,” he charges. Carroll also doesn't buy into the requirement that the new form needs to be shown to result in a selective advantage.

    Günter Wagner, an evolutionary developmental biologist at Yale University, is also critical. “There clearly are well-worked-out examples where microevolutionary changes can be traced back to cis-regulatory changes,” he says. Coyne and Hoekstra were “too harsh.” Other evolutionary biologists grumbled that because the article was an invited perspective it didn't undergo official peer review.

    On the other hand, William Cresko of the University of Oregon, Eugene, thinks it was high time for a reality check. Some researchers, he said, had become “complacent about the data.” Katie Peichel of the Fred Hutchinson Cancer Research Center in Seattle, Washington, agrees: The cis-regulatory hypothesis got “taken up without [researchers] realizing there are nuances. We haven't solved morphological evolution.”

    In spite of the intense rhetoric, the debate has had at least some humorous moments. At the IGERT Symposium on Evolution, Development, and Genomics in Eugene, Oregon, in April, Wray—who concluded in a March 2007 Nature Reviews Genetics piece that cis regulation was, for certain genes, more important than structural changes—and Coyne shared center stage as the keynote speakers. Coyne's title was “Give me just one cis-regulatory mutation and I'll shut up,” and he wore a T-shirt that said “I'm no CIS-sy.” Wray's T-shirt said “Exon, schmexon!” suggesting that coding regions, or exons, didn't matter all that much. (Carroll couldn't make it to the meeting.) Yet in May, Carroll and “I'm no CIS-sy” faced off online on the Scientific American comments page.

    On the positive side, the dispute has stimulated some new research. Rather than ask which type of change is more important, for example, Wray is examining whether there are any patterns in the types of mutations that are associated with different types of genes. He has scanned the human, chimp, and macaque genomes for regions that are positively selected in each species, looking for stretches conserved in two of the species but much changed in the third. He kept track of whether the region is coding or noncoding and determined which genes are involved. This computer study gives a sense of what kinds of mutations are important in the evolution of various types of genes but does not tie specific sequence changes to particular altered traits. At the IGERT meeting, he reported that genes related to immune responses and basic cell signaling have evolved primarily through mutations in coding regions. In contrast, changes in noncoding, regulatory DNA predominated for genes important for development and metabolism.
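    The comparative logic Wray describes—flagging genomic regions conserved in two species but substantially diverged in the third, then tallying coding versus noncoding hits—can be sketched in miniature. This is a hypothetical illustration only: the function names, divergence values, and threshold are invented, not Wray's actual pipeline or data.

```python
from collections import Counter

def classify_region(div_human, div_chimp, div_macaque, threshold=0.05):
    """Return the lineage-specific outlier if exactly one species'
    divergence exceeds the threshold while the other two stay conserved;
    otherwise return None (no lineage-specific change to report)."""
    divs = {"human": div_human, "chimp": div_chimp, "macaque": div_macaque}
    changed = [species for species, d in divs.items() if d > threshold]
    return changed[0] if len(changed) == 1 else None

# Toy aligned regions: (divergence in human, chimp, macaque, is_coding)
regions = [
    (0.12, 0.01, 0.02, True),    # changed only in human, coding
    (0.01, 0.02, 0.15, False),   # changed only in macaque, noncoding
    (0.01, 0.01, 0.02, True),    # conserved everywhere: no call
]

# Tally lineage-specific candidates by coding vs. noncoding status
tally = Counter()
for dh, dc, dm, is_coding in regions:
    species = classify_region(dh, dc, dm)
    if species:
        tally[(species, "coding" if is_coding else "noncoding")] += 1

print(dict(tally))
# {('human', 'coding'): 1, ('macaque', 'noncoding'): 1}
```

    A real scan would of course work from genome alignments and control for mutation-rate variation; the point here is only the shape of the bookkeeping—one call per region, binned by lineage and by coding status.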

    Stern has gone a step further. After looking at Hoekstra and Coyne's paper, he and Virginie Orgogozo of the Université Pierre et Marie Curie in Paris did a comprehensive literature survey to ferret out any evolutionarily important mutations, dividing them according to whether they affected physiology (building muscle cells or mediating nerve cell transmissions, for example) or morphology—affecting body plan development. Unlike Hoekstra and Coyne, they included data on domesticated species and didn't demand that the change be clearly adaptive. Overall, cis-regulatory changes represented 22% of the 331 mutations cataloged. However, in comparisons between species, cis-regulatory mutations caused about 75% of the morphological evolution, they report in an article in press in Evolution. The data indicate that both types of changes affect both types of traits, with cis-regulatory ones being more likely for morphological trait changes between species, Stern says.

    Friendly fight.

    Keynote speakers Greg Wray (left) and Jerry Coyne promoted their take on the genetic basis of evolution with custom T-shirts.


    Yet even these data are inconclusive, Stern warns. Because developmental biologists focus on expression patterns, and physiologists on the proteins themselves, the former tend to find regulatory changes and the latter, coding-region alterations, potentially biasing which trait depends on which type of mutation.

    Also, coding changes are more likely to be identified than changes in regulatory regions in part because once a gene is linked to a trait it is easy to assay for mutations there. “It's like shooting fish in a barrel,” says Carroll. In contrast, regulatory DNA is harder to pin down. It can be close to or far from the gene itself, and a given gene could have several regulatory elements, any one of which might have the causal mutation. Thus the numbers may be misleading, a point also made by Hoekstra and Coyne. “It's really difficult to say that one's going to be more important than the other,” says Stern. But it's clear that cis regulation is important, he adds. “I really want to emphasize that evo-devo [researchers] haven't come to this way of thinking simply through storytelling. We came to it through the data.”

    To complicate matters further, mutations in coding regions can themselves alter gene regulation. As part of their take on the debate, Wagner and Yale colleague Vincent Lynch make the case in an article published online on 22 May in Trends in Ecology & Evolution that mutations in transcription factors can lead to evolutionarily relevant modifications in gene expression. For example, variations in a repetitive region of the gene Alx-4—which codes for a transcription factor important for toe development—can alter expression patterns and change body plan in dogs. Great Pyrenees are missing 17 amino acids in this region compared with other dog breeds, and these 45-kilogram pooches have an extra toe that other breeds lack. “This is an important part of gene regulatory evolution,” says Wagner.

    Researchers are also trying to figure out where noncoding RNAs fit in, how gene duplications make way for change, and what roles even transposons and other repetitive DNA may play. “The important question is about finding out whether there are principles that will allow us to predict the most likely paths of change for a specific trait or situation,” says Patricia Wittkopp of the University of Michigan, Ann Arbor.

    With so much unknown, “we don't want to spend our time bickering,” says Wray. He and others worry that Hoekstra, Coyne, and Carroll have taken too hard a line and backed themselves into opposite corners. Coyne doesn't seem to mind the fuss, but Hoekstra is more circumspect about their Evolution paper. “I stand by the science absolutely,” she says. “But if I did it over again, I would probably tone down the language.”


    Industrial-Style Screening Meets Academic Biology

    1. Jocelyn Kaiser

    A $100-million-a-year effort to find chemicals for exploring cellular processes and drug discovery is about to move into production; skeptics say it is struggling to meet its goals.


    Parasitologist David Williams has spent his career studying Schistosoma, a type of snail-borne worm that kills 280,000 people a year in the tropics and leaves millions more with chronic liver and intestinal problems. By 2005, he had found a possible target for a drug—an enzyme the parasite requires for survival. But he had no easy way to find a molecule that would block it. Then he learned that the U.S. National Institutes of Health (NIH) was inviting researchers to submit material to be tested against a huge number of chemicals to find “hits,” or biological interactions. Williams applied, was accepted, and last April, he and collaborators published the results in Nature Medicine: After screening 71,000 compounds, they found one, Compound 9, that inhibits the enzyme and killed at least 90% of the worms in schistosome-infected mice.

    Williams is now seeking funds to develop it as a drug. “It would be pretty exciting if we could get something that would be effective for schistosomiasis,” a disease whose devastation he first witnessed as a Peace Corps volunteer in Ghana, he says. The worm is beginning to show resistance to the existing drug, and a better drug is needed.

    The schistosomiasis story has been touted as one of the first successes of a costly, controversial NIH program announced 5 years ago called the Molecular Libraries Initiative (MLI). It aims to bring so-called high-throughput screening, once reserved for big pharmaceutical companies, to academic scientists. Its specific goals are to develop probes for exploring cell function—small molecules that bind to protein targets—and to help find treatments for diseases that don't interest big pharma. NIH says the program, now ending a 5-year, $385 million pilot stage, has begun to pay off. Ten screening centers have produced more than 60 research probes, including a few potential drug leads. This month, NIH will move into full-scale production with grants to three large centers.

    The libraries project also has a side benefit, proponents say: It has spurred scores of universities to set up their own small-molecule screening facilities (see sidebar, p. 766). “Virtually every major medical school in the country” is jumping aboard in some way, says pharmacologist Bryan Roth of the University of North Carolina (UNC), Chapel Hill.

    Yet even boosters of MLI acknowledge that this program, now costing more than $100 million per year, is still an experiment—and still struggling. The screening centers took longer than expected to set up, and some were more successful than others. MLI leaders have had trouble defining certain goals, such as how strongly a compound must bind to its target to work well as a probe. NIH's original plan for sharing results has also faltered.

    As the program expands, the research community remains deeply divided about it. Believers say it is generating a valuable trove of shared data and bringing rigor to the hunt for new medicines and biochemical probes. The skeptics, including several prominent drug industry leaders, aren't convinced this is a wise use of NIH's tight budget. Some worry that it may be too diffuse. It may be “a worthwhile thing to do,” says Steven Paul, executive vice president for science and technology at Eli Lilly and Co. in Indianapolis, Indiana. But he asks: “Is it realistic, and is it cost effective? How potent and selective are these probes?” The answers may not become clear, some say, until nearly a billion dollars has been spent.


    NIH's molecular screening program has produced research probes and potential drug leads for several rare or neglected diseases.



    Inside a nondescript building off a busy road in Rockville, Maryland's, biotech corridor, neurogeneticist Christopher Austin presides over the NIH Chemical Genomics Center (NCGC)—a 50-person intramural version of the 3-year pilot screening centers NIH funded at nine external sites. At its heart is a quiet room in which three state-of-the-art yellow robots are hard at work processing biological assays. They fetch plates that are each dotted with 1536 tiny wells of different small organic molecules, mix in a protein or cell solution, then run each plate through a detector that spots whether any of the chemicals on it has triggered some change in the protein or cells. In another room, medicinal chemists tweak these “hits” to improve the strength and specificity of the interaction.

    Although drug companies have long relied on such high-throughput screening, “this is not a world that most academic [biologists] have been in,” says Austin, a former Merck researcher who says he often feels like a John the Baptist, bringing small-molecule screening to academia. The time is right for this evangelism, say Austin and other NIH officials. The explosion in genomics launched by the Human Genome Project has revealed a wealth of proteins whose functions are unknown. Some are involved in disease processes. Advances in robotics have brought down costs, making it feasible for university labs to screen a protein against hundreds of thousands of compounds, looking for one that interacts with it. That compound could then be developed into a probe that researchers would use to disrupt a protein's action or explore a cell pathway. Some, such as the schistosomiasis project, might also generate new drug leads for a tiny fraction of the overall cost of drug development (see timeline).

    In 2004, leaders described their plan to set up a huge central repository of 500,000 compounds that all centers would use for such screening (Science, 12 November 2004, p. 1138). They said that any biologist could propose screening a candidate protein, cell-based test, or even a novel assay based on a whole organism. The assay would then be peer reviewed and, if accepted, assigned to a screening center. Compounds that bind to the protein or modulate cell activity would be chemically modified until potent enough to work in a test tube but not necessarily in animals. The resulting probes would be “made available without encumbrance to all researchers,” that is, without intellectual-property restrictions, Austin and other NIH leaders wrote.

    MLI debuted in 2003 as the largest piece of NIH's Roadmap, a set of cross-institute initiatives. Some researchers argued that such top-down projects siphon funds from investigator-initiated science. But NIH Director Elias Zerhouni described it as a boost for basic research in his 2004 budget request to Congress, saying it would “help accelerate researchers' ability to prove the function of the complex biological circuits … in normal function and disease.”

    The start-up was slow. Equipping 10 academic centers to screen molecules entailed “a huge learning curve,” acknowledges Carson Loomis, MLI program co-director. Initially, NIH hoped the scale-up would be similar to creating the first genome sequencing centers, he says. But high-throughput screening is not as straightforward. Centers wrestled with balky robotics equipment and chemicals that degraded. They soon realized that most of the biological assays would require many modifications to work properly when screened. They also faced the challenge of merging two cultures—biologists and chemists—and getting them to work together on a product, not hypothesis-driven research. “That interface is not a smooth one automatically,” says Ray Dingledine, director of the center at Emory University in Atlanta, Georgia, and chair of the screening network.

    Another challenge has been creating the small-molecule repository itself. NIH deliberately chose a wider range of chemicals than would be standard in the drug industry to make sure nothing was overlooked. But many proved “worthless” in the screens, and the ones that panned out turned out to be pretty similar to what industry would have chosen, says Christopher Lipinski, a former Pfizer chemist renowned for his skill in predicting what works as an oral drug. NIH's Linda Brady, who helped launch MLI, says the repository is growing and has improved—“I haven't heard [the term] ‘junk’ in a long time,” she says.

    Filling a gap.

    NIH says that research probes developed through its Molecular Libraries Initiative could help fill the pipeline of potential drug leads, boosting research in early stages when costs are low.

    One continuing debate centers on how to define an acceptable “research probe.” NIH wanted the probes to be potent and selective enough to work in vitro—but no more developed than that—so that MLI participants would feel comfortable sharing raw data and forgoing patents. “There's lots of debate about where that bar ought to be,” says medicinal chemist R. Kiplin Guy of St. Jude Children's Research Hospital in Memphis, Tennessee. NIH ended up loosening its original cutoffs for potency and selectivity; now it's largely up to the center to decide when a probe is complete. That has resulted in variable quality and made some centers appear more productive than others, says one center director.
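    The cutoff debate Guy describes boils down to a simple gating rule: a hit must be potent enough and selective enough before it counts as a probe. The sketch below is purely illustrative—the thresholds and compound data are invented, and NIH's actual (later loosened) criteria varied by target class and were ultimately left to each center.

```python
def qualifies_as_probe(ic50_nm, selectivity_fold,
                       max_ic50_nm=100.0, min_selectivity=10.0):
    """A hit qualifies if it is potent (IC50 in nM at or below the cutoff)
    and selective (fold-preference for its target over related proteins).
    Thresholds are hypothetical placeholders, not NIH's real criteria."""
    return ic50_nm <= max_ic50_nm and selectivity_fold >= min_selectivity

# Toy screening hits: name -> (IC50 in nM, selectivity fold)
hits = {
    "compound_A": (40.0, 30.0),   # potent and selective -> probe
    "compound_B": (40.0, 2.0),    # potent but promiscuous -> rejected
    "compound_C": (800.0, 50.0),  # selective but too weak -> rejected
}

probes = [name for name, (ic50, sel) in hits.items()
          if qualifies_as_probe(ic50, sel)]
print(probes)  # ['compound_A']
```

    Moving either default—say, relaxing `max_ic50_nm` from 100 to 1000 nM—changes which compounds pass, which is exactly why center-by-center discretion produced probes of variable quality.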

    On a mission.

    Christopher Austin, leader of NIH's screening center, hopes academics will discover the value of small molecules.


    Despite the bumps, the 10-center pilot network has screened nearly 200 biological assays (far short of the projected 400) and produced 62 probes. Among these are the schistosomiasis compound; a potential drug lead for treating Gaucher disease, a rare metabolic disorder; a molecule for exploring potassium channel receptors; and probes that have shed light on the function of a new estrogen receptor. “Every center has produced at least a couple of interesting compounds,” says Brady, although three—the intramural NCGC (which began a year earlier), the Scripps Research Institute's branch in Florida, and the Burnham Institute for Medical Research in San Diego, California—have produced the majority.

    Missing bridges

    NIH's plan for informing the broader community about these probes hasn't worked as well, however. MLI screeners must deposit screening results in PubChem, a database created as part of MLI. But these raw data reports aren't easy to use and often contain mistakes because the data aren't curated, Lipinski says. NIH initially asked centers to post online “probe reports,” Loomis says, but took them down when journal editors complained that they were too similar to submitted papers. NIH plans to require centers to post reports after a 6-month delay.

    In the meantime, at Science's request, NIH produced its first-ever table of completed probes. Both the total number and details of this list drew a lukewarm response from two industry experts. Some of them look “very good,” says Stephen Frye, a medicinal chemist who left GlaxoSmithKline (GSK) last year for UNC, such as a measles virus inhibitor and probes for studying S1P receptors, which are involved in sepsis. Others, however, are not very potent, he noted. Alan Palkowitz, head of medicinal chemistry at Eli Lilly, says that, based on their structures, he believes up to one-third of the probes might reflect spurious activity in the screens or be problematic for other reasons. He sees mostly “potential starting points” for useful probes. At the same time, both praised the list of 200-some submitted assays as including some innovative contributions such as zebrafish screens and tests of signaling pathways.

    Arguments about quality aside, the true test of MLI will be if the broader community orders probes and starts publishing papers using them, notes Paul of Eli Lilly. However, that test may not come soon. Researchers may not have ready access to the compounds, which are often not available off the shelf. NIH is relying on center investigators to provide small amounts to the community but is not yet tracking requests in a systematic way, says Loomis. He adds, however, that a growing number of citations suggests that some probes are being used widely.

    Some industry leaders question whether this massive effort is worth the time and money. If the goal is to study gene function, there are easier ways, says Peter Kim, president of Merck Research Laboratories, such as using RNAi to block gene expression and monoclonal antibodies to inhibit proteins. Small molecules are best for testing in vivo hypotheses that can lead to potential therapies, he and others say. For this, the probes usually need to be optimized to function in animals. But MLI doesn't plan to fund in vivo studies. And, says Peter Schultz of Scripps in San Diego, if academics try to do it on their own, they may face the need for the extensive medicinal chemistry and pharmacology of drug discovery. “I don't want to say the community has been swindled, but [creating selective in vivo agents is] a lot harder than it appears,” says Schultz, who also oversees drug discovery as head of the Genomics Institute of the Novartis Research Foundation. (He is not involved with the Florida screening center.)

    First fruits.

    About $385 million spent for pilot screening centers, a compound repository, a database, and technology has yielded 62 molecular probes.


    MLI's leaders are used to defending against such criticism. They say small molecules are uniquely useful because they modulate the target protein directly, rather than through its gene, and can have subtle effects. “It's critical to have tools that act at the level that Mother Nature does,” says Austin.

    Growing investment

    Despite the skepticism, reviewers who examined MLI in early 2006 concluded that it showed enough promise to continue. A project this ambitious may need 10 years to prove itself, says chemist Catherine Peishoff of GSK. “To say it's a success or failure would be unfair at this point,” she says.

    This month, NIH will move into what it calls “full-scale production” by funding three “comprehensive” centers for up to 6 years that will each screen 25 assays a year and have larger staffs of chemists to improve the hits. (NIH also plans to work with chemical vendors to make the probes available.) The top contenders for full-scale awards appear to be the intramural center; Scripps of Florida; Burnham; and the Broad Institute at Harvard University, which until now has had separate NIH funding for high-throughput screening. A handful of smaller centers will work on specialized screens or chemistry.

    It may be expensive and risky, but MLI is important because many drug companies are abandoning high-throughput screening and shedding chemists, argues Frye, whose division at GSK was dissolved in 2007. “If the NIH doesn't pull this off, I think it's a big step backwards for drug discovery,” he says.

    Guy says its value will become clear over time: “It's true that people are relearning a lot of lessons,” but now the data will be formally tested and widely shared. Guy says that, like the Human Genome Project, the results will be a vast expansion in public knowledge about biological systems, including targets that companies wouldn't touch before.


    Universities Join the Screening Bandwagon

    1. Jocelyn Kaiser


    Once shunned as too costly and industrial, high-throughput screening is becoming a hot activity at universities. An international directory put together by the Society for Biomolecular Sciences lists 55 academic molecular screening centers—some large, some small—often paid for by a university's own budget as part of a drug-discovery program.

    Unlike the screening centers funded by the U.S. National Institutes of Health (NIH) (see main text), many of these facilities lack chemists to do the tweaking required to verify a “hit”—an interaction between a chemical and a protein target—and improve the strength and specificity of the interaction. Only a few schools even have a medicinal chemistry department, says Christopher Lipinski, a retired Pfizer chemist.

    Some observers say this weakness shows up in talks and papers from the new screening programs. There's a “blind spot” in academia, says Edward Spack of SRI International in Menlo Park, California: “They'll get a hit, but then many can't optimize it.” Ross Stein estimates that more than 10% of the hits he sees reported in journals are false positives. “There's a lot of junk in the literature,” says Stein, director of drug discovery at the Harvard NeuroDiscovery Center.

    Even if academics come up with a potential therapeutic molecule, a big unknown is who will take it forward. With pharma laying off employees, and venture capital for biotechs drying up, a drug lead may have to get through preclinical animal studies before a company will pick it up, says Stein. At Merck, “a whole building of people” worked on that, says neurogeneticist Christopher Austin, a former Merck staffer who heads NIH's intramural screening center. Universities have no equivalent.

    But would-be drug developers in academia note that, as part of a new push for translational research, NIH, the Wellcome Trust in the U.K., and other foundations are giving investigators money to contract out steps such as animal testing and medicinal chemistry. “If the target is important and the molecule is important, we will find a way to move it along,” says molecular pharmacologist David Scheinberg of Memorial Sloan-Kettering Cancer Center in New York City.

    Despite the gaps, small-molecule screening in academia is here to stay, say supporters of the approach. But there will be a shakeout. “People will either learn and get better, or they will not survive,” says pharmacologist P. Jeffrey Conn of Vanderbilt University in Nashville, Tennessee.


    Can the Vaquita Be Saved?

    1. Virginia Morell

    Scientists are embarking on a last-ditch effort to help the world's most endangered marine mammal avoid the fate of its Chinese cousin, the baiji.


    On the edge.

    Vaquitas are vanishing quickly, due to gillnets that entangle them.


    On a recent June morning aboard the Koipai Yu-Xa, a research vessel plying the Gulf of California near San Felipe, Mexico, marine biologist Barbara Taylor let out a whoop of joy. The cruise was the first shakedown test of a special acoustic device, the T-POD, developed by an engineer in England. And he had just sent Taylor an e-mail message with the news she was most hoping for: “… the T-POD is full of lovely porpoise data.” That meant the shy vaquitas, or Gulf of California harbor porpoises, still swim in enough numbers to be found via their calls. The cruise is part of a new effort to save the smallest cetacean from the fate of its Chinese cousin, the baiji: extinction.

    The baiji (Lipotes vexillifer) was the first cetacean to succumb to human pressures, and many fear that the vaquita, Phocoena sinus, will soon be number two. “We had no idea if we would detect any vaquitas,” because only an estimated 150 remain, says Taylor of the Southwest Fisheries Science Center in San Diego, California, chief U.S. scientist on the Vaquita Expedition 2008.

    Scientists and the Mexican government are working on the animals' behalf. Late this summer, Mexico will launch a plan to restrict the use of gillnets that kill the vaquitas. Then in October, a full-tilt 2-month international scientific expedition will gather baseline data for what researchers hope will be the porpoises' eventual recovery. “The situation is dire,” says Taylor. “The vaquita has only a few years left before it goes the way of the baiji.”

    The 1.5-meter-long vaquita has been known to science only since 1958, when three skulls were found on a beach. The porpoises—whose markings look like “mascara and lipstick,” says Taylor—live solely in the northernmost part of the gulf. At the time of discovery, they are thought to have numbered in the low thousands. But every year, 20 to 30 vaquitas get caught in gillnets and drown. Heavy fishing and trawling for everything from shrimp to shark has sent them into a perilous decline, says Armando Jaramillo-Legorreta, a marine biologist at Mexico's Instituto Nacional de Ecología in Ensenada, who with Taylor and other scientists authored a 2007 study in Conservation Biology calling for “immediate action, not more data.” Numerous efforts are already under way by a bevy of environmental organizations, and the vaquita is on both the U.S. and Mexican endangered species lists as well as the International Union for Conservation of Nature's Red List of species in critical danger of extinction. Small portions of its watery home are protected as Biosphere and Vaquita Reserves. But so far, nothing has worked.

    Scientists last surveyed the vaquitas in 1997, counting 567. Using a model that tracks birth and death rates and fishing activity, Jaramillo-Legorreta and other scientists came up with the current estimate of 150. Because the little porpoises are difficult to spot in the murky waters they frequent, the best way to find them is with acoustic devices, says Jaramillo-Legorreta, who is also chief acoustical operator on the expedition.
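The rate of decline implied by those two figures is easy to check. The sketch below is back-of-the-envelope arithmetic, not the scientists' actual model; the 11-year span (1997 survey to the present estimate) is an assumption inferred from the article's dates:

```python
# Average annual decline implied by the 1997 survey count (567)
# and the current model-based estimate (~150), over roughly 11 years.
n_1997, n_now, years = 567, 150, 11
annual_factor = (n_now / n_1997) ** (1 / years)
decline_pct = (1 - annual_factor) * 100
print(f"~{decline_pct:.0f}% of the population lost per year")   # about 11%
```

An 11% annual loss dwarfs the assumed 4% maximum growth rate cited later in the piece, which is why removing gillnets, rather than waiting for recovery, dominates the plan.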

    Scientists “need to determine if the vaquitas' numbers are increasing or decreasing. To even be able to say that would be a major scientific advance,” says Taylor, noting that the vaquitas are so timid that they have only been seen at long distances (a sighting at 900 meters is considered close) and never captured alive.

    “Fortunately, we're now seeing the best efforts ever from the Mexican government to save the vaquita,” says Jaramillo-Legorreta. The new plan, the Action Program for the Conservation of the Species Vaquita, was hammered out with fishers over the past 4 years. It calls for buying out boats and helping fishers start new businesses such as ecotourism, replacing gillnets with other gear, or compensating fishers for staying out of prime vaquita territory. The government is already pressing ahead, having allocated nearly $20 million for the purpose, says Luis Fueyo, the coordinator of the vaquita program in Mexico's Ministry of the Environment in Mexico City. “There are 750 licensed fishing boats” in the three main towns near the reserve, says Fueyo, “and we've purchased 308 licenses, representing 247 boats.” Another 52 fishers are switching their gillnets; the remaining 451 boats will not fish in 1200 square kilometers of core vaquita habitat. There are, of course, illegal fishers in the Gulf, making law enforcement a top priority, says Fueyo.

    “The nets have to come out of the water—forever,” says Taylor. “That is the only way to save the vaquita. It is a huge challenge and an enforcement nightmare, but [it's] the only way.” The vaquitas' maximum population growth rate is assumed to be like that of other porpoises, only 4% a year, she says. With the vaquitas' numbers so abysmally low, any growth “will be hard to detect,” even with the sophisticated upcoming survey, says Taylor. She adds that the numbers may also be “politically difficult. It's hard for politicians to say that six additional vaquitas a year is good news.” But for Taylor and the other Vaquita Expedition scientists, some of whom listened in vain for the sound of a baiji in 2006, six new vaquitas would be worth many a whoop of joy.

    Problem Solved* (*sort of)

    1. Robert F. Service

    Researchers have toiled for decades to understand how floppy chains of amino acids fold into functional proteins. Learning many of those rules has brought them to the verge of being able to make predictions about proteins they haven't even discovered.


    Big picture.

    Experimental data helped computer modelers nail down the structure of the nuclear pore complex.


    In 1961, Christian Anfinsen, a biochemist at the U.S. National Institutes of Health, saw something that continues to perplex and inspire researchers to this day. Anfinsen was studying an RNA-chewing protein called ribonuclease (RNase). Like all proteins, RNase is made from a long string of building blocks called amino acids that fold up into a particular three-dimensional (3D) shape to give RNase its chemical abilities.

    Anfinsen raised the temperature of his protein, causing it to unravel into a spaghettilike string. When he cooled it back down again, the protein automatically refolded itself into its normal 3D shape. The implication: Proteins aren't folded by some external cellular machine. Rather, the subtle chemical push and pull between amino acids tugs proteins into their 3D shapes. But how? Anfinsen's insights helped earn him a share of the 1972 Nobel Prize in chemistry—and laid the foundation for one of biology's grand challenges.

    With an astronomical number of ways those chains of amino acids can potentially fold up, solving that challenge has long seemed beyond hope. But now many experts agree that key questions have been answered. Some even assert that the most daunting part of the problem—predicting the structure of unknown proteins—is now within reach, thanks to the inexorable improvements in computers and computer networks. “What was called the protein-folding problem 20 years ago is solved,” says Peter Wolynes, a chemist and protein-folding expert at the University of California, San Diego.

    Most researchers won't go quite that far. David Baker of the University of Washington, Seattle, believes that such notions are “dangerous” and could undermine interest in the field. But all agree that long-standing obstacles are beginning to fall. “The field has made huge progress,” says Ken Dill, a biophysicist at the University of California, San Francisco (UCSF).

    The work has huge implications for medicine. Misfolded proteins lie at the heart of numerous diseases, including Alzheimer's and Creutzfeldt-Jakob disease. Understanding how proteins fold could shed light on why they sometimes misfold and could suggest ways to intervene. Accurate protein models can also lead to the development of more-conventional drugs that block or enhance the work of key proteins in the body.

    Twin challenges

    Today, the protein-folding challenge boils down to two separate but related questions. First, what general rules govern how, and how quickly, proteins fold? Second, can researchers predict the 3D shape that an unknown protein will adopt?

    These simple questions open the door to a world of mind-boggling complexity. Because two neighboring amino acids can bind to each other at any one of three different angles, a simple protein with 100 amino acids can fold in 3^200 different ways. Somehow, a folding protein sorts through all those possibilities to find the correct, or “native,” conformation.

    And it's not by trial and error. Even if a folding protein could try out one different conformation every quadrillionth of a second, it would still take 10^80 seconds—60 orders of magnitude longer than the age of the universe—to find the right solution. Because most proteins fold in milliseconds to seconds, something else is clearly going on.
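The arithmetic behind the paradox is simple to reproduce. This sketch assumes the standard Levinthal setup of roughly three angles for each of a 100-residue chain's ~200 rotatable backbone bonds (two per residue), sampled at one conformation per quadrillionth of a second:

```python
import math

conformations = 3 ** 200              # ~2.7e95 possible folds
trials_per_second = 1e15              # one trial per quadrillionth of a second
seconds_needed = conformations / trials_per_second

age_of_universe = 4.3e17              # seconds (~13.7 billion years)
excess = math.log10(seconds_needed / age_of_universe)

print(f"{seconds_needed:.1e} s to search exhaustively, "
      f"~{excess:.0f} orders of magnitude beyond the universe's age")
```

The exact exponents depend on the assumptions, but any reasonable choice leaves exhaustive search hopeless by dozens of orders of magnitude.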

    Molecular biologist Cyrus Levinthal pointed out this paradox in 1969 and concluded that proteins don't follow a random set of wiggles to find their native conformation. But figuring out what path they do take hasn't been easy. Early on, researchers largely assumed that a protein follows a set path as it folds, wending its way through certain intermediate states as it coils up into its particular arrangement of helixes and sheets and so on. But in the mid-1980s, Wolynes and Dill suggested that proteins, rather than working like origami—in which one fold in a sheet of paper leads to the next until the final shape is reached—actually break the problem down into many pieces. Local clusters, each containing a handful of amino acids, initially pull and repel one another. As these clusters begin to fold, neighboring clusters come together, and so on.

    To explain this process, Wolynes and Joseph Bryngelson, both then at the University of Illinois, Urbana-Champaign (UIUC), suggested that as proteins fold they follow an energy landscape, akin to water flowing downhill. The result is a more energetically stable arrangement. Dill pushed the same notion and later came up with an image of an energy funnel, showing how proteins can follow many possible different pathways to their native conformation at the bottom of the funnel (see figure, below).

    Dill's funnel explained how proteins could avoid Levinthal's paradox and fold quickly. It also led to a testable hypothesis. The time it takes for a protein to fold depends on the energetic obstacles in its path. With fewer amino acids, most small proteins fold more quickly than larger ones that can get caught up on energetic plateaus before finding another downhill run. If the simpler-is-faster rule were true, researchers realized, then it should be possible to make some proteins fold faster by mutating the amino acids that slowed things down.

    In 2007, Martin Gruebele, a chemist at UIUC, and colleagues set a record for such streamlining when they tracked the folding of both native and mutated conformations of a protein called λ repressor. After making their proteins, Gruebele's team cooled them down to unravel them and then zapped them with a laser. The nanosecond burst of heat caused the proteins to begin refolding, which the Illinois team could watch by tracking their fluorescence. Certain mutations enabled the protein to fold in just 2.5 microseconds, 200 times faster than the natural protein does.

    Such mutations, however, often disrupt the protein's chemical function. The reason, Gruebele says, is that in most proteins, hydrophobic amino acids tend to shield themselves from interacting with water—an energetically favorable arrangement—by nestling in the center. Charged, or polar, amino acids by contrast tend to stick out into the water that surrounds proteins. These groups tend to be more chemically active and commonly play key roles in the protein's reactive center. And as proteins wiggle into shape, the polar interactions are often slower to make their adjustments. Changing some of those amino acids speeds things up but alters the chemistry. “They evolved to do a job, not to fold fast,” Gruebele says.

    The next step

    Thanks to this and many other related studies, Andrej Sali, a biochemist and protein modeler at UCSF, says that most protein-folding experts now believe they understand the general rules for how proteins fold and how they fold so quickly. But the next step—predicting how a specific set of amino acids will fold—remains a much bigger challenge. “We have not been able to transfer our conceptual understanding into [a] prediction of how specific amino acid sequences will fold,” Sali says.

    There have been some successes. Every 2 years since 1994, for example, computer modelers have vied to determine the 3D structure of an unknown amino acid sequence in a competition known as the Critical Assessment of Techniques for Protein Structure Prediction. At first, only about half of the modelers came close to predicting the structures of moderately difficult target proteins (see figure, below). In 2006, however, 80% did. Most of the predictions still can't match the resolution of an x-ray crystal structure, which can pinpoint the position of atoms down to a couple of tenths of a nanometer. Nevertheless, Dill says, “it's gotten to the point where if you have a reasonably small protein, you can get a good structure.”

    Steady rise.

    Computer modelers have slowly but steadily improved the accuracy of the protein-folding models.


    Computer models for predicting protein structures come in many varieties. But they generally fall into two camps: Ab initio models start by specifying the attractive and repulsive forces on each atom and then calculate a structure by cranking through the calculations until they find the lowest energy state. Homology models, by contrast, make their predictions by comparing the target protein with proteins with closely related sequences whose structures are already known. More powerful computers and search algorithms have recently given ab initio models a major boost, but Dill says homology models still hold the upper hand.
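To make the ab initio idea concrete, here is a deliberately toy sketch. The three-angle alphabet and the pairwise energy function are invented stand-ins, not a real force field; the point is the shape of the computation: score every conformation and keep the lowest-energy one, the brute-force search that becomes impossible at realistic chain lengths.

```python
from itertools import product

ANGLES = (-60, 60, 180)            # three allowed angles per bond (toy alphabet)

def toy_energy(conf):
    # Invented pairwise "force field": rewards adjacent bonds whose
    # angles cancel. Real ab initio models use physical energy terms.
    return sum(abs(a + b) for a, b in zip(conf, conf[1:]))

# Brute-force search over all 3^8 = 6,561 conformations of an 8-bond chain.
all_confs = list(product(ANGLES, repeat=8))
native = min(all_confs, key=toy_energy)

print(len(all_confs), "conformations searched")        # 6561
print("lowest-energy conformation:", native)
```

Each extra bond multiplies the search space by three, which is why real models lean on clever sampling and the energy-funnel picture rather than exhaustive enumeration.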

    The accuracy of those predictions depends on a model's resolution (whether it aims to map the position of individual atoms or just of individual amino acids) and how thoroughly it samples the energy landscape to find the lowest energy configuration. As a result, modelers face a tradeoff. Increase the resolution by mapping out all the atoms, and you limit the amount of sampling a computer, or network, is able to carry out. Increase the sampling rate, and you limit the resolution.

    To simplify the computations, some researchers bolster their computer models with experimental data that narrow the search for the protein's lowest energy configuration. In the 25 March issue of the Proceedings of the National Academy of Sciences, for example, Baker and 15 colleagues in the United States and Canada described a new technique for using nuclear magnetic resonance (NMR) data to boost the speed and accuracy of protein simulations with atomic resolution (see figure, below).

    Like x-ray crystallography, NMR has long been used to map proteins in atomic detail. But the technique typically works only with small proteins. It usually requires taking at least two separate types of NMR data, an easily acquired data set known as the chemical shifts and a much slower technique called the Nuclear Overhauser effect (NOE).

    In their new work, Baker and his many colleagues dispensed with NOE data and fed chemical shift data for 16 proteins into a computer prediction program known as ROSETTA. The resulting atomic-scale models closely resembled structures previously solved by either NMR or x-ray crystallography. As a control experiment, the researchers also solved the structures for nine proteins for which the NMR and x-ray structures were still being worked out. Those results, too, ended up in tight agreement. “Our joint hope is by combining our methods with NMR data we can work to larger and larger proteins,” Baker says. He and his colleagues are adding other types of experimental data, such as data from lower resolution electron cryomicroscopy and from mutation experiments that highlight amino acids sitting next to each other in the folded protein.

    The multipronged approach is paying off for other researchers as well. In the 29 November 2007 issue of Nature, Sali and colleagues in the United States, the Netherlands, and Germany predicted the structure of the nuclear pore complex, an assembly of 456 separate proteins, by integrating 10 different biophysics and proteomic data sets into a model. The resolution was low by the standards of x-ray crystallography. However, crystallography and other experimental techniques have no shot at revealing such enormous aggregates. “This is the only way to get a look at large, complex assemblies,” Sali says.

    Tight fit.

    Adding data from nuclear magnetic resonance experiments improves the accuracy of computer models of how proteins fold.

    CREDIT: Y. SHEN ET AL., PNAS (25 MARCH 2008)

    But perhaps the greatest hope for detailed atomic-scale simulations rests with the never-ending improvements to computer processors. For years, researchers have taken advantage of this trend by joining thousands of processors together to build powerful supercomputers, such as IBM's Blue Gene machines, that have long excelled at protein-folding simulations.

    More recently, Baker, Vijay Pande of Stanford University in Palo Alto, California, and other researchers have created distributed supercomputers. They rely on computer users from around the world to download software that lends the computer's central processing unit (CPU) to folding calculations when the computer is not in use. Today, Pande's Folding@home network counts more than 250,000 active participants, and Baker's Rosetta@home totals more than 300,000.

    Pande says these networks have sped up protein predictions 100,000-fold. Other recent improvements to search algorithms have boosted speeds another 1000 times. And most recently, distributed networks have begun turning to ultrafast graphics processors known as GPUs to gain another 100- to 1000-fold advantage. Taken together, these improvements now allow distributed networks to follow a protein through a billion gyrations, sampling a separate fold each nanosecond as the protein works its way into its native conformation within a second. The result is more accurate structures, Pande says.
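Compounding those quoted speedups gives a sense of scale. This is rough multiplication with the article's round figures, not a benchmark; the GPU number uses the low end of the 100- to 1000-fold range:

```python
network_speedup = 1e5       # distributed CPU networks vs. a single machine
algorithm_speedup = 1e3     # improved search algorithms
gpu_speedup = 1e2           # low end of the quoted GPU gain

total_speedup = network_speedup * algorithm_speedup * gpu_speedup

# Sampling one fold per nanosecond across one second of real folding
# time means a billion conformational snapshots per trajectory.
samples_per_trajectory = 1_000_000_000

print(f"{total_speedup:.0e}x overall speedup")
print(f"{samples_per_trajectory:.0e} samples per folding trajectory")
```

A combined gain of ten orders of magnitude is what turns second-long folding events, once far out of reach, into simulations that finish in practical time.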

    Downhill run.

    A folding protein can follow many paths to its most energetically stable native (N) conformation.


    For now, GPU networks remain smaller than their CPU counterparts. Pande's GPU network, for example, counts only 10,000 participants. But these are likely to grow quickly as modelers gain experience with writing code to take advantage of their talents. “The sampling part of the problem will soon be an obsolete issue,” Pande predicts.

    So is the protein-folding problem solved? Not quite. But Pande and others say researchers are getting tantalizingly close. “The practical issue in protein folding is now an engineering issue,” says Sali. Pande agrees: “We're on the verge of being able to tackle the complete problem.”

    Crossing that line won't solve all the problems in protein folding. Drug designers in particular have a tough challenge, because they often need to know the position of atoms in an active site of a protein at an ultrahigh resolution in order to design drugs to block or enhance the protein's work.

    Still, Dill calls the progress in his field a revolution. But he says most scientists haven't noticed because it has occurred so slowly. “Progress in science comes out as news when there are big steps,” he says. “In protein folding, there have been a huge number of folks involved and lots of incremental steps. And that doesn't usually make news.”

    Maybe not. But now it's set to make a difference for scientists, physicians, and their patients.