News this Week

Science  26 May 2006:
Vol. 312, Issue 5777, pp. 1116

    Synthetic Biologists Debate Policing Themselves

    Robert F. Service

    BERKELEY, CALIFORNIA—Despite its reputation for free living, California seems to be the place biologists gather to debate whether—and how—to regulate themselves. Three decades after geneticists convening in Asilomar agreed to voluntary guidelines on recombinant DNA experiments, synthetic biologists meeting here this week* began hammering out a “community declaration” to promote security and safety in their nascent field.

    Advances in synthetic biology are making it possible to easily mix and match parts from organisms and synthesize potentially dangerous microbes from scratch. This has raised a host of concerns including bioterrorism and ecological contamination. Against a backdrop of such worries, synthetic biologists have for the past 2 years consulted ethicists and legal experts and launched studies to explore ways to reduce the risks of their research—and to forestall possibly intrusive legislation by governments.

    Yet the synthetic biologists in Berkeley took only baby steps toward self-regulation, suggesting but not voting on a pair of recommendations aimed at preventing DNA synthesis companies from supplying sequences that might be used for a bioweapon. “It's a good thing to start with,” says Harvey Rubin, an infectious-disease specialist and biosecurity expert at the University of Pennsylvania. (As Science went to press, the complete list of declarations was still being worked out and was expected to be posted online for comment.)

    Security concerns.

    Feats of synthetic biology such as the recreation of the 1918 flu virus (above) have prompted researchers to consider self-regulation.


    Unlike conventional recombinant DNA technology, in which researchers tend to manipulate individual genes and proteins, synthetic biologists are increasingly able to alter large swaths of genomes at once and assemble new ones from scratch. Synthesizing complete organisms, even potentially dangerous ones, is already a reality. In 2002, a research team recreated the poliovirus by stitching together DNA ordered from companies (Science, 9 August 2002, p. 1016). And last year, another group recreated the pandemic flu strain that killed tens of millions of people worldwide in 1918 (Science, 7 October 2005, p. 77). “There are very real concerns that we must face,” says Drew Endy, a synthetic biologist at the Massachusetts Institute of Technology (MIT) in Cambridge.

    The Synthetic Biology 1.0 meeting held 2 years ago brought these issues to the forefront and helped prompt the Alfred P. Sloan Foundation to back a security study by researchers at MIT, the J. Craig Venter Institute, and elsewhere, the results of which are expected by the end of the summer. Last year, the U.S. National Institutes of Health also set up a National Science Advisory Board for Biosecurity to look at synthetic biology issues. And in April, Berkeley public policy expert Stephen Maurer and colleagues released a white paper outlining six possible early steps the field can take to boost security.

    One issue highlighted in the white paper is the growing number of companies around the world that can synthesize stretches of DNA tens of thousands of bases long, within range of recreating viruses in one fell swoop, though still considerably below the 4-million-or-so-base length of a bacterial genome. Given such capabilities, DNA synthesis companies should monitor commercial orders and report suspicious sequences to government agencies, says George Church, a synthetic biologist at Harvard University. That's already required in Germany, says Hans Buegl, a sales manager with GeneArt, a DNA synthesis company based in Regensburg. But such rules have yet to be adopted in the United States and many other countries. The declaration proposed by the Berkeley attendees would make such monitoring standard procedure. A second recommendation is expected to call for the development of software that spots efforts to evade the scans, such as modifying suspect strands with extra DNA that could later be clipped off.
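    The kind of screening software the second recommendation envisions can be sketched in a few lines. This is a toy illustration, not any vendor's actual pipeline: the watch-list entry, the window size, and the hit threshold are all invented here. Matching many short overlapping windows rather than whole sequences is what lets a screen catch a suspect strand that has been padded with extra, clip-off-later DNA.

```python
# Toy DNA-order screen: flag orders sharing k-mer windows with a
# watch-list of sequences of concern, so that a suspect strand padded
# with spacer DNA is still caught. The watch-list entry below is an
# invented toy sequence, not a real pathogen gene.

K = 8  # window size; real screens use much longer windows


def kmers(seq, k=K):
    """All overlapping k-length windows of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


def build_index(watchlist):
    """Map each k-mer of each watch-list entry back to its name."""
    index = {}
    for name, seq in watchlist.items():
        for km in kmers(seq):
            index.setdefault(km, set()).add(name)
    return index


def screen_order(order_seq, index, min_hits=3):
    """Return watch-list entries sharing >= min_hits k-mers with the order."""
    hits = {}
    for km in kmers(order_seq):
        for name in index.get(km, ()):
            hits[name] = hits.get(name, 0) + 1
    return {name for name, n in hits.items() if n >= min_hits}


# An innocuous order, and one that hides the toy fragment behind spacers.
watchlist = {"toy_toxin_fragment": "ATGGCGTACGTTAGCCGGATTACGC"}
index = build_index(watchlist)

clean_order = "GGGGCCCCAAAATTTTGGGGCCCC"
evasive_order = "AAAA" + "ATGGCGTACGTT" + "TTTT" + "AGCCGGATTACGC" + "AAAA"

print(screen_order(clean_order, index))    # no hits
print(screen_order(evasive_order, index))  # flags the toy fragment
```

    Because the split halves of the fragment still contribute enough matching windows, the padded order is flagged even though the full sequence never appears contiguously.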

    Also suggested in the white paper were establishing a clearinghouse for community members to identify and track potential biosafety and biosecurity concerns, and creating a confidential hotline through which researchers could seek advice from experts before proceeding with experiments that make them uneasy. But with it still unclear who would oversee such efforts, those proposals seem unlikely to make it onto any declaration for now.

    Expecting Asilomar-like results was unrealistic, say some. “Our society is a different place, and it's unlikely you could go to the monastery without everyone following behind,” says David Baltimore, who helped organize the 1975 conference and is president of the California Institute of Technology in Pasadena. Indeed, on 19 May, a group of 35 environmental organizations, trade unions, and ethicists wrote an open letter to the Berkeley meeting attendees imploring them to forgo self-governance in favor of an international discussion of stricter national and international controls. “Scientists creating new life forms cannot be allowed to act as judge and jury,” says Sue Mayer, director of GeneWatch UK, one of the signatory groups. The letter also suggests that synthetic biologists have a conflict of interest because several have helped launch companies in the field.

    Researchers counter that their intention was never to prevent a broader societal discussion or governmental oversight. “Look, we're trying to take a step forward here,” says Church. “If you stop someone from doing something that is noble because you want something even more noble, what you wind up with is worse.” By starting to propose a code of conduct for the field, he says, “we're beginning to develop some momentum.”

    • *Synthetic Biology 2.0, Berkeley, California, 20–22 May 2006.


    Pakistan Gives Geology Conference the Cold Shoulder

    Pallava Bagla

    NEW DELHI—Pakistan has pulled the plug on a high-profile conference next week that would have brought together scientists from India and Pakistan in a session designed to set aside hostilities and forge a research plan for the high Himalayas. The blow has left organizers of the science-for-peace event reeling. The cancellation “is completely unexpected and unwarranted,” says co-organizer Jack Shroder, a geologist at the University of Nebraska, Omaha.

    The joint project was to focus on the Karakoram range of the Himalayan mountains of northern Kashmir, a high-altitude graveyard for soldiers from the Indian and Pakistani armies, who in reality are far more likely to die from exposure and accidents than enemy fire. Topping the agenda of the conference, funded in part by a $70,000 grant from the U.S. National Science Foundation and scheduled for 29 to 31 May, was a discussion of how to turn one iconic battleground, the 6100-meter-high Siachen Glacier, into a science peace park. The first step would require that the two countries strike an accord and withdraw their troops. More than 100 scientists from eight countries had registered for the conference, sponsored by Pakistan's Higher Education Commission (HEC).

    Glacial progress?

    Pakistan has scuttled a meeting on turning Siachen Glacier into a research “peace park.”


    On 23 May, however, a geologist at the University of Peshawar e-mailed Shroder that the conference would be postponed “due to unavoidable circumstances.” The decision, he stated, was taken “in consultation” with HEC. A driving force for the cross-border initiative, environmental planner Saleem H. Ali of the University of Vermont in Burlington, told Science that Pakistan's Interior Ministry had pressured HEC to bow out, citing “security reasons.” Ali says HEC did not elaborate on the reasons, although he says HEC officials told him they were keen to go ahead with the event but were overruled. The abrupt postponement came, however, on the opening day of the 10th round of talks between senior officials from the defense ministries of India and Pakistan on how to demilitarize Siachen. As Science went to press, the talks were not expected to yield a breakthrough.

    The 11th-hour cancellation has caused a major headache for Shroder, who broke the news to participants on 23 May. Some scientists, he noted, were already in transit to Islamabad, where the conference was to be held. “We can only hope that the recovery from this blow to good science will be ultimately redeemed in either Pakistan or India, whichever country steps firmly into the breach and decides to at last do things right,” Shroder says. For now, however, the researchers, like the troops at Siachen, have been left out in the cold.


    Senate Panel Backs Social Sciences at NSF

    Jeffrey Mervis

    A U.S. senator has blunted her attack on the value of social science research, calming fears that the National Science Foundation (NSF) might be ordered to reduce its support for the discipline.

    On 2 May, Senator Kay Bailey Hutchison (R-TX), chair of the research panel within the Senate committee that oversees NSF and several other science agencies, used a hearing on NSF's 2007 budget request to harshly criticize several grants funded by NSF's social, behavioral, and economic sciences directorate (Science, 12 May, p. 829). She said such research should be excluded from the president's proposed doubling of NSF's budget as part of an initiative to strengthen U.S. competitiveness.

    Last week, the full committee approved a bill (S. 2802) that included NSF's role in the initiative. But after drafting language that would have restricted NSF's budget increase to the physical sciences, Hutchison instead introduced an amendment—passed unanimously—that preserves NSF's mission to fund the breadth of nonmedical scientific research across its $5.5 billion portfolio. The amendment highlights the importance of the “physical and natural sciences, technology, engineering, and mathematics” and explains that “nothing in this section shall be construed to restrict or bias the grant selection process against funding other areas of research deemed by the foundation to be consistent with its mandate, nor to change the core mission of the foundation.”

    NSF officials especially welcomed the last phrase, which allows them to stay the course. Senator Frank Lautenberg (D-NJ), who struck the compromise with Hutchison, says the words are intended to reflect “the importance of the social sciences to U.S. economic competitiveness” and their value in applying technology to societal needs.

    Despite her softened stance, Hutchison made it clear that some grants still rankle. Speaking before the committee voted, Hutchison declared that “these projects should not be funded by NSF at a time when we are focusing on trying to increase the number of scientists and engineers,” improve U.S. math and science education, and stay ahead of global competitors. The bill awaits action by the full Senate. The House of Representatives has not yet acted on a similar measure.


    NIH Wants Its Minority Programs to Train More Academic Researchers

    Jeffrey Mervis

    The U.S. National Institutes of Health (NIH) says it's time to get serious about producing more minority biomedical scientists. Admitting that they have been missing their target, NIH officials said at a public meeting last week that they will revise the rules of a flagship undergraduate program that serves mostly African Americans, Hispanics, and Native Americans. At the same meeting, a key advisory panel urged NIH and the academic community to go even further, proposing an 8-year doubling of minority candidates seeking doctoral degrees in the biomedical and behavioral sciences.

    “We realize [the doubling] is a huge number,” says Richard Morimoto of Northwestern University in Evanston, Illinois, co-chair of a working group that last week delivered a report on minority programs to the advisory council of the National Institute of General Medical Sciences (NIGMS), which oversees NIH's minority advancement programs. “But we felt that if we didn't raise the bar, a lot of programs would be content to keep serving the same number of students and achieving the same results.”


    At the core of the debate is how to get more mileage from several programs (see table) within the institute's $158-million-a-year division of Minority Opportunities in Research (MORE). (Given NIH's tight budget, nobody is talking about significant growth.) A staff white paper notes, for example, that fewer than 15% of the undergraduates in the Minority Access to Research Careers U*STAR program wind up with Ph.D.s in the biomedical or behavioral sciences, meaning that each doctorate-bound student costs NIH as much as $1 million. At the same time, the institute council's working group noted that nearly 40% of MORE's budget goes to a program helping faculty members at minority-serving institutions rather than directly to budding scientists.
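    The white paper's cost figure follows from simple arithmetic: the per-student outlay divided by the completion rate gives the cost per eventual doctorate. In the back-of-envelope below, the per-student outlay is assumed for illustration; only the sub-15% completion rate and the roughly $1 million endpoint come from the text.

```python
# Back-of-envelope check on the U*STAR cost claim. The $150,000
# per-student outlay is an assumed, illustrative figure; the 15%
# completion rate and ~$1 million-per-Ph.D. endpoint are from the
# white paper as reported.

def cost_per_phd(outlay_per_student, completion_rate):
    """Dollars spent per eventual doctorate at a given completion rate."""
    return outlay_per_student / completion_rate

print(round(cost_per_phd(150_000, 0.15)))  # roughly $1 million per Ph.D.
```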

    Some argue that MORE's programs might be more successful if they moved beyond their traditional base—schools with largely minority student populations that focus on undergraduate education and do relatively little research—and embraced major research universities with fewer minorities but more resources. Minorities are increasingly being educated at the latter, the working group points out. But some training program directors believe that such a policy could shift money toward schools that don't really need the funding. “A $250,000 training grant may not be a big deal if you've got a multibillion-dollar budget, but it's vital to a school like ours,” says Thomas Landefeld of California State University, Dominguez Hills. He and others believe that minority-serving institutions also plant the seeds for a scientific career among students who might not otherwise be aware of the opportunities.

    How to measure success is another hot-button issue. Focusing on how many students become academic researchers, for example, could work against programs enrolling large numbers of students who pursue medical or pharmacy degrees, or who go on to work for industry. “We are defining success more clearly,” says NIGMS Director Jeremy Berg, “to mean greater diversity in the pool that trains the next generation of scientists and is eligible for NIH grants.”

    Morimoto admits there's no baseline for measuring progress toward the council's call for a 10% annual increase in filling the graduate school pipeline. The National Academies' National Research Council said in a report last year that NIH data fall woefully short of answering even basic questions about what its minority programs have accomplished (Science, 20 January, p. 328). But Morimoto says data alone aren't enough: “The training of more minority scientists needs to become the responsibility of the entire community.”


    High-Tech Materials Could Render Objects Invisible

    Adrian Cho

    No, this isn't the 1 April issue of Science, and yes, you read the headline correctly. Materials already being developed could funnel light and electromagnetic radiation around any object and render it invisible, theoretical physicists predict online in Science this week. In the near future, such cloaking devices might shield sensitive equipment from disruptive radio waves or electric and magnetic fields. Cloaks that hide objects from prying eyes might not be much further off, researchers say.

    The papers are “visionary,” says George Eleftheriades, an electrical engineer at the University of Toronto in Canada. “It's pioneering work that sets the stage for future research.” Greg Gbur, a theoretical physicist at the University of North Carolina, Charlotte, notes that others have studied invisibility but says the new papers describe more precisely how to achieve it. “Each gives specific examples of how you might design an invisibility device,” he says.

    No see?

    Forget the Invisible Man's transparency potion; new materials might ferry light around an object, making it invisible.


    From spaceships that vanish in Star Trek movies to Harry Potter hiding beneath his imperceptible cloak, invisibility has been a mainstay of science fiction and fantasy. But it might become a reality thanks to emerging “metamaterials,” assemblages of tiny rods, C-shaped metallic rings, and other elements that respond to electromagnetic fields in new and highly controllable ways. John Pendry of Imperial College London and colleagues, and Ulf Leonhardt of the University of St. Andrews, U.K., independently calculated how the properties of a shell of metamaterial must be tailored to usher light around an object inside it. An observer would see whatever is behind the object as if the thing weren't there, Leonhardt says.

    The theorists exploit the fact that light is always in a hurry, taking the quickest route between two points. That's not always a straight line, because light travels at different speeds in different materials, and it opts for the path that minimizes the total time of transit. So when light passes from, say, air into glass, its path may bend, which is why ordinary lenses focus light.

    Pendry and colleagues and Leonhardt calculated how the speed of light would have to vary from point to point within a spherical or cylindrical shell to make the light flow around the hole in the middle. Light must travel faster toward the inner surface of the shell. In fact, along the inner surface, light must travel infinitely fast. That doesn't violate Einstein's theory of relativity because within a material, light has two speeds: the one at which the ripples in a wave of a given frequency zip along, and the one at which energy and information flow. Only the second must remain slower than light in a vacuum, as it does in a metamaterial. The invisibility isn't perfect: It works only in a narrow range of wavelengths.
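    The reasoning compresses into Fermat's principle. The summary below is schematic, not a formula from either paper:

```latex
% Fermat's principle: between points A and B, light follows the path
% that minimizes the transit time, i.e., the optical path length:
\[
  T \;=\; \frac{1}{c}\int_A^B n(\mathbf{r})\,\mathrm{d}s ,
  \qquad v_{\mathrm{phase}} \;=\; \frac{c}{n} .
\]
% Rays detouring around the cloaked hole travel farther, so to arrive in
% step with undisturbed rays they must move faster: the index n must fall
% toward the shell's inner surface, reaching n -> 0 (v_phase -> infinity)
% on the surface itself. Only the group velocity, which carries energy
% and information, must stay below c, so relativity is untouched.
```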

    The authors map out the necessary speed variations and leave it to others to design the materials that will produce them. But researchers already know how to design metamaterials to achieve such bizarre properties, at least for radio waves, says Nader Engheta, an electrical engineer at the University of Pennsylvania. “It's not necessarily easy, but the recipes are there,” says Engheta, who last year proposed using a metamaterial coating to counteract an object's ability to redirect light, making the combination nearly transparent.

    Cloaking devices for radio waves could appear within 5 years, Gbur says, and cloaks for visible light are conceivable. Pendry notes that even a cloak for static fields would, for example, let technicians insert sensitive electronic equipment into a magnetic resonance imaging machine without disturbing the machine's precisely tuned magnetic field.

    Alas, even if invisibility proves possible, it may not work the way it does in the movies. For example, a cloaking device would be useless for spying, Pendry says. “Nobody can see you in there, but of course you can't see them, either.” Keeping track of your always-invisible device might be a pain, too.

  U.S. COURTS

    'Disappointed' Butler Exhausts Appeals

    Martin Enserink

    Thomas Butler's legal journey has come to an end. On 15 May, the U.S. Supreme Court declined to take up the case of the physician and microbiologist who received a 2-year prison sentence for shipping plague samples to Tanzania without the required permits and for defrauding his employer, Texas Tech University in Lubbock (Science, 19 December 2003, p. 2054).

    Butler declined to be interviewed, but his wife Elizabeth says her husband is “very disappointed.” Butler is working in Lubbock at a job unrelated to his professional training, she says, and weighing offers to rebuild his career. “This has been a tremendous blow,” she adds, “but we are healing little by little.”

    In January 2003, Butler reported vials containing the plague bacterium Yersinia pestis missing from his lab; after questioning by the FBI, he signed a statement, which he later withdrew, saying he had accidentally destroyed the samples. In his trial, the jury dismissed all but one of the government's charges relating to illegal shipping and handling of plague samples but found Butler guilty of fraud involving fees for clinical trials he had conducted at Texas Tech. Last fall, a three-judge panel on the U.S. Court of Appeals for the Fifth Circuit upheld his conviction (Science, 4 November 2005, p. 758); the full appeals court declined to review the case.

    “I have never in my career seen someone who was handed such a gross injustice,” says his attorney, George Washington University law professor Jonathan Turley. Turley says that the fraud charges, which the government added after Butler refused to accept a plea bargain, concerned a dispute between the researcher and his employer that would not otherwise have been prosecuted criminally.

    Butler, 64, was transferred to a halfway house in November after having served 19 months of his sentence and came home in late December. His supporters, including chemistry Nobelist Peter Agre of Duke University in Durham, North Carolina, are hoping against hope for a presidential pardon, if not from George W. Bush then possibly from his successor.


    RNAi Safety Comes Under Scrutiny

    Jennifer Couzin

    What began as an effort to craft a better hepatitis therapy using a strategy called RNA interference has ended in the deaths of dozens upon dozens of mice—a harsh safety alarm for biomedical researchers looking to RNAi as a treatment for HIV, cancer, neurodegenerative diseases, and more.

    The results, from gene therapist Mark Kay of Stanford University in California, come 3 years after he reported that a treatment based on the gene-silencing technique inhibited replication of the hepatitis B virus in mouse livers. This time around, Kay's team administered a refined version of the RNAi treatment to more than 50 infected mice.

    “We saw for the first couple days exactly what we expected,” says Kay's postdoctoral fellow Dirk Grimm, who helped lead the studies. But within a week or two, the mice began falling sick, their skin turning yellow from liver damage. More than 150 animals died, and many others suffered liver toxicity. Lowering the amount of virus given eliminated the harsh effects but also erased the treatment's success.

    “There's something that we don't understand going on here,” says Timothy Nilsen, who heads the Center for RNA Molecular Biology at Case Western Reserve University in Cleveland, Ohio. Although Kay and Grimm were taken aback by the devastating toxicity, they and others retain confidence in RNAi. “I really think it can still work,” says Kay.

    RNAi has become enormously popular in the past few years. It involves blocking the activity of genes, including those linked to disease, with short sequences of RNA complementary to a gene's sequence. Companies are already testing RNAi treatments in people for a respiratory virus and for macular degeneration.
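    The complementarity at the heart of RNAi can be shown with a toy calculation: the guide strand of a short interfering RNA is the reverse complement of a window of its target mRNA. The sequences below are invented for illustration, not taken from any real therapeutic.

```python
# Toy illustration of RNAi targeting: a guide strand base-pairs with
# (is antisense to) a stretch of the target mRNA. Invented sequences.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}


def reverse_complement(rna):
    """Antisense strand of an RNA sequence, read 5'->3'."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))


def targets(guide, mrna):
    """True if the guide strand can base-pair with some window of the mRNA."""
    return reverse_complement(guide) in mrna


mrna = "GGCAUUCGAUACGGAUUCAAGGCUAGU"    # invented target transcript
guide = reverse_complement(mrna[5:17])  # guide designed against one window

print(targets(guide, mrna))             # True: pairs with its intended site
print(targets("A" * 12, mrna))          # False: no complementary site
```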

    Interference problem.

    Compared with the liver of a healthy mouse (above), the liver of an animal given the RNAi treatment (below) is destroyed.


    Those trials, for which no significant safety problems have been disclosed so far, rely on simply introducing RNA molecules into the body. In contrast, Kay's team packages genes encoding small RNA molecules into viruses stripped of other genetic material, a strategy much like traditional gene therapy. Once injected, the viruses infect cells and keep producing the small RNAs, allowing a single dose to go a long way.

    For its RNAi tests, Kay's team uses an adeno-associated virus (AAV), which homes to the liver. Indeed, 90% of the virally delivered RNA genes ended up there, says Grimm. Yet the virus is probably blameless; injections of an empty virus didn't cause problems in the mice. To explore whether specific RNA sequences might be the culprits, the Stanford team created dozens of viruses making other RNA sequences and injected them into mice without hepatitis B, some genetically altered and some normal. Out of all 49 sequences tested, 23 were lethal in every case, killing the animals within 2 months. Another 13 were “severely toxic” to the liver, they write in Nature. As with many treatments, dosing seems to correlate with risk: Kay's team safely thwarted hepatitis B in mice by injecting an AAV that makes fewer RNA sequences.

    The results are “not surprising in retrospect,” says John Rossi of City of Hope in Duarte, California, who's working on an RNAi therapy for HIV. Too many extra RNA molecules may disrupt a cell's own internal RNAi machinery, he explains. Kay's group suggests that the extra small RNAs compete for a protein that transports a cell's own RNAs.

    A company called Sirna Therapeutics in San Francisco, California, still plans to test a nonviral RNAi strategy on people with hepatitis C next year. The firm “has spent a hell of a lot of time and effort putting [small RNAs] into animals and nonhuman primates … looking for toxicity, and we haven't seen anything like this,” says Barry Polisky, Sirna's chief scientific officer. Like Kay and others, Polisky worries that these new findings will be seen as an indictment of RNAi therapy, even though he is confident that injecting small RNAs alone is less hazardous than the viral approach. Not everyone's convinced. “I think it's premature to say anything is safer at this point,” says Nilsen.


    Price Crash Rattles Europe's CO2 Reduction Scheme

    Catherine Brahic*
    *Catherine Brahic is a writer for SciDev.Net.

    LONDON—Dumping carbon into the atmosphere became very cheap last week. Or so it seemed, as the cost of licenses to emit carbon dioxide came tumbling down in Europe on 15 May. The price crash in the Emissions Trading Scheme fed doubts about the setup of this new market, launched in 2005 to help meet targets for CO2 in the Kyoto Protocol on greenhouse gas emissions. Experts are now discussing what went wrong and what can be done to shore up the system.

    The European Union (E.U.) invented the market to create incentives for cutting CO2 emissions. Companies can meet specific targets by investing in green technology that lowers CO2 emissions directly or by buying permits that allow them to emit CO2. In theory, those with the best technology will have surplus credits, which they can sell to the laggards— making a profit while improving the environment. Under this scheme, the price of one allocation unit—equivalent to 1 metric ton of CO2—soared to an all-time high of €31.5 in April. Then in a matter of days it dropped to €8. Prices were on the rise again as Science went to press but seem unlikely to climb back to where they were. The heaviest impact of the crash, ironically, may fall on developing countries, which had begun to benefit from investments in clean technology encouraged by the European CO2 market.
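    At the firm level, the trading logic is a cost comparison: cover each ton of excess emissions by abating or by buying an allowance, whichever is cheaper. All numbers below are invented for illustration; they are not E.U. allocation figures.

```python
# Toy cap-and-trade sketch: a firm over its emissions cap either abates
# a ton of CO2 (at its marginal abatement cost) or buys a one-ton
# allowance at the market price, whichever is cheaper. Illustrative
# numbers only.

def compliance_cost(emissions, cap, abatement_cost, permit_price):
    """Cheapest way to cover emissions above the cap, in euros."""
    shortfall = max(0, emissions - cap)
    per_ton = min(abatement_cost, permit_price)
    return shortfall * per_ton


# Near April's EUR 31.5/ton peak, a firm with EUR 20/ton abatement
# costs cleans up its own emissions...
print(compliance_cost(emissions=1_000, cap=800, abatement_cost=20, permit_price=31.5))

# ...but at EUR 8/ton it is cheaper to buy allowances than to abate.
print(compliance_cost(emissions=1_000, cap=800, abatement_cost=20, permit_price=8))

# And a firm allocated more allowances than it emits owes nothing --
# which is why over-generous allocations made demand vanish.
print(compliance_cost(emissions=700, cap=800, abatement_cost=20, permit_price=8))
```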

    CO2 trading prices fell after the European Commission announced that European industries had emitted more than 60 million tons less greenhouse gas than predicted. With more than enough emissions allowances to go around, demand vanished. The events confirmed what had been suspected for some time: European governments may have been too generous in granting credits.

    “We know for sure that one of two things happened,” says climate policy expert Michael Grubb of Imperial College London. “Either industrial emissions were never going to be as high as projections said they would be, or it turned out to be far easier for industries to cut back on emissions than they had been saying.” The general consensus favors the first theory.

    Before the E.U. launched its trading scheme last year, its governing body, the European Commission, agreed on a total number of emissions allowances. To come to this number, nations tallied up estimates of their own CO2 emissions, subtracted a portion to create an incentive for industries to reduce their emissions, and handed over these targets in National Allocation Plans to the commission. Many governments, it appears, relied on company estimates of historical emissions.

    In April, news started to trickle out that various countries had not only met their targets for 2005 but also had allowances to spare. Drawing up allocation plans based on industry projections “inflated the trading system and sent out the signal that industries just have to lobby to get what they need,” says Grubb. But backers of the E.U. trading scheme point out that it is still in a teething period that runs from 2005 to 2007. The real deal begins in Phase II, from 2008 to 2012, when the European Union must fulfill its Kyoto Protocol pledge to reduce greenhouse gas emissions to 8% below 1990 levels.


    Member nations must give the European Commission their National Allocation Plans for the second phase on 30 June. This time, estimates will be based on real emissions data, installation by installation. According to Shell carbon trader Garth Edward, they are the one tool policymakers have to ensure that the movements of the market translate into reduced global emissions.

    For Grubb, there's still a significant problem. The E.U. trading scheme covers slightly less than half of all E.U. emissions. Not included, for example, are the transport and domestic sectors. Countries need to justify how they will meet their Kyoto Protocol commitments both through their National Allocation Plans and by using technology and other measures such as taxes to reduce emissions in those nontrading sectors. “This is not a simple clear-cut matter,” says a European Commission official. “It requires an in-depth review of emissions trends across all sectors and all measures being used to limit them.” But Grubb believes it has been too easy for countries to claim they will achieve their Kyoto commitments by reducing emissions in sectors not covered by the trading scheme; he hopes the next allocations under the scheme will be more stringent.

    Some policymakers see the drop in the price of allowances as potentially good news. Says Halldor Thorgeirsson, deputy executive secretary of the United Nations Framework Convention on Climate Change, many “will be looking to the market for indicators of the cost” when negotiating post-2012 climate change policy. “The price drop is a bonus as far as post-2012 goes,” agrees Benito Müller, director of Oxford Climate Policy in the U.K. “The message is: ‘See? It's not that expensive; we can tighten the limits.'”

    Ultimately, developing countries may lose the most from the recent price crash. Investment surged in 2005 and 2006 in green projects in the south, partly stimulated by the high price of E.U. emissions allowances. Through a Kyoto Protocol instrument known as the Clean Development Mechanism (CDM), these projects offer companies in developed countries an opportunity to offset emissions at home by reducing emissions abroad. According to a World Bank report, CDM allowed approximately $2.5 billion in investments, or 350 million tons of reduced emissions, last year. More than half the volume was from European investment in developing countries.

    When CO2 emission prices were high in Europe, governments seemed ready to allow their industries to clean up southern skies as much as they wished. Now that prices have dropped to what most agree is a more realistic level, governments may decide to cap the external credits.


    A Vision for the Blind

    Ingrid Wickelgren

    Early-stage artificial “eyes” are competing in the clinic, giving blind volunteers a glimpse of the future

    New views.

    A visual prosthesis (artist's illustration) developed by Intelligent Medical Implants in Germany employs a goggles-mounted camera and a belt-attached processor (modeled below) that compresses visual images and transmits data to a device implanted in the eye.


    When Steffan Suchert, a lawyer in Nuremberg, Germany, learned that his two sons, who had been born deaf, were also going blind from a degenerative eye disorder, friends told him to pray and wait. Instead, he quit his law practice in 1998 and has spent nearly €3 million ($4.2 million) to found a company to develop a device that might return limited eyesight to his sons.

    Researchers at that company, Intelligent Medical Implants (IMI) Group in Bonn, have since designed a gold implant containing a chip about the size of a small coin that sends signals to a pupil-sized patch of 49 electrodes, exciting cells in the retina at the back of the eye. Since November, ophthalmic surgeon Gisbert Richard of the University Clinic of Hamburg has implanted these chips on the eyeballs of four totally blind patients and tacked the electrodes onto their paper-thin retinas. The chips, which will ultimately be connected via an infrared receiver to a video camera, are now being tested with simulated visual input. When a computer sent each patient's chip infrared signals encoding simple patterns such as lines and spots, three of the patients saw the lines and identified the locations of the spots. In addition, one patient could see horizontal movement in either direction simulated by the computer, IMI's Chief Medical Officer Thomas Zehnder reported 2 May at the meeting of the Association for Research in Vision and Ophthalmology (ARVO) in Fort Lauderdale, Florida.

    IMI is racing a growing cadre of companies and research groups to develop the first artificial “eye” that can supply useful vision to a subset of blind people. Just a few years ago, some artificial-vision investigators were lamenting that hype had outpaced clinical data in their field (Science, 8 February 2002, p. 1022). But now at least five teams have implanted experimental devices into people, and a sixth plans human tests within the next year or two. The pipeline of preclinical systems is also growing. At least 23 different devices are under development, a doubling in the past 4 years. “A critical mass” of research teams using innovative approaches has developed, says Joseph Rizzo, a Harvard Medical School neuro-ophthalmologist at the Massachusetts Eye and Ear Infirmary in Boston: “That kind of momentum makes it more likely that something will emerge that can really help blind people.”

    So far, even the most advanced of the experimental devices has provided blind people with only the crudest of black-and-white images, inadequate for navigating unfamiliar surroundings. Most of the artificial eyes currently under development would benefit just the minority of blind people who suffer from diseases such as retinitis pigmentosa (RP) and macular degeneration that degrade retinal cells but leave some of the retina intact. Much farther out are brain-implanted artificial-vision systems that could help people who have lost their eyes in accidents; none of today's devices will work for people who were born blind and whose visual system as a whole remains underdeveloped.

    Chip in the eye.

    IMI's implant sends visual data via gold wires to a tiny electrode array tacked onto the human retina.


    Lucian Del Priore, a retinal surgeon at Columbia Presbyterian Medical Center in New York City, warns that the field of visual prosthetics is still in its infancy. It is not realistic, he says, “to expect that a retina chip will restore vision to anything close to 20/20 in the near future.”

    Nevertheless, a combination of improved surgical techniques, miniaturization of electronics, advances in electrode design, and knowledge about how to safely encapsulate electronics in the body are inching the dream of artificial vision closer to reality. “It's very exciting for all of us to see the progress,” says neuro-ophthalmologist Eberhart Zrenner of the University of Tübingen in Germany.

    Entering the eye

    Researchers have investigated the use of electricity to stimulate vision for nearly half a century. In the 1960s, physiologist Giles Brindley of the Medical Research Council in London and his colleagues implanted 80 electrodes on the surface of a blind person's visual cortex, a region at the back of the brain that is the first stop for visual signals coming from the eye. Wireless stimulation of the electrodes made the patient, an adult who had recently become blind from glaucoma and a retinal detachment in the right eye, see spots of light known as phosphenes. “That was the first bold demonstration of what one might be able to do,” says Philip Troyk, a biomedical electrical engineer at the Illinois Institute of Technology (IIT) in Chicago.

    By the 1980s, a crop of ophthalmologists began considering a narrower and seemingly easier-to-solve problem: making prostheses for the eye. Many of these physicians wanted a way to help patients with incurable degenerative retinal diseases such as RP and macular degeneration. Research suggested that such disorders, which degrade photoreceptor cells called rods and cones, still leave large portions of the retina intact even after a patient has become totally blind. Based on that finding, researchers aimed to stimulate the remaining functional cells.

    In the mid-1990s, ophthalmologist Mark Humayun, along with biomedical engineer James Weiland, then at Johns Hopkins Hospital in Baltimore, Maryland, and their colleagues, showed that this was feasible. When they stimulated the retinas of five blind people using handheld electrodes, the people saw spots of light in locations that matched the site of the stimulation.

    Humayun, Weiland, and their colleagues then developed a more permanent prosthesis in conjunction with Second Sight Medical Products in Sylmar, California. The device consists of a small video camera perched on the bridge of a pair of glasses, a belt-worn video processing unit, and an electronic box implanted behind the patient's ear that has wires running to a grid of 16 electrodes affixed to the output layer of the retina. The video processor wirelessly transmits a simplified picture of what the camera images to the box, and then the retinal implant stimulates cells in a pattern roughly reflecting that information.

    In normal vision, the rods and cones at the back of the retina detect light, and the retinal ganglion cells (RGCs), which actually sit closest to the vitreous—the eye's gelatinous interior—relay the visual signal to the brain. The electrodes of Second Sight's prosthesis directly excite RGCs—a so-called epiretinal approach—by sitting between them and the vitreous. The stimulated RGCs then send signals along their axonal fibers, which make up the optic nerve.

    Since 2002, Humayun's group, now at the University of Southern California in Los Angeles, has implanted its array into six people blinded by RP. After some training with the device, all of them could distinguish between the light patterns given off by a plate, cup, and spoon by moving their head-mounted cameras to scan the objects, the group reported at ARVO this month. Some of the people could also detect motion when a bar of light was moved in different directions in a darkened room. Their perceptions are crude, admits Weiland, “but for them, it's a pretty big deal.”

    Weiland, Humayun, and their colleagues are now working on epiretinal implants containing hundreds of electrodes, which they hope will provide enough points of light to enable patients to recognize faces and read large print. The group is also developing a tiny video camera that would be embedded in an artificial lens and implanted in the eye. That lens would replace the eye's natural lens and would enable scanning using natural eye movements instead of awkward head shifting. In the meantime, Second Sight plans to start testing a 60-electrode implant by the end of the year.

    That technology will compete head-to-head with IMI's 49-electrode array, also implanted in the epiretinal space. Next to the IMI electrodes is a tiny infrared receiver, which enables the chip to receive video input from a glasses-mounted camera and “pocket processor,” the size of a small paperback book. In August, the company will begin implanting this upgraded device into 10 people. The prosthesis should enable them to find large objects in a room such as a table, chair, door, and perhaps even a cup of coffee, according to Hans-Jürgen Tiedke, an electrical engineer who heads the IMI group.
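A glasses-mounted camera captures far more pixels than a 49-electrode array can display, so the processor must drastically reduce each frame before transmission. The sketch below shows one simple way such a reduction could work—block-averaging a grayscale frame down to a 7 × 7 grid of stimulation levels. The grid size, grayscale input, and averaging scheme are illustrative assumptions, not details of IMI's actual signal processing.

```python
# Illustrative sketch: reduce a camera frame to a 49-value grid by
# block-averaging. The 7x7 layout and averaging are assumptions for
# illustration, not IMI's actual processing pipeline.

def downsample_to_grid(frame, rows=7, cols=7):
    """Average a 2-D grayscale frame (list of lists, 0-255) into rows x cols blocks."""
    h, w = len(frame), len(frame[0])
    grid = []
    for r in range(rows):
        row_vals = []
        for c in range(cols):
            # Pixel bounds of this block within the frame
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            block = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            row_vals.append(sum(block) / len(block))
        grid.append(row_vals)
    return grid

# A 140x140 synthetic frame becomes a 7x7 grid of stimulation levels.
frame = [[(x + y) % 256 for x in range(140)] for y in range(140)]
grid = downsample_to_grid(frame)
print(len(grid), len(grid[0]))  # 7 7
```

Each of the 49 averaged values would then set the stimulation strength of one electrode; a real processor would also handle contrast, brightness adaptation, and safe current limits.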

    Silicon sandwich.

    A silicon-based subretinal implant is wedged (inset) into the photoreceptor layer of the retina at the back of the eye, near the eyeball's perimeter. An epiretinal implant would sit on the other side of the retina, facing the eye's gelatinous interior.


    Under the retina

    Whereas epiretinal devices such as IMI's and Second Sight's require extraocular cameras and video processors to capture images, other teams elect to use light-sensitive chips designed to tap into more of the retina's image processing. In the retina, about 125 million rods and cones connect, through intermediate cell layers, to just 1.2 million optic nerve fibers, a 100-to-1 compression of information. Placing electrodes directly where photoreceptors are being lost, against the lining of the eyeball, enables the electrodes to excite the retina's intermediate cell layers and allows those layers to perform their normal processing of visual signals. These so-called subretinal implants also have the advantage of stimulating the retina in its natural topography, theoretically provoking more natural perceptions.

    Ophthalmologist Alan Chow and his team at Optobionics in Naperville, Illinois, were the first to try this approach in people in 2000. In 30 people so far, they have implanted in one eye a silicon disk the size of a nail head that is studded with 5000 microscopic solar cells, or photodiodes. The solar cells capture ambient light and translate it into pulses of electricity intended to stimulate the retina's intermediate layer of cells.

    Most of the implant recipients, including all 10 in the first clinical trial, have reported moderate to significant improvements in at least one aspect of visual function, such as light sensitivity, size of visual field, visual acuity, movement, or color perception. One of the first subjects, for example, had virtually no light perception before the surgery but could see human shadows after receiving the implant. A person in a more recent trial, who had very poor central vision and was legally blind, could thread a needle 6 months after the surgery, Chow says.

    Such improvements pose a mystery to some. Many of them are unlikely to be a direct result of the chip's electricity on retinal cells, according to William Heetderks, who directs extramural sciences at the National Institute of Biomedical Imaging and Bioengineering in Bethesda, Maryland. “The amount of current you need to actively stimulate retinal ganglion cells is known,” he notes, “and it is not in the same range as the amount you get off a photodiode.”

    Chow insists that some of his patients do see light at the implant site, but he agrees that the visual improvements are too widespread and complex to come solely from electrical stimulation of retinal cells by the tiny chip. He suggests that the implants somehow induce the release of growth factors that improve the function of remaining retinal cells. In rats with a genetic disorder that causes retinal degeneration, both active and inactive retinal implants delayed the degeneration of photoreceptor cells, Chow, Machelle Pardue of Emory University School of Medicine in Atlanta, Georgia, and their colleagues reported at ARVO.

    Solar-powered sight.

    Optobionics co-founders, and brothers, Alan and Vincent Chow work on their silicon eye implant (above), an array of 5000 microscopic solar cells (below, magnified). These are implanted in the human eye (center).


    Retina Implant GmbH in Reutlingen, Germany, the company founded by Zrenner and his colleagues, has created its own subretinal implant, a 40 × 40 array of microscopic solar cells. Each photodiode links up with a small amplifier that boosts the signal from incoming light. In October, ophthalmic surgeons spent 7 hours putting the Tübingen team's chip into a blind person and have since repeated the surgery on a second patient. So far, Zrenner's team has only revealed data from the use of the chip's 16 test electrodes, which can be controlled externally via a cable that leaves the body behind the ear. Activating those electrodes elicited predictable images in both patients. Stimulating single electrodes produced pea-sized spots of light an apparent arm's length away. Switching on all 16 electrodes created a square; flipping on four in a row lit up a line the size of a large match, Zrenner and his colleagues reported at ARVO. “In principle, if you have enough electrodes working, you can put together an object,” he says.

    Looking ahead

    Although Zrenner's and Chow's prostheses are designed to work without cameras, subretinal devices don't have to operate solo. Physicist Daniel Palanker of Stanford University in California and his colleagues have developed an array of photodiodes that receive infrared input from goggles displaying a projection from a video camera. In this setup, the infrared “scene” changes as the eyes move inside the goggles' virtual reality display. This may provide more natural visual input than people can get from ordinary head-mounted displays, in which the view stays static unless a patient moves his or her head.

    The Stanford team's chips, which are not yet in human trials, also have unique structures that enable electrodes to get closer to retinal cells. That enables each electrode to stimulate a narrow area of tissue distinct from that triggered by a neighboring electrode, an advance that could be critical for developing high-resolution artificial vision. In one of the chips, cells migrate toward electrodes through pores. In another, cells travel between pillars such that electrodes at the tips of the pillars penetrate into the retina without apparent harm. When implanted in the retinas of blind rats with an RP-like disorder, both chips put retinal cells within just a few micrometers of the electrodes. That should be close enough for 20/80 vision, enabling a person to read large print, the group reported at ARVO. By comparison, other groups' chips are basically flat, and their electrodes are typically tens to hundreds of micrometers away from retinal cells, limiting resolution to about 20/400, the level of legal blindness, or worse.

    Brindley's strategy of bypassing the eye completely also continues to be studied. IIT's Troyk and his colleagues are developing an array of 1000 microelectrodes that they hope eventually to implant in the visual cortex of a blind person. Such an implant could, in theory, help the many blind people who do not have intact optic nerves or retinas. One challenge is finding the best way to use an electronic link to put visual information into the brain, Troyk says.

    There are still big hurdles to cross before any of the prosthetic eyes under development can be put to everyday use. For example, no one knows for sure how much of the retina remains intact in the late stages of RP and similar retinal disorders, or what happens to neural tissue after it's stimulated repeatedly over months or years. In addition, researchers still don't have devices that can illuminate any of the world's fine print—details of faces or the texture of a flower. Nor do they have eye chips that can adapt to variations in natural lighting as the eye does. Stimulating color perception remains an even more distant dream. “One of the realizations I've come to is that artificial vision is not a restoration of natural vision,” Weiland says.

    Still, Suchert remains optimistic that IMI's chip or a similar device will one day help his sons. Matthias, who is 30, sees the world through a narrow tunnel, as if he were looking through the bore of a paper-towel roll. Andreas, 32, has a wider field of view but has lost his peripheral and night vision. “I insist on being successful,” says Suchert.


    Universities Find Too Many Strings Attached to Foundation's Offer

    1. Constance Holden

    The Alfred Mann foundation says professors need help to commercialize their inventions. But some experts say the charity is asking for too much in return

    Billionaire entrepreneur and biochemist Alfred Mann, 81, thinks the commercialization of biomedical technologies is too important to be left to academics. He wants to set up multimillion-dollar campus-based R&D institutes to help turn inventions into marketable medical innovations—with the institutes calling the shots.

    But Mann is having trouble selling the idea. This month, a proposed $100 million deal with the University of North Carolina (UNC), Chapel Hill, and North Carolina State University in Raleigh fell through, and discussions with several other universities have yet to result in agreements. Many technology transfer experts say they aren't surprised: What the Alfred E. Mann Foundation for Biomedical Research is proposing, they say, would force a university to surrender too much control over its intellectual property (IP).

    Universities have been trying for the past 20 years to beef up their technology transfer operations, ever since Congress opened the door for them to make money off the fruits of federally funded research. Mann, who made a fortune starting and then selling off several high-tech companies, created his foundation in 1985 to speed the development of university-based biomedical inventions into treatments. In 1998, Mann gave $100 million to the University of Southern California (USC) in Los Angeles to create an Alfred Mann Institute.

    A similar-sized gift to the two North Carolina universities was expected to be the first in a second generation of Mann institutes that would commercialize discoveries at a dozen or more campuses. The two universities were hoping the state would finance two $25 million buildings to house the institute; that request was dropped after the talks collapsed and Mann withdrew his offer.

    The arrangement Mann is promoting is a novel hybrid that would vest control in a separate nonprofit institute. The university and the foundation would each appoint half the institute's board members, and the revenues and royalties would be divided among the original inventor, the university, the institute, and the Mann foundation. Although professors and graduate students would participate in the work, the institutes are to be staffed largely by experts in product development recruited from industry.

    Tony Waldrop, vice chancellor for research at UNC, says the foundation “wanted much more far-reaching [IP rights] than what we were willing to give.” The university “wanted to have some ability to pick and choose” which faculty research products would be licensed to the new institute, he adds, and more freedom for inventors to choose their commercial partners.

    In by a hair.

    A subcutaneous miniprobe is one project at USC's Mann Institute.


    Waldrop declined to be more specific, but a copy of the proposed agreement (obtained by Science under the state's open records law) explains that the university would have been required to give the Alfred Mann Institute the first crack at any biomedical technology or drug the institute wanted to develop that wasn't already bound by a prior agreement with the funder. The university would have been allowed two exemptions every 5 years.

    IP experts who have seen the proposed agreement expressed surprise at its sweeping IP provisions. “I can't think of a major U.S. research university that would sign” such an agreement, says Karen Hersey of Franklin Pierce Law Center in Concord, New Hampshire, a former IP lawyer at the Massachusetts Institute of Technology. “The university is being asked to abandon its right to decide” what to do with its “uncommitted” IP. In fact, she says, the proposed scheme flies in the face of a host of accepted practices and constitutes a “massive reach” into federally funded research—possibly even violating a federal prohibition on discrimination by universities in making available the results of federally funded research. It's an “aggressive” proposal, agrees Robert Cook-Deegan of the Duke Institute for Genome Sciences & Policy in Durham, North Carolina. “It calls for the university to give the institute pretty much worldwide exclusive rights to anything that hasn't already been licensed to somebody else.”

    Mann foundation CEO Stephen Dahms, a former chemistry professor at San Diego State University, says that those who would reject the philanthropy's proposal don't know what's good for them. He says the institutes would cherry-pick only “a very limited subset of university IP”—perhaps two projects a year. The North Carolina institutions, he asserts, suffer from a “limited perspective on intellectual property access and other factors. … The lawyers warned us, but we thought we could overcome their traditional conservative ways of doing things.”

    “Universities are just not capable of making these business decisions,” says Dahms. In a 21 April letter to the Chronicle of Higher Education, which reported in March that several universities had bristled at the IP provisions, Mann explained that universities are getting low rates of return on research investments because “professors have no concept of what it takes to bring a product to market” and technology transfer offices “often don't know how to find the right partner.” A number of universities, including Johns Hopkins in Baltimore, Maryland, Emory in Atlanta, Georgia, and the University of Minnesota, have held preliminary discussions with the Mann foundation, but Dahms says no formal proposals have been made.

    USC's institute, which has received $170 million from the foundation, has several projects nearing the marketing stage, including a noninvasive heart-output monitor and a hair follicle-sized chemical biosensor. But the IP arrangements are less rigorous than those in the proposed North Carolina agreement. “Under no conditions would I undertake something without cooperation of the inventor,” says institute director Peter Staudhammer. Although proposed IP policies have become more rigorous since the USC agreement—“Mr. Mann wants to be certain these institutes are kept in an evergreen mode,” says Dahms—he emphasizes that the foundation is further revising its policies.


    A Quiet Leader Unites Researchers in Drive for the Next Big Machine

    1. Adrian Cho

    As head of the design team for the International Linear Collider, Barry Barish has physicists around the globe pulling together. But can the governments of the world afford their enormous particle smasher?

    Straight shooter.

    Barish (second from left) draws praise for his openness and integrity.


    BATAVIA, ILLINOIS—Three years ago, particle physics was, like Julius Caesar's Gaul, divided into three parts. Physicists around the world agreed that they should build an International Linear Collider (ILC), a 30-kilometer-long particle smasher that would blast electrons into their antimatter partners, positrons, to produce new particles and probe a new high-energy frontier. But researchers in North America, Europe, and Asia had different conceptions of the multibillion-dollar machine, and accelerator physicists were developing two different technologies for its twin accelerators.

    Now, researchers from the three regions are working together, thanks in good measure to the efforts of Barry Barish, a soft-spoken 70-year-old who wears his silver curls down to his collar and lives near the beach in Santa Monica, California. A particle physicist at the California Institute of Technology (Caltech) in Pasadena, Barish chaired the panel that settled the divisive technology issue and heads the ILC's Global Design Effort (GDE). But Barish is no Caesar. He leads not through force and intimidation but through a subtle combination of personal persuasion and masterful organization.

    “You don't have to have a frown on your face and be a tough guy to get things done,” Barish says. Nevertheless, Barish's leadership skills mystify others. “When Barry works with a group, people come away feeling that they've arrived at an answer that they discovered for themselves,” says Michael Turner, a cosmologist at the University of Chicago in Illinois who has known Barish since Turner was a Caltech undergrad. “He has an ability to guide things with an invisible hand.”

    Barish will need a deft touch to manage the GDE, a largely virtual collaboration that stretches around the world and is itself a bold experiment in how science is done. By year's end, the GDE aims to produce a preliminary design. More important, the team intends to calculate a price—a figure that may determine whether the ILC ever gets out of the starting blocks politically.

    That number has to be reliable. Physicists are haunted by the demise of the Superconducting Super Collider (SSC), an even bigger machine in Waxahachie, Texas, whose cost ballooned from $4 billion to $10 billion before the U.S. Department of Energy (DOE) axed it, unfinished, in 1993. Barish says the ILC cost estimate will be certain to within plus or minus 20%. That claim may make some of his colleagues' palms sweat.

    Both outsider and insider

    Barish received his doctorate from the University of California, Berkeley, in 1962 and has tackled ever-larger projects throughout his career. In the 1970s, he led one of the first experiments at the Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, a study of particles called neutrinos that cemented his reputation as a physicist. In the 1980s, he directed MACRO, an experiment in a cave in Gran Sasso, Italy, that searched for exotic particles called magnetic monopoles. In the early 1990s, he spearheaded GEM, an experiment that would have run at the SSC.

    Barish may be known best for his work with the Laser Interferometer Gravitational-Wave Observatory (LIGO), a pair of exquisitely sensitive detectors in Hanford, Washington, and Livingston, Louisiana, designed to detect ripples in the fabric of spacetime. Run by Caltech and the Massachusetts Institute of Technology (MIT) in Cambridge, LIGO foundered in the early 1990s because of internal dissension and friction between management and the National Science Foundation (NSF). The agency considered killing the $500 million project and pressured Caltech to find a new director. In early 1994, Caltech administrators asked Barish to take the job.

    LIGO had been a “skunk works,” in which a few leaders made decisions secretively, says MIT experimental physicist and LIGO member Rainer Weiss. Barish immediately implemented a more open management structure, he says. Barish brought in more people and had every collaboration member write a description of his or her part of the project, Weiss says. Those became the “LIGO baseline,” a document that defined the experiment. Barish also made key decisions, such as changing the type of laser, that helped LIGO meet design specifications and start taking data, as it did last year.

    Barish has followed a circuitous path to the GDE directorship. In 2001, he co-chaired a DOE-sponsored committee that concluded that the United States should push to host the ILC. At the time, the global community was split over the “radio frequency cavities” that would accelerate particles in such a machine. Researchers in Germany were developing a design, dubbed TESLA, that used cavities made of chilly superconducting niobium; their counterparts in the United States and Japan were developing more-conventional copper cavities (Science, 21 February 2003, p. 1168). Community leaders formed an International Technology Recommendation Panel to choose between the “cold” and “warm” technologies and ultimately asked Barish, who had no ties to either side, to chair it.

    Many doubted that accelerator physicists would accept the panel's decision, as some had invested decades in one technology or the other. But when, in August 2004, the panel recommended the less powerful but also less demanding superconducting technology, the decision stuck—in large measure because Barish made the review exceptionally thorough and transparent, says NSF's Moishe Pripstein: “He focused on the process and was very open from the beginning so that people knew what the steps would be.”

    When it came time to pick a director for the design effort, Barish's name came up because he was familiar with the project but still had no ties to any particular camp, says Nobu Toge, an accelerator physicist at the Japanese particle physics laboratory KEK in Tsukuba. “Barry had the fewest enemies in this business,” Toge quips. So in March 2005, the ILC Steering Committee tapped Barish to direct the GDE. He now spends between a third and a half of his days flitting all over Europe, Asia, and North America on ILC-related work. “I need to see everybody, so I travel a lot,” he says. “Right now, I'm the face of this.”

    The process is personal

    Ask Barish's colleagues why he's an effective leader, and they invariably point to his interpersonal skills. “When I interact with him, I can tell he's listening to me,” Toge says. “He might answer yes, or he might answer no, but I can tell he's thought about what I said.” Chris Walter of Duke University in Durham, North Carolina, did his graduate work under Barish and says his former mentor has a knack for persuasion. “You would go into a conversation with him thinking that you did or didn't want to do something,” Walter says, “and you'd walk out agreeing with what Barry wanted and being somewhat baffled how that happened.”

    At the same time, Barish has a keen feel for process and organization, as his efforts with LIGO and the ILC technology panel demonstrate. Barish says he learned those skills through experience seasoned with a little purposeful study. “When we're talking about building a big project, we're talking about a bastardized version of how you build a bridge,” he says. “The trick is to inject just enough process so the project works, but not so much that it becomes so disciplined that people feel their work isn't their own.” Most important, Barish says, he strives to surround himself with talented people with whom he can work.

    Tunnel visions.

    The ILC will lay bare the new physics on the horizon, researchers say.


    Those closest to Barish attribute his success to particular personality traits. “He's extremely, rigidly ethical, and I think that underlying ethics is part of what makes him effective,” says Barish's daughter Stephanie, a multimedia producer in Venice, California. Samoan Barish, Barry's wife of 45 years and a social worker-psychoanalyst, says her husband possesses a self-confidence that lets him function under pressure. “Sometimes we're our own worst enemies; we put obstacles in front of ourselves,” she says. “Barry doesn't do that. He's not afraid.”

    For his part, Barish says he's comfortable making decisions because a scientific collaboration is not a democracy and because it's almost always better to keep a project moving than to sweat every detail. Barish admits he makes mistakes. But most mistakes can be overcome, he says.

    The ILC: How much?

    That may be, but physicists say they must get the cost of the ILC right the first time. “We're being conservative, because we know it's better to have the cost come down than go up,” says Peter Garbincius, a particle physicist at Fermilab and the GDE's lead cost engineer. “On the other hand, you can't start with too high a number and give [funding agencies] a chance to say no at the beginning.” Some say they need more time, but Brian Foster, a particle physicist at Oxford University in the U.K., says, “The time is about right to start injecting a real element of price so people can begin the process of eliminating bells and whistles.”

    In December, the GDE adopted a baseline document for the collider, and researchers are fleshing it out in a more detailed reference design. A few key technical questions remain, but work now centers on production issues, such as how to manufacture the ILC's 16,000 cavities—and at what cost.

    Above all, physicists vow not to repeat the mistakes made with the SSC. The cost of the SSC exploded in part because physicists did not design it for a specific site, and Waxahachie proved unexpectedly expensive, Barish says. Moreover, the original design pushed the technological limits, he says, and to ensure that the machine would work, researchers changed it in small but expensive ways. When estimating the cost of the ILC, physicists will consider specific “sample” sites, preferably close to existing labs, and will start with a conservative design, Barish says. “We're working much harder to capture all the costs in the beginning,” he says. “The only design changes we can envision are likely to help, not hurt.”

    Ironically, if the GDE succeeds, the unity Barish has helped forge is likely to recede, as nations vie to host the machine and haggle over how much each will pay for it. That struggle would be fought at the highest levels of government, but it's at least a couple of years away. For the moment, particle physicists are working toward a common goal, guided by the sure hand of a quiet gent who can tell you a thing or two about building bridges.


    Why the International Linear Collider?

    1. Adrian Cho

    Next year, physicists will start up the Large Hadron Collider (LHC) at the European particle physics laboratory CERN near Geneva, Switzerland. Most expect it to blast out the long-sought Higgs boson, the particle thought to give others their mass, and perhaps a slew of other finds. But only the International Linear Collider (ILC) could precisely measure the new particles' properties and chart the conceptual terrain, researchers say (Science, 21 February 2003, p. 1171). The LHC will collide protons, which consist of smaller particles called quarks and gluons, and produce immense sprays of particles. By smashing fundamental electrons and positrons, the ILC would produce much simpler collisions and give physicists far greater control over the energy and spin of the colliding bits of matter.


    The HapMap Gold Rush: Researchers Mine a Rich Deposit

    1. Jennifer Couzin

    Scientists are parsing a raft of new data on genetic variation for clues to disease and evolution

    CAMBRIDGE, MASSACHUSETTS—For a conference on the next generation in genomics, the setting was just right: a pristine auditorium in a gleaming new building near the Massachusetts Institute of Technology (MIT). More than 200 people gathered here at the Broad Institute earlier this month to discuss the HapMap, a database cataloging human genetic variation. Begun in 2002, the map has been assembled primarily to boost the analysis of inheritance using pieces of DNA that are often transmitted as intact blocks.

    Deciphering disease.

    Aided by the HapMap, researchers are finding gene variants that may help explain diabetes and other conditions.


    Nearly complete, the HapMap is now being tested for a number of uses: to find genetic variants behind common diseases, to examine the genome's architecture, and to study natural selection. The human HapMap has even inspired the launch of a parallel effort for Plasmodium falciparum, the deadly malaria parasite.

    Five countries kicked in about $138 million to fund the human project, properly known as the International HapMap Project. One early challenge was to allow for the fact that haplotypes differ somewhat across populations. To include a sweep of variants, the HapMap gathered DNA from 270 individuals of African, Japanese, Chinese, and European ancestry.

    The final version, which will be completed this fall, is slated to include more than 4.5 million single-nucleotide polymorphisms (SNPs). Although the SNPs themselves aren't necessarily contributors to disease, they may travel alongside other SNPs that are. Most of the map is already freely available online, and it is being used “a lot,” says Francis Collins, director of the U.S. National Human Genome Research Institute (NHGRI) in Bethesda, Maryland, who is one of the map's biggest proponents. Collins says various National Institutes of Health institutes are being “flooded” with funding applications that involve using HapMap data.

    Despite such enthusiasm, some researchers say they're not certain just how the HapMap will aid their own genetic studies. The map's central goal is to help identify genes behind common diseases such as cancer, but it's not always clear how to apply it. When it comes to evolution studies, for example, the map may be biased because it prefers common SNPs to rare ones. “The HapMap project was not about studying population history,” says NHGRI's James Mullikin. But it's being used often by researchers in that area.

    As for disease genes, “it's still a bit early” to expect new findings, says Aravinda Chakravarti of Johns Hopkins University in Baltimore, Maryland, one of the project's leaders. HapMap-related studies are just ramping up, however, and a few are hitting on new results. At the meeting, for example, a postdoc at the Broad Institute, Robert Graham, reported a gene variant linked to lupus that he found while working in the Broad lab of David Altshuler, one of the HapMap's leaders.

    John Todd of Cambridge University in the U.K. and postdoc Jason Cooper described a variant associated with type 1 diabetes, discovered by scanning more than 6500 SNPs in samples from thousands of type 1 diabetes patients and controls. With help from HapMap data, Todd's group homed in on a SNP on chromosome 2 that they believe may help drive diabetes, although they couldn't rule out effects from other SNPs nearby. The work appeared 14 May in the online edition of Nature Genetics.

    Another test for the HapMap will come this fall when David Hafler, a multiple sclerosis (MS) researcher at Harvard Medical School in Boston and the Broad Institute, and colleagues worldwide plan to complete the initial phase of the first HapMap-guided whole genome scan for a human disease, MS. The outcome “will provide a map for what to study” in MS basic research, Hafler predicts.

    The massive HapMap database is inspiring large collaborations as well as projects that take a big-picture look at the genome. Some search for changes in gene expression, inherited gaps in DNA, or patterns among so-called recombination hotspots, where matching chromosomes swap DNA more often than usual. Simon Myers, formerly a postdoc with statistician Peter Donnelly at Oxford University and now at the Broad Institute, examined more than 9000 hotspots found using the HapMap and a similar map by Perlegen Sciences Inc. in Mountain View, California.

    Myers and his Oxford colleagues matched each of their hotspots to a nearby “coldspot” where DNA rarely recombines. They found two DNA motifs in particular that were common in hotspots. One seven-base sequence explained 10% of hotspots examined. The motif appeared to boost the chance of a hotspot in certain DNA stretches by up to five times.

    Another group at the Broad Institute is examining data that were sequenced and publicly released by HapMappers but didn't make it into the final HapMap because they were deemed erroneous. “This project is kind of a dumpster dive,” says the Broad Institute's Steven McCarroll. He and his colleagues found that thousands of the flaws are actually inherited DNA deletions. They've identified 10 commonly deleted genes, including two for sex steroid hormone metabolism and three for drug metabolism. They're now studying whether those deletions might contribute to disease.

    Finally, in answer to the commonly asked question, “What now?” several groups are turning from humans to parasites. Dyann Wirth of Harvard School of Public Health in Boston is leading this latest haplotyping effort, which seeks to index genetic variation in P. falciparum by examining DNA samples collected from South and Central America, Asia, and Africa. So far, the group has identified 55,000 potential SNPs. Data like these, if they hold up, may help uncover new drug targets and explain drug resistance and the “functional effects of mutations,” says Philip Awadalla of North Carolina State University in Raleigh, who's also studying P. falciparum's gene variation.


    Who Can Read the Martian Clock?

    1. Richard A. Kerr

    Researchers squabble over how to date martian geology by tallying impact craters. They're down for the count

    A local.

    Debris from a larger impact blasted open 210-meter Bonneville crater on Mars.


    HOUSTON, TEXAS—The concept sounds simple enough: To decipher the geologic history of other bodies in the solar system, count craters formed by the slow rain of bombarding rocks. The more craters on a lava flow, glacial debris, or a flood deposit, the farther back in time a volcano erupted, ice flowed, or water gushed. In practice, however, telling geologic time beyond Earth has proved tricky. For the earliest days of the solar system, researchers are even wrangling over the way the impacts took place (see sidebar).

    At a “microsymposium”* here in March, about 125 planetary scientists deadlocked over how to apply crater-dating techniques to recent Mars history. There was “no real progress in terms of mutual understanding,” laments Gerhard Neukum of the Free University Berlin in Germany.

    For more than 30 years, Neukum has been building the case that the overwhelming majority of craters planetary scientists can see resulted from a steady, uniform rain of impactors from the asteroid belt. If you count craters carefully on the moon, where rocks brought back to Earth have been used to date impacts, then crater counts on Mars—or Mercury, or the asteroids—will produce good ages, he has argued. Lately, however, other scientists, both relative newcomers and old hands in the cratering business, have challenged that view. “Gerhard has a very fixed way of looking at things,” says asteroid specialist Clark Chapman of the Southwest Research Institute (SwRI) in Boulder, Colorado. “I don't believe it.”


    Zunil (above) produced millions of secondary craters, like the moon's (right).


    Chapman and some other American researchers argue that most of the smaller craters in Neukum's counts were created not by rocks from the asteroid belt but by impact debris falling back to Mars, and therefore have little to do with telling time. In that case, Neukum could be off by orders of magnitude in his ages of younger features such as icy flows and gullies, which speak of an active Mars in the geologic here and now.

    “I really don't understand what is going on over there,” responds Neukum, referring to his American critics. “It's absolutely stupid nonsense. Somewhere, they are making a terrible mistake.”

    Mother of millions

    When meteoriticists noticed in the 1980s that a handful of meteorites in their collections had been blasted off Mars by impacts, researchers wondered how many more chunks of impact debris fall back on Mars and create craters. The more of these smaller, secondary craters scientists counted as asteroidal, the less accurate the crater-counting clock would be.

    Planetary scientist Alfred McEwen of the University of Arizona in Tucson and eight colleagues reported last year in Icarus that at least one impact on Mars created a huge number of secondaries. They counted secondary craters created by the debris from the formation of a 10-kilometer-wide crater named Zunil. Secondary craters can be hard to distinguish from so-called primary craters made by rocks coming out of the asteroid belt. But Zunil's secondary craters are so young that erosion has yet to obliterate their distinctive blankets of debris thrown out when they formed. At the workshop, McEwen and his colleagues upped their earlier estimate of the number of Zunil's 10-meter-and-larger debris chunks from 10 million to 100 million.

    Zunil “is just one little 10-kilometer crater,” says McEwen. If other primary craters produce as many secondary craters, he says, then most craters 10 meters to a few hundred meters in diameter on Mars are secondaries. That is just the size range that crater counters must use to date small geologic features such as glacial deposits and very young features such as water-cut gullies. Neukum and his colleagues have dated glacial flows at 4 million years ago, a geologic yesterday, and lava flows at 5 million years.

    “I give [Neukum] credit for doing the best job that can be done,” McEwen says, but adds, “I'm really concerned. It's very hard to distinguish primaries and secondaries. Given a few million years, you'd never know which is which.” Neukum is inevitably mistaking secondary craters for primaries both on the moon and on Mars, McEwen says. As a result, when Neukum sees a sparsely cratered feature, he thinks it's very young. In fact, it could be much older—as old as 100 million years, said McEwen. It may just not yet have been splattered by a large, infrequent primary crater.

    Returning fire

    Neukum disagrees. At the meeting, he and Stephanie Werner, a student of his who just received her Ph.D., argued that only a few percent of small craters result from secondary impacts. “So we don't care,” said Neukum. “They are not important.” Their evidence comes from the source of primary impactors, the asteroid belt. Neukum has counted craters on the 18-kilometer-long asteroid Gaspra, imaged by the Galileo spacecraft. “There are no secondaries on this body,” he says. That's because Gaspra's feeble gravity can't pull impact ejecta back onto itself to form secondaries, Neukum notes. The craters pockmarking it must reflect the sizes of the rocks produced by eons of colliding asteroids. By Neukum's count, the number of Gaspra craters goes up sharply with decreasing crater size. For example, for every 1-kilometer crater on the asteroid, there are about 1000 100-meter craters.
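The ratio Neukum cites implies a steep cumulative size-frequency relation. A toy sketch of that relation follows; the single power-law form, the slope, and the `n_1km` normalization are illustrative assumptions for this example, not Neukum's actual production function.

```python
# Toy version of the cumulative crater size-frequency relation implied by
# Neukum's Gaspra counts: ~1000 craters at >=100 m for every crater at >=1 km.
# A single power law N(>=D) = n_1km * D**slope with slope = -3 reproduces
# that ratio; the real production function is more elaborate.

def cumulative_count(d_km, n_1km=1.0, slope=-3.0):
    """Expected number of craters at least d_km kilometers across,
    normalized so that N(>= 1 km) = n_1km."""
    return n_1km * d_km ** slope

ratio = cumulative_count(0.1) / cumulative_count(1.0)
print(ratio)  # ~1000 craters of >=100 m per >=1 km crater
```

With slope −3, each factor-of-10 drop in crater diameter multiplies the cumulative count by 1000, which is exactly the 1-km-to-100-m ratio quoted in the text.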

    Neukum and Werner see the same preponderance of small primary craters on the moon and on Mars. They've counted secondary craters from Zunil and those from “many Zunils” on the moon. Despite the differences in gravity, the results are the same as in the asteroid belt. “We had one kind of impactor cratering the inner solar system in the same way,” says Neukum, down to 10-meter craters. Because the proportions of small and large craters stay the same as on Gaspra, he says, “the admixture of secondaries cannot be true.” In addition, Neukum's cratering rates produce good matches between small-crater dating and sample-dating of young impacts on the moon. “It fits extremely well,” he says. “It all fits.”

    Crater counter William Hartmann of the Planetary Science Institute in Tucson agrees that it fits—at least well enough. Hartmann, who co-authored the now-standard 2001 cratering chronology with Neukum, throws out the obvious secondaries—the ones clustered together—and counts the rest as a rough guide to the age of the youngest features. “I think things look reasonably good,” he says.

    Where other crater counters err, says Neukum, is in the geology. “Many people have not been very careful,” he says. “Mars is [geologically] complicated. You find very few areas you can use in crater counting.” Erosion may have erased some craters, or crater counters may have unwittingly included surfaces of two different ages. “You can get any kind of [crater size] distribution,” he says.

    Signs of secondaries?

    At the symposium, some researchers begged to differ over the fine points of crater counting. Chapman and his SwRI colleague planetary dynamicist William Bottke reported on their new perspective on Gaspra. “One of Gerhard's biases is being conservative about recognizing something as a crater,” says Chapman. “Gerhard doesn't see [craters] if they aren't sharp and fresh. We now believe the fresh craters have got to be an anomaly due to very recent cratering.” Perhaps ejecta from an impact on a nearby asteroid hit Gaspra, Bottke suggests. In any case, the fresh craters are in fact secondaries of a sort, Chapman and Bottke say, contrary to Neukum's contention. The remaining craters represent the asteroidal source of Mars impactors, they say, and show no sign of Neukum's preponderance of small impactors.

    Close-up images from the Mars rover Spirit suggest that secondaries dominate small craters on Mars, too. Geologist Matthew Golombek of the Jet Propulsion Laboratory in Pasadena, California, a Spirit team member, reported on a survey of impact craters measuring from 10 centimeters to a couple of hundred meters across. Spirit found that they are all far shallower and less bowllike than primary craters tend to be. The high speed of impactors falling in from the asteroid belt makes for relatively deeper craters than those made by slower ejecta blocks of the same size. “Almost every crater you see looks like a secondary,” said Golombek.

    So who's right? McEwen, who is a co-investigator on the Lunar Reconnaissance Orbiter (LRO) camera team, proposed a test to find out. LRO could image the same areas of the moon as Apollo orbiters did at the same resolution and under the same lighting. If Neukum is right about small bits of asteroids being so abundant, LRO should find about 50 new craters formed during the 39 years since Apollo. If impact ejecta from rare large impacts form most small craters, as McEwen believes, LRO will find no new craters. LRO launches in October 2008.
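The proposed test hinges on simple counting statistics. The two predicted outcomes (~50 new craters versus essentially none) come straight from the article; treating new craters as a Poisson process is my assumption for this sketch, not something the article states.

```python
import math

# Sketch of why the proposed LRO test is decisive. If small asteroidal
# impactors are as abundant as Neukum's rates imply, ~50 new craters are
# expected in the re-imaged areas since Apollo; if rare large primaries
# supply most small craters, essentially none are. Modeling crater
# formation as a Poisson process (an assumption for this sketch) shows
# how cleanly an observation of zero separates the hypotheses.

def prob_zero_new_craters(expected_count):
    """Poisson probability of observing no new craters
    when expected_count are predicted."""
    return math.exp(-expected_count)

print(prob_zero_new_craters(50.0))  # vanishingly small, ~2e-22
```

Under the steady-rain prediction, finding zero new craters would be a roughly one-in-10^21 fluke, so either outcome of the LRO comparison would effectively settle the question.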

    • *Microsymposium 43, The Martian Time Scale: Craters, Meteorites, Processes, and Stratigraphy, 11–12 March, sponsored by Brown University and the Vernadsky Institute, Moscow.


    Bombardment Looking “Possible”

    1. Richard A. Kerr

    Although researchers at the symposium deadlocked over Mars's recent cratering history (see main text), they may have made progress with a related question: how cratering took place during the planet's first billion years.

    Since Apollo astronauts brought rocks back from the moon, planetary scientists have debated two diametrically opposite views of early cratering. One interpretation of lunar sample ages describes a “late heavy bombardment” 3900 million years ago, in which a swarm of debris with chunks ranging in size up to 100 kilometers across pummeled the moon and the rest of the inner solar system for a brief 80 million years. The other camp holds that cratering slowly declined after the formation of the solar system 800 million years earlier and that the battering recorded on the moon just marks the tail end of a drawn-out process.

    At the symposium, cataclysm versus decline came in for considerable discussion. The cataclysm hypothesis “cannot be true. There is a flaw in the interpretation,” declared cratering specialist Gerhard Neukum of the Free University Berlin. Geochronologist Donald Bogard of NASA's Johnson Space Center in Houston, Texas, then offered a compromise scenario: a much broader spike in cratering about 3900 million years ago, lasting a few hundred million years. “I admit it's a possibility,” Neukum conceded, eliciting a ripple of applause.

    A few days later, at March's Lunar and Planetary Science Conference in Houston, geochemist Dustin Trail of the University of Colorado, Boulder, and colleagues offered new support for a cataclysm in their analyses of six zircon grains over 4 billion years old from western Australia. Zircon is a highly durable mineral, but when severely heated, as by a large impact, it can form an enveloping overgrowth. Four of the six Australian grains had such rims, all dating to 3900 million years ago. None had rims older than that—evidence for a concentrated bombardment rather than a leisurely tapering off.

    At the Planetary Chronology Workshop in Houston this week, planetary dynamicists William Bottke of Southwest Research Institute in Boulder and Alessandro Morbidelli of the Observatory of the Côte d'Azur in Nice, France, described their computer simulation in which the new planets sweep up or fling away the leftovers from planetary formation. In the model, the pool of available impactors shrinks too fast to produce all the craters recorded on the moon 3900 million years ago. If that was the case, the decline scenario “cannot work,” Bottke says.

    “I feel that I'm getting pushed toward a cataclysm by data,” says geochronologist Timothy Swindle of the University of Arizona, Tucson. “The pause [in cratering] is there and then a spike that hangs around longer. I think we're getting there.”

    Energy Deregulation: Licensing Tumors to Grow

    1. Ken Garber*
    * Ken Garber is a science writer in Ann Arbor, Michigan.

    Taking their cue from a controversial, 80-year-old theory of cancer, scientists are reexamining how tumors fuel their own growth and finding new ways to cut off their energy supply

    Sugar rush.

    PET scans reveal tumors (arrows) by highlighting areas of increased glucose uptake.


    In a widely cited paper published 6 years ago, cancer biologists Robert Weinberg of the Massachusetts Institute of Technology and Douglas Hanahan of the University of California, San Francisco, described six hallmarks of cancer cells, including their ability to invade other tissues and their limitless potential to replicate. Last month, at the annual meeting of the American Association for Cancer Research, Eyal Gottlieb opened a lecture with this provocative claim: “I believe I'm working on the seventh element, which is bioenergetics.”

    Gottlieb, a biologist at the Beatson Institute for Cancer Research in Glasgow, U.K., notes that tumor cells need an unusual amount of energy to survive and grow. “The overall metabolic demand on these cells is significantly higher than [on] most other tissues,” he says.

    Tumors often cope by ramping up an alternative energy production strategy. For most of their energy needs, normal cells rely on a process called respiration, which consumes oxygen and glucose to make energy-storing molecules of adenosine triphosphate (ATP). But cancer cells typically depend more on glycolysis, the anaerobic breakdown of glucose into ATP. This increased glycolysis, even in the presence of available oxygen, is known as the Warburg effect, after German biochemist Otto Warburg, who first described the phenomenon 80 years ago. Warburg thought this “aerobic glycolysis” was a universal property of cancer, and even its main cause.

    Warburg won a Nobel Prize in 1931 for his earlier work on respiration, but his cancer theory was gradually discredited, beginning with the discovery of tumors that didn't display any shift to glycolysis. Ultimately, the ascendancy of molecular biology over the last quarter-century completely eclipsed the study of tumor bioenergetics, including Warburg's ideas. The modern view of cancer is that it's a disease of genes, not one of deranged energy processing.

    Now, a revival in research on tumor bioenergetics suggests it could be both. A growing stream of papers is making the link between cancer genes and the Warburg effect, indicating that bioenergetics may lie at the heart of malignant transformation. For example, in a paper published online by Science this week, Paul Hwang's group at the National Heart, Lung, and Blood Institute in Bethesda, Maryland, reveals that p53, one of the most commonly mutated genes in cancer, can trigger the Warburg effect. And last year, Arvind Ramanathan and Stuart Schreiber of the Broad Institute in Cambridge, Massachusetts, reported that in cells genetically engineered to become cancerous, glycolytic conversion started early and expanded as the cells became more malignant. They concluded that the cancer-gene model and the Warburg hypothesis “are intimately linked and fully consonant.”

    This idea remains controversial. Weinberg, for example, is a prominent skeptic. In his view, the Warburg effect and related metabolic changes are consequences of cancer, not major contributors to it: “It is a stretch to say that all this lies at the heart of cancer pathogenesis.” Nevertheless, several companies and labs are now testing anticancer drugs designed to exploit the bioenergetics of tumors.

    A new model of cancer

    The revival in cancer bioenergetics began in the mid-1990s when radiologists showed that positron emission tomography (PET) imaging could detect and map many tumors. In PET, an injected glucose analog highlights tumors, which are hungrier for glucose than normal cells are. “PET imaging,” says Schreiber, “suggests that the glycolytic switch even precedes the angiogenic switch”: the point at which tumors begin making their own blood vessels.

    Other evidence for metabolic differences in cancer accumulated at about the same time. Gregg Semenza of Johns Hopkins School of Medicine in Baltimore, Maryland, showed that a protein, hypoxia-inducible factor-1 (HIF-1), raised levels of glycolytic enzymes in cells lacking oxygen, and many hypoxic tumors contain elevated levels of HIF-1 (Science, 5 March 2004, p. 1454). In 1997, Chi Dang, also at Johns Hopkins, reported that the myc oncogene could turn on glycolysis. Furthermore, genes involved in energy production are mutated in several rare familial cancer syndromes.

    One way that cancer cells might increase glycolysis is through Akt, an important pro-survival signaling protein. In 2004, Craig Thompson, a cancer biologist at the University of Pennsylvania, reported that activated Akt, independent of HIF-1, could switch cancer cells to glycolysis. Akt had earlier been shown to induce glucose transporters to take glucose into the cell, and Nissim Hay of the University of Illinois, Chicago, showed that Akt signals a glycolytic enzyme, hexokinase, to bind tightly to mitochondria, the organelles in which most of the cell's ATP is normally made during respiration. This allows hexokinase to use ATP from mitochondria to jump-start glycolysis. Thompson has since linked Akt to other glycolytic functions.

    Thompson's model of how tumors make energy starts with upstream gene mutations that activate Akt and ends with cancer cells continuously consuming glucose, both aerobically and anaerobically. Others propose that cancer cells rely almost completely on glycolysis and largely shut down respiration, as Warburg originally reported. Glycolysis, however, is far less efficient than respiration, producing two ATPs per glucose molecule versus roughly 36, which raises the question of how cancer cells benefit from the Warburg effect. “Is there a selective advantage?” asks Ajay Verma, a biologist at the Uniformed Services University of the Health Sciences in Bethesda, Maryland. “That hasn't been answered very well.”
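The efficiency gap translates into a simple fold-change in glucose demand; a back-of-envelope sketch, using the article's round ATP yields rather than exact stoichiometry:

```python
# Back-of-envelope look at the efficiency gap described in the text:
# ~2 ATP per glucose from glycolysis versus ~36 from respiration
# (the article's round numbers, not precise stoichiometry).

ATP_PER_GLUCOSE_GLYCOLYSIS = 2.0
ATP_PER_GLUCOSE_RESPIRATION = 36.0

def glucose_needed(atp_demand, atp_per_glucose):
    """Glucose molecules a cell must consume to meet a given ATP demand."""
    return atp_demand / atp_per_glucose

demand = 1000.0  # arbitrary ATP demand; it cancels out of the ratio
fold_increase = (glucose_needed(demand, ATP_PER_GLUCOSE_GLYCOLYSIS)
                 / glucose_needed(demand, ATP_PER_GLUCOSE_RESPIRATION))
print(fold_increase)  # ~18-fold more glucose for a purely glycolytic cell
```

That roughly 18-fold extra appetite for glucose is consistent with the avid uptake that makes tumors light up on PET scans.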

    Cancer cells could benefit from glycolysis in many ways. Gottlieb and Thompson contend that a boost in glycolysis, added to respiration—which continues unabated—generates more energy more quickly than in normal cells that overwhelmingly rely on respiration. And because a glycolytic cancer cell is constantly slurping up nutrients, whereas a normal cell typically needs outside signals for permission to do this, such energy independence “empowers the [cancer] cell to grow,” says Thompson. It doesn't need to break down amino acids and fatty acids to generate energy as most normal human cells commonly do and can turn them instead into the proteins and lipids necessary for growth.

    Other potential benefits: Verma's work suggests that glycolysis leads directly to HIF-1 activation, which further boosts metabolism, and also stimulates angiogenesis and invasiveness. And in cases in which respiration is impaired, Dang suggests that shutting it down protects cancer cells from mitochondria damage that occurs when cellular respiration functions abnormally under hypoxic conditions.

    But does the Warburg effect cause cancer, as Warburg claimed? Probably not. “The glycolytic shift is not absolutely required for transformation,” says Thompson. But, he adds, it gives cancer cells “a higher metastatic potential and a higher invasive potential … because they're now cell-autonomous for their own metabolism.” Gottlieb agrees: “I believe [increased glycolysis] is important for sustaining tumors rather than inducing them.”

    Causality may not matter much when it comes to therapies. After all, angiogenesis doesn't cause cancer, but blocking it can stop cancer growth. Many early events in cancer “may not be relevant at the stages where we start treating those tumors,” notes Gottlieb. “Well, the bioenergetic demand will always be there and will always be required.”

    Powering down.

    (1) Hexokinase (HK) inhibitors interfere with the first step in glycolysis; (2) Drugs dissociating hexokinase from the mitochondrial membrane cause apoptosis and interfere with growth pathways; (3) Inhibitors of pyruvate dehydrogenase kinase (PDK) funnel pyruvate into defective respiratory machinery and cause apoptosis; (4) Inhibitors of ATP citrate lyase (ACL) cause citrate to build up, inhibiting glycolysis. (PDH = pyruvate dehydrogenase).

    Energy crisis

    Drugs targeting tumor bioenergetics are on the way. Most exploit a tumor's increased reliance on glycolysis. Threshold Pharmaceuticals Inc., a biotech company in South San Francisco, California, is already testing two such drugs in cancer patients: a chemotherapy compound conjugated to glucose, and a glucose analog that cannot be metabolized, thus shutting down glycolysis.

    Hexokinase, because it catalyzes the first step in glycolysis and can block cell death, is another key target. Hay, for example, proposes that drugs causing hexokinase to separate from mitochondria could treat cancer by damping down glycolysis, by indirectly blocking a signaling molecule called mTOR, and by triggering apoptosis through another mechanism. Directly inhibiting the enzyme is another strategy. In 2004, Johns Hopkins researchers reported that a hexokinase inhibitor, 3-bromopyruvate, completely eradicated advanced glycolytic tumors in all mice treated. Chemists at the M. D. Anderson Cancer Center in Houston, Texas, are now developing 3-bromopyruvate analogs for eventual clinical trials.

    Energy blocker.

    The large tumor on a rat's back (left, arrow) disappeared (right) after treatment with an experimental drug that interferes with cellular energy production.


    Other potential drug targets exist. Last year, Thompson identified an enzyme, ATP citrate lyase, that allows cancer cells to overcome a natural check on glycolysis. Inhibiting it blocks growth of tumors in mice. And this March, Dang and Nicholas Denko of Stanford University in California separately reported that another enzyme, pyruvate dehydrogenase kinase (PDK), acts to shut down mitochondrial respiration and protect cells in low-oxygen conditions. “One can imagine that by blocking PDK activity we can actually trigger cells to commit suicide,” Dang says.

    Compounds that limit glycolysis would, in theory, kill cancer cells while sparing normal cells, which can burn amino acids and fatty acids for energy. “When [cancer] cells are engaged in high-throughput aerobic glycolysis, they become addicted to glucose,” says Thompson. “So if you suddenly take away their ability to do high-throughput glucose capture and metabolism, the cell has no choice but to die.”

    How prevalent are glycolytic tumors? Using PET imagery, which maps glucose uptake, as a surrogate for the Warburg effect, Thompson estimates that between 60% and 90% of tumors make the shift to glycolysis. Gottlieb contends that most tumors turn to glycolysis only after oxygen disappears. But their special bioenergetics, he agrees, make them targetable by drugs.

    Some remain dubious. Michael Guppy, a biochemist recently retired from the University of Western Australia in Perth, even contends that the Warburg effect is a myth. Many researchers reporting the Warburg effect, Guppy says, do not accurately measure oxygen consumption in their cancer cells, sometimes ignoring the fact that cells can break down other molecules besides glucose to generate ATP. As a result, he contends, they overestimate the role of glycolysis. In a 2004 paper analyzing studies he found meeting his criteria for accuracy, Guppy reported that cancer cells, on average, were no more glycolytic than normal cells. So “a strategy for controlling cancer that relies on cancer cells being the sort of cell that cannot use oxygen when it's available … is wrong,” he says. Dang agrees that oxygen consumption could be measured more carefully but says Guppy “has ignored some key work in high-impact journals that negate his contention.” He adds that the fact that PET detects tumors is more evidence for a high level of glucose uptake.

    Even those at the vanguard of tumor bioenergetics acknowledge, however, that they must fully demonstrate how tumors inherently switch to glycolysis to meet energy needs. “We're still in the middle of absolutely proving that [system],” says Thompson. “It's a much more complex and dynamically regulated thing than anything else that we study in biology today.” Until the results are in, the seventh hallmark of cancer may have to wait.

  17. Autophagy: Is It Cancer's Friend or Foe?

    1. Jean Marx

    Cells rely on autophagy to recycle their components, and much evidence favors the idea that this “self-eating” suppresses tumor development. But other data suggest that autophagy fosters tumor development and actually protects cancer cells from treatments

    Environmentalists have long proclaimed the importance of recycling. Now cell biologists are delivering a similar message. Within the past few years, they have been working out the genetic and biochemical underpinnings of a cellular recycling system known as autophagy, and what they are learning could shed light on a variety of diseases—cancer among them.

    Autophagy has long been known for its roles in protecting cells against stresses such as starvation and in eliminating defective cellular constituents, including subcellular structures such as the energy-generating mitochondria. It is essentially a form of self-cannibalism—hence the name, which means “eating oneself”—in which the cell breaks down its own components. Cells can then recycle the resulting degradation products, using them to provide the energy and cellular building blocks necessary for their survival (Science, 5 November 2004, p. 990).

    Boost for growth.

    As indicated by the black dots, which mark the nuclei of dividing cells, mammary duct cells from mice with one beclin-1 gene inactivated (right) show increased growth compared to duct cells from normal mice (left).


    The cancer connection, which has cropped up more recently, comes from several research teams that have found that autophagy appears to suppress tumor development in animals. This includes demonstrations that some tumor-suppressor genes stimulate autophagy and that certain cancer-causing oncogenes inhibit it. But although such work suggests that boosting autophagy will prevent or treat cancers, “the situation is not so clear,” cautions Patrice Codogno of the University of Paris-Sud in France.

    Indeed, researchers continue to wrestle with the crucial question of whether some tumors exploit autophagy in order to survive. The process is known to kick in when cells encounter nutrient shortages. And because cancer cells in a growing tumor can find themselves short of needed nutrients, inducing autophagy could give them a hand. There's also evidence that in some cases autophagy helps cancer cells fight off chemotherapeutic drugs, although in others it may be part of the drugs' killing mechanisms. “There's controversy about whether one should be turning autophagy on or off to treat cancer,” says Beth Levine of the University of Texas Southwestern Medical Center in Dallas.

    Early neglect

    Suspicions that autophagy plays a role in cancer first arose about 3 decades ago when researchers noted that cancer cells seemed deficient in the process compared to normal cells. They made this determination either by measuring the rates of degradation of long-lived proteins or by looking for the characteristic double-membraned vacuoles that form in cells undergoing autophagy. These vacuoles encircle the cellular cargo destined for degradation and then fuse with lysosomes, which carry a host of enzymes for digesting proteins and other materials.

    At the time, however, researchers couldn't do much with the cancer connection, mainly because the genes involved in autophagy hadn't been identified. The early work implicating autophagy defects was “largely ignored by the cancer community because the evidence was mainly correlative,” recalls Levine.

    That didn't change until the early 1990s when yeast researchers, particularly Yoshinori Ohsumi of the National Institute for Basic Biology in Okazaki, Japan, Daniel Klionsky of the University of Michigan, Ann Arbor, and Michael Thumm of Georg-August University in Göttingen, Germany, began teasing out the genes needed for autophagy in that simple eukaryote. So far, more than 20 yeast autophagy genes have been unearthed, several of which have counterparts in mammals.

    Levine, who didn't set out to study autophagy, identified one such counterpart in 1998, helping to spark the current wave of interest in autophagy's role in cancer. Her team found the gene, now called beclin-1, while screening for binding partners for the protein encoded by the mouse bcl-2 oncogene. The sequence of the new gene, which also occurs in humans, resembled that of the yeast autophagy gene 6 (ATG6), and the Dallas group showed that beclin-1 could repair the autophagy defect in yeast with no ATG6 activity.

    Even more interestingly, Levine and her colleagues noted that the human gene maps to a chromosomal location that's frequently deleted in ovarian, breast, and prostate cancers—an indication that the site harbors a tumor-suppressor gene. The Levine team also found that beclin-1 expression is much reduced in invasive breast cancer cells compared to normal cells.

    Unlike most tumor suppressors, only one copy of the cell's two beclin-1 genes is usually lost in breast cancer. So Levine and her colleagues recreated this situation by knocking out a single copy of the gene in mice. The resulting animals showed both a decrease in autophagy and an increased frequency of cancers of the lungs and liver as well as lymphomas, the team reported in the December 2003 Journal of Clinical Investigation. The mice also developed benign breast tumors.

    A team led by Arnold Levine (not related to Beth Levine) of the University of Medicine and Dentistry of New Jersey-Robert Wood Johnson Medical School in New Brunswick and Nathaniel Heintz of Rockefeller University in New York City reported similar results the same month in the Proceedings of the National Academy of Sciences. That “pinpoints that beclin-1 is a tumor suppressor,” says team member Shengkan Jin of the Robert Wood Johnson Piscataway campus.

    Other researchers have connected previously identified tumor-suppressor genes to autophagy regulation. Codogno's team showed that one of these is PTEN, which inhibits a major cell-growth stimulatory pathway and, as a result, a protein called TOR. Because TOR normally curbs autophagy, the process gets ramped up. Jin, Arnold Levine, and their colleagues have shown that the well-known tumor-suppressor gene p53 also increases autophagy by inhibiting TOR. And Adi Kimchi and her colleagues at the Weizmann Institute of Science in Rehovot, Israel, have found that the tumor suppressor known as DAPK (for death-associated protein kinase) is yet another autophagy stimulator.

    Conversely, many oncogenes appear to inhibit autophagy. Codogno and his colleagues have found that overactivity of the oncogene AKT curbs autophagy—the expected result because it promotes TOR activity. And last month in Washington, D.C., at the annual meeting of the American Association for Cancer Research (AACR), John Cleveland of St. Jude Children's Research Hospital in Memphis, Tennessee, presented as-yet-unpublished data from his lab indicating that the carcinogenic effects of the well-known myc oncogene are at least partly due to its ability to decrease autophagy. The evidence, obtained in collaboration with Michael Kastan, also at St. Jude, includes the finding that myc activity suppresses the expression of ATG genes 8 and 9. “Oncogenes suppress autophagy during tumor development,” Cleveland concludes.

    Autophagy in action.

    Stresses such as starvation lead to formation of double-membrane-bound autophagosomes that engulf cellular components destined for degradation by lysosome enzymes. The micrograph (inset) shows autophagosomes in yeast.


    Then there's bcl-2, the gene that helped Beth Levine and her colleagues identify beclin-1. This oncogene promotes the survival of cancer cells by inhibiting apoptosis, a process by which dysfunctional cells kill themselves. Evidence published by Levine and her colleagues last September in Cell raises the possibility that bcl-2 prevents autophagic cell death as well. Although autophagy is often protective, it can kill cells in some circumstances; they essentially digest themselves to death.

    The researchers showed that binding of the Bcl-2 protein to the protein product of beclin-1 inhibits autophagy. Levine suggests that this suppression by Bcl-2 helps keep autophagy in check under normal conditions. But if bcl-2 activity is excessive, as it is in some cancers, the consequent suppression of autophagy could allow damaged cells to complete a cancerous transformation.

    Killing abnormal cells is not the only way that autophagy could protect against tumor development, however. Autophagy's recycling ability can eliminate damaged cell components, especially organelles such as the mitochondria. Indeed, Klionsky says, it “is virtually the only way to get rid of whole organelles.” Getting rid of defective mitochondria, which release abnormally large amounts of DNA-damaging reactive oxygen species, could help protect cells against cancer-causing mutations, Jin and others suggest.

    Heart of the controversy

    Although many cancer therapies are thought to kill tumor cells by inducing apoptosis, researchers have begun to find signs of autophagy in tumor cells exposed to chemotherapy or radiation. The chemotherapeutic drugs that appear to trigger autophagy include tamoxifen, rapamycin, and arsenic compounds. “Many treatments induce autophagy rather than apoptosis,” says Seiji Kondo of M. D. Anderson Cancer Center in Houston, Texas.

    The crucial question, however, is whether autophagy helps kill tumor cells or instead protects them from the therapies' cell-damaging effects. There's evidence on both sides.

    One example favoring a cancer-killing role for autophagy involves the chemotherapeutic drug rapamycin, which is a known TOR inhibitor. Kondo and his colleagues have found that the rapamycin sensitivity of cells derived from highly malignant brain tumors called gliomas correlates with the drug's ability to induce autophagy. The researchers also found that drugs that increase autophagy boost rapamycin's ability to kill glioma cells.

    In contrast, at the AACR meeting, Ravi Amaravadi, who works in Craig Thompson's lab at the University of Pennsylvania, presented results supporting the idea that autophagy can protect against a chemotherapeutic drug. This work involved genetically engineered mice that overexpress the myc oncogene and develop lymphomas as a result. The animals also carried an altered p53 gene that can be activated by treatment with tamoxifen. The resulting p53 activity induces apoptosis, leading to a temporary regression of the animals' lymphomas.

    The Pennsylvania team showed that chloroquine, a drug that previous studies had suggested was an autophagy inhibitor, enhanced the animals' responses to tamoxifen treatment. “Cancer cells, when faced with cytotoxic damage [from chemotherapy], turn to autophagy. It's a mechanism that cancer cells adopt to survive,” Thompson maintains.

    This situation is complicated by results Cleveland described in his AACR talk. His team found that chloroquine can suppress myc-induced lymphoma development in mice. But Cleveland and his colleagues also concluded that chloroquine is an autophagy inducer, not an inhibitor. So although their mouse results were similar, the two groups came to diametrically opposed interpretations about whether autophagy enables chemotherapies to kill cancer cells or instead protects them from the drugs.

    The answer to the question of whether inducers of autophagy will be good or bad for cancer therapies may vary depending on the nature of the cancer, the drug, or both. The drug temozolomide (TMZ), which is currently in clinical trials for treating gliomas, provides an illustration of this kind of complexity. Kondo's team found that a drug that inhibits the late stages of autophagy enhanced TMZ's antitumor effects, whereas a different drug that blocks an early stage of autophagy suppressed them. “We have to understand all the players to predict whether a therapy [promoting autophagy] will protect the cells or kill them,” Kimchi says. Obviously, autophagy researchers still have their work cut out for them.
