News this Week

Science  09 Jan 2009:
Vol. 323, Issue 5911, pp. 192

    Scientists Laud Bush's Blue Legacy But Want More

    1. Christopher Pala*
    1. Christopher Pala is a writer based in Hawaii.

    Sharks, dolphins, turtles, and birds, along with tuna, should profit from the new Pacific marine protected areas.


    Setting a middle course between the wishes of marine biologists and the concerns of the Pentagon and recreational fishers, President George W. Bush this week dusted off a little-used law for the second time in his Administration to protect swaths of ocean totaling an area the size of Spain.

    On 6 January, using the 1906 Antiquities Act, which gives the president unfettered authority to protect any place of scientific or historical interest—and scientists say intact island ecosystems constitute such places—Bush created three separate national monuments spread out over eight remote patches of the Pacific totaling 505,000 square kilometers. Although short of the 2.2 million square kilometers many marine scientists had advocated, the monuments consolidate Bush's legacy as the president who has preserved far more ocean than any other: more than 850,000 square kilometers.

    “It's terrific,” says Jay Nelson, director of Ocean Legacy at the Pew Environment Group, who lobbied for the Marianas monument. “It's the single largest marine conservation act in history.”

    The action culminated a yearlong effort that started following Bush's much-praised creation of the 362,000-square-kilometer Papahanaumokuakea Marine National Monument around the Northwestern Hawaiian Islands. Next, Bush asked James L. Connaughton, chair of the White House Council on Environmental Quality, whether other U.S. waters could be named national monuments. Connaughton met with representatives from a handful of marine conservation organizations, who then submitted proposals for a dozen sites. Fishing and mining groups knocked off those involving the protection of seamounts and canyons off California's coast; the waters off the island of Navassa between Haiti and Jamaica; and deep-sea coral beds off the Atlantic coast of South Carolina to Florida.

    In August, the White House announced the finalists: the waters around 11 remote, tiny, and uninhabited islands that were either lightly fished or not fished. This week, Bush disclosed the size of the monuments—they extend 92 kilometers from shore—and the degree of protection: generally no-take (no fishing allowed), under a management system to be fine-tuned later.


    The first monument, and perhaps the most interesting scientifically, consists of 246,000 square kilometers of ocean around the Mariana Islands, already nature reserves, just south of Japan. It includes the islands of Maug, Asunción, and Uracus and is rich in submerged volcanoes with an exceptional diversity of life forms. The monument also stretches east and south to include the Marianas Trench, the oceanic equivalent of the Grand Canyon—only five times longer—and a group of seamounts with more hydrothermal life than anywhere else, teeming with the oldest living things on Earth: bacteria. Asunción erupts so often that it provides a unique platform to observe the rapid birth, death, and rebirth of coral and other marine life. Maug's water is so acidic in places that “it's the only place where you can look at the effects of ocean acidification on a shallow reef in a natural way,” says Rusty Brainard, chief of the Coral Reef Ecosystem Division of the National Oceanic and Atmospheric Administration (NOAA) in Honolulu.

    The second and smallest monument surrounds Rose Atoll in American Samoa, the world's smallest atoll, which has some of the densest coral cover anywhere and hosts several seabird rookeries.

    For the third, Bush designated five separate areas in the Central Pacific, totaling 225,000 square kilometers. Wake Island and Johnston Atoll host military bases; the rest are National Wildlife Refuges: Howland and Baker in the Phoenix Islands; Jarvis, in the Line Islands; and Kingman Reef and Palmyra, also in the Line Islands. They have some of the greatest densities of fish in the world because the waters around them are unusually full of nutrients brought by upwelling currents. There, with the reefs already protected, the main beneficiaries are expected to be declining populations of tuna as well as the sharks, birds, turtles, and dolphins that are accidentally caught by longline fishing.

    The most significant opposition to making the monuments bigger came from the Pentagon. Nelson says several senior officials there expressed concern to him “that the designation of a monument over a big area could lead to future restrictions on their ability to operate effectively.” He said the officials cited lawsuits stemming from Bush's designation of the Northwestern Hawaiian Islands monument that restricted the use of sonar in war games.

    Eight groups of recreational fishers also expressed their displeasure in a letter to Connaughton, as did some politicians in the Commonwealth of the Northern Marianas. The Marianas Hotel Association and the Chamber of Commerce, on the other hand, enthusiastically endorsed the proposal as a way to “boost the local economy in promoting ecotourism.”

    Although scientists and conservationists are delighted with the new sanctuaries, they say bigger no-take areas are needed to counter increased fishing pressure in the Pacific. As a board member of Environmental Defense, marine biologist Jane Lubchenco of Oregon State University in Corvallis lobbied hard for Bush to protect the entire Exclusive Economic Zone around each island, which would have quadrupled the area protected. If confirmed as President-elect Barack Obama's choice to head NOAA, Lubchenco, also a past president of AAAS, which publishes Science, will oversee how all these monuments are managed. “We will continue to work with the next Administration to see if they can extend the area,” says Nelson.


    Higher Temperatures Seen Reducing Global Harvests

    1. Constance Holden

    Thousands of people died from the heat that baked western Europe in the summer of 2003. The heat wave also devastated the region's agricultural sector: In France, where temperatures were 3.6°C above normal, the country's corn and fruit harvests fell more than 25%. Thirty-one years earlier, another very hot summer shrank harvests in southwest Russia and Ukraine and led to a tripling in world grain prices.

    By the end of the century, two researchers predict, those summers may seem like cool ones, and the impact on agriculture will be even greater.

    In a paper appearing on page 240, atmospheric scientist David Battisti of the University of Washington, Seattle, and economist Rosamond Naylor of Stanford University in Palo Alto, California, apply 23 global climate models used by the Intergovernmental Panel on Climate Change to estimate end-of-century temperatures. Their conclusions with regard to agriculture are sobering. “In the past, heat waves, drought, and food shortages have hit particular regions,” says Battisti. But the future will be different: “Yields are going to be down every place.” Heat will be the main culprit. “If you look at extreme high temperatures so far observed—basically since agriculture started—the worst summers on record have been mostly because of heat,” not drought, he says.

    The models predict that by 2090, the average summer temperature in France will be 3.7°C above the 20th century average. Elevated temperatures not only increase evaporation but also speed up plant development, leaving crops less time to mature and thereby reducing yields, the authors note. Although rising temperatures may initially boost food production in temperate latitudes by prolonging the growing season, Battisti and Naylor say crops will eventually suffer unless growers develop heat-resistant varieties that don't need a lot of water. “You have to go back at least several million years before you find … temperatures” comparable to those being predicted, Battisti says.


    These wilting sunflowers in southwestern France show the effects of the exceptionally hot summer of 2003 that devastated agriculture.


    Just as France offered a glimpse of the future in temperate regions, says Naylor, the Sahel in Africa shows what life could be like in the tropics and subtropics, home to half the world's population. A generation-long drought in the region lifted in the early 1990s, but higher temperatures have remained, depressing crop and livestock production. The authors predict future production reductions of 20% to 40%, while the population in tropical regions is expected to double to 6 billion.
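
    Taken together (a rough back-of-envelope reading of the figures above, not a calculation the authors report), a 20% to 40% drop in production alongside a doubling of the tropical population would leave per-capita output from local farms at roughly a third of today's level, absent imports or other changes:

    \[
    \frac{\text{per-capita production, 2090}}{\text{per-capita production, today}} \;=\; \frac{0.6\ \text{to}\ 0.8}{2} \;\approx\; 0.30\ \text{to}\ 0.40,
    \]

    that is, a decline of roughly 60% to 70% per person.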

    The conclusions of the paper seem “reasonable,” says plant and soil scientist Peter Smith of the University of Aberdeen in the United Kingdom, who also does greenhouse gas modeling. Smith adds that future pressures on food supplies come not only from steadily growing populations but also from changes in food preferences, in particular, more people eating meat. “Demand for livestock products in developing countries will greatly increase over the next few decades,” says Smith. That trend, he says, represents “a switch to less efficient ways of feeding ourselves.”

    So developing heat-tolerant crops won't be enough to solve the problem of rising temperatures, he says. “We humans also need to change our behavior.”


    A New Spy Agency Asks Academics for Help in Meeting Its Mission

    1. Yudhijit Bhattacharjee

    One year ago, applied physicist Lisa Porter took over a new government agency charged with developing the next generation of spycraft technologies. In a rare interview, Porter, the director of the Intelligence Advanced Research Projects Agency (IARPA), discussed the agency's progress and plans with Science.

    Temporarily housed with the Center for Advanced Study of Language at the University of Maryland, College Park, IARPA expects by the end of the year to move into its own building on campus and double its size to 30 program managers. The academic location is no accident: The agency wants to build strong ties with academia through a continuing flow of grant solicitations. “The intelligence community likes to keep its business in the family—Lisa is trying to go beyond,” says Steve Nixon, former director for science and technology within the office of the Director of National Intelligence (DNI), which was created in 2004 and oversees IARPA.

    Porter was working at a national security think tank in northern Virginia when the terrorist strikes of 11 September 2001 made her “look in the mirror” and ask what more she could do to defend the country. Her answer was to join the Defense Advanced Research Projects Agency, a model for IARPA. There she ran classified and unclassified programs, including one aimed at making helicopter blades less noisy. A few years later, she went to NASA to head its aeronautics program.

    Porter's first task was to soothe concerns that IARPA was part of a power grab by DNI to take over the research portfolios of the Central Intelligence Agency, National Security Agency, and a dozen other entities (Science, 22 June 2007, p. 1693). An effective communicator, Porter erased those fears by providing lawmakers with budget numbers showing that those agencies would retain most of their programs, says one congressional aide. (IARPA's budget is classified but believed to be well in excess of $100 million.) She's expected to retain her job in the Obama Administration.

    Here are some highlights from her recent conversation with Science.

    Less cloak.

    Lisa Porter hopes most of IARPA's research will be unclassified.


    Q: What will IARPA do that's new?

    Each of the [intelligence community's] agencies focuses on its particular mission. They are driven by today's challenges. We are forward-looking: Our time horizon is not months but years. We want to ensure that we have the technological advantage to stay ahead of our adversaries.

    Q: What are these challenges?

    We have divided them into three thrust areas: smart collection, incisive analysis, and safe and secure operations. The problem with the current approach to intelligence gathering is that we are collecting lots and lots and lots of data and then relying on people down the chain to somehow sift through it all and find the nuggets of true value. We need to be smarter up front in figuring out where to collect and what to collect. We also need to develop new sensors and new ways of deploying sensors to collect the data we want. Historically, we have relied on satellites and spy planes to look at big targets [such as airfields] that are fairly fixed in space. Today, we're looking for things that could be much smaller and could be moving around [like small weapons].

    Even after smart collection, we're going to end up with a lot of data in different forms. Let's say you've got an image of something, you've also got sensor measurements that give you temperature and humidity. How do you fuse all of the information together? How do you line up different pieces of information in time so that you understand what's going on? This is really, really hard and requires new tools of analysis. The third area, safe and secure operations, has to do with addressing future vulnerabilities in cyberspace.

    Q: How do you plan to solve these problems?

    Part of IARPA's approach is to engage [people] outside the intelligence community as well as within who are working on problems that are analogous. For example, there's research being supported by NASA on how to combine data from airline pilots, the maintenance crew, and the flight recorder to make predictions about future safety risks. That may be the kind of stuff that we can leverage.

    Q: Will researchers be free to publish their results?

    There will be programs where the program manager says, “I anticipate this will be fully unclassified, and everything will be publishable.” But sometimes you don't know where the research is going to take you. In those cases, we would have a caveat that although we anticipate the work to be openly publishable, there may be some results coming out of the research that will require a review because it may release sensitive intelligence information.

    Q: How will you measure IARPA's success?

    If I start rolling out widgets in 6 months, then you need to fire me because I've abandoned the principle of long-term research. In the near term, I'd say the quality of the program managers would be a good metric. Building a reputation for technical integrity is important.


    Brain Scans of Pain Raise Questions for the Law

    1. Greg Miller

    PALO ALTO, CALIFORNIA—Ready or not, neuroimaging is knocking on the courthouse door. Last summer, Sean Mackey, a neurologist who directs Stanford University's Pain Management Center, was asked by defense lawyers in a workers' compensation case to serve as an expert witness. A man who received chemical burns in a workplace accident was seeking compensation from his employer, claiming that the accident had left him with chronic pain. The evidence his lawyers assembled included functional magnetic resonance imaging (fMRI) scans of his brain that showed heightened activity in the “pain matrix,” a network of brain regions implicated in dozens of studies on the neural basis of pain.

    But do those scans, taken while technicians gently brushed his afflicted arm or asked him to squeeze a rubber ball, prove that the man was experiencing the agony he claimed? Hardly, says Mackey, who uses fMRI in his own research. The worker may well have had a valid case, Mackey says, but the fMRI findings weren't relevant. Although certain brain regions consistently rev up when people experience pain, neuroscientists have yet to demonstrate that the converse is true: that any particular pattern of brain activity necessarily indicates the presence of pain. “I'm of the strong opinion that in 2008, we cannot use fMRI to detect pain, and we should not be using it in a legal setting,” he said here last month.

    This particular case did not go to trial: The two sides reached a settlement, says Mackey, who spoke at a Stanford Law School event that brought together neuroscientists and legal scholars to discuss how the neuroimaging of pain potentially could be used—or abused—in the legal system. The general consensus seemed to be that although the science is still emerging, the possibility of legal applications is very real, as Mackey's experience shows.

    The intersection of neuroscience and the law has generated a buzz recently, both among experts and in the popular media. Much of the attention has focused on using fMRI and other methods for lie detection. But Adam Kolber, a law professor at the University of San Diego in California, argued here that pain detection is more likely to be the first fMRI application to find widespread use in the courtroom, in part because the neuroscience of pain is better understood. Kolber estimates that pain is an issue in about half of all tort cases, which include personal injury cases. Billions of dollars are at stake. Yet people with real pain are sometimes unable to prove it, and malingerers sometimes win cases by faking it.

    Pain in the brain.

    Scholars are wrestling with the legal implications of the neuroimaging of pain.


    Using fMRI as a painometer isn't straightforward, however. For starters, said Katja Wiech, a cognitive neuroscientist at University College London, pain sensitivity varies considerably from one person to the next. It's also influenced by psychological factors such as anxiety (which tends to make pain worse) and attention (focusing on pain makes it worse; distractions take the edge off). Such influences also show up in fMRI scans, Wiech said. Moreover, she and others noted that several studies have found broad overlap in the brain regions activated by real and imagined pain—something that could be exploited by plaintiffs with bogus claims.

    A. Vania Apkarian, a neuroscientist at Northwestern University in Evanston, Illinois, was more optimistic. His group has found that activity in the medial prefrontal cortex and the right insula correlates well with pain intensity and the duration of chronic pain, respectively, in people with chronic back pain. “This is an objective measure of pain in these patients,” Apkarian said. Based on these and other findings, he predicted that fMRI will be courtroom-ready sooner than others had suggested. “Maybe not in 2008, maybe in 2012,” he said. “It's inevitable.”

    Apkarian's data looked promising to several legal experts in attendance. “You scientists care more about causation than we do in the law,” said Stanford law professor Henry “Hank” Greely. “If the correlation is high enough, … we would see that as a useful tool.” Indeed, Greely and others noted, even if fMRI can't provide a perfectly objective measure of pain, it may still be better than the alternatives. “We let people get on the stand … and say all kinds of things that may or may not be true,” said William Fletcher, a judge on the U.S. Court of Appeals for the Ninth Circuit.

    “There's absolutely no doubt that lawyers will become aware of this [neuroimaging evidence] and push for it,” said Stephen Easton, a former trial lawyer and professor at the University of Missouri School of Law. Easton is concerned that fMRI images, like other types of visual evidence, could unduly sway juries. “Pictures can have an aura of objectivity beyond that which is justified,” he said. Indeed, a handful of recent studies have hinted that nonexperts rate articles about human behavior as being more convincing when they're accompanied by irrelevant images of the brain (Science, 13 June 2008, p. 1413).

    That's an important consideration, said law professor David Faigman of the University of California Hastings College of the Law in San Francisco. According to rule 403 of the Federal Rules of Evidence, judges can disallow relevant evidence if they deem it likely to mislead or prejudice the jury. Rule 403 has been invoked to exclude evidence from polygraph tests, Faigman said, on the grounds that the general public sees the tests as a more valid means of lie detection than they really are. He thinks the same logic could apply to neuroimaging evidence as well.

    The verdict is still out, then, on how and to what extent the neuroimaging of pain will enter the legal system. But the opening arguments are already being heard.


    TB Bacteria May Reign Over Cells Intended to Bridle Them

    1. Evelyn Strauss

    Even before scientists identified the agent that causes tuberculosis (TB), they could tell when it had invaded a person's body from the presence of hallmark lesions called granulomas. These nodules wall off the trouble-making microbe Mycobacterium tuberculosis and contain immune cells called macrophages. Conventional wisdom holds that granulomas protect the host, but some work has hinted that they promote bacterial multiplication early in infection. Now, Lalita Ramakrishnan of the University of Washington, Seattle, and graduate student Muse Davis report in the 9 January issue of Cell that mycobacteria harness granuloma formation, recruiting new macrophages to the structures and then manipulating the cells into offering the bacteria new quarters in which to reproduce.

    Although scientists already realized that granulomas help the TB microbe in the sense that they enable it to hide away and break out decades later, “the belief was that the granuloma is part of a good immune response; it benefits the host,” says microbiologist William Bishai of the Johns Hopkins School of Medicine in Baltimore, Maryland. Ramakrishnan “turned the central dogma on its head.” Andrea Cooper, an infectious disease immunologist at the Trudeau Institute in Saranac Lake, New York, says the new study “redirects our thinking” to considering how the TB bacterium interacts with macrophages early in infection. The work, she and others say, might open new therapeutic avenues and yield clues about why 90% to 95% of infected humans remain symptom-free for life.

    The Seattle pair studied M. marinum, a relative of M. tuberculosis, in zebrafish embryos, an experimental organism whose power derives in part from its transparency. This attribute allows researchers to literally see how bacteria and immune cells behave, even immediately after exposure, when few microbes are present. The study's medical relevance isn't fully known, as M. marinum is not M. tuberculosis, and zebrafish are not humans, but “the work raises the question of whether we've been thinking the wrong way” about granulomas, says microbiologist Eric Rubin of Harvard School of Public Health in Boston.

    Davis and Ramakrishnan injected fluorescent M. marinum into a cavity above the brain, a site that lacks macrophages in the absence of bacteria, and then recorded video of what happened. Macrophages rapidly arrived, ingested bacteria, and traveled into brain tissue. Within 4 days, collections of infected macrophages—early granulomas—appeared. To distinguish whether these structures form when infected macrophages recruit fresh, uninfected ones or when collections of infected macrophages amass, the researchers waited for an initial group of macrophages to engulf microbes and then injected a blue compound that marks macrophages into the bloodstream. Because this substance does not cross the blood-brain barrier, any dye found in the brain must have been carried by cells that came from elsewhere. Uninfected animals' brains remained largely colorless, whereas brains of infected animals held numerous blue, infected macrophages within multiple granulomas, indicating that those cells had been recruited by the first responders.

    Hitching a ride.

    Infected macrophages (green) can leave established granulomas (left) and seed new ones at other sites in the body.


    When infected cells in early granulomas die, they retain their contents, the researchers found. The uninfected macrophages that then show up consume chunks of the infected cells, thereby ingesting the bacteria inside. “The key point is that one dying cell is infecting a lot of other cells,” says Cooper.

    As uninfected macrophages home to the granulomas, they display characteristics of cells that are swimming toward chemical attractants, the researchers observed. The team also discovered that M. marinum without a particular region of its genome—RD1—fails to spur key events associated with granuloma formation. RD1 was known to enhance virulence, but no one knew exactly how. The new results suggest that this stretch of mycobacterial DNA somehow triggers granulomas to release a chemical signal that attracts uninfected macrophages. “Here's a clear indication of what RD1 does early in an intact animal,” says Cooper.

    Additional experiments revealed that macrophages can escape from early granulomas and seed new ones. “Some macrophages serve as taxicabs to bring [bacteria] to locations where new granulomas can form,” says Bishai. Further analysis revealed that this process accounts for most, if not all, granulomas that appear elsewhere. This observation contradicts a popular theory, which posits that the bloodstream carries free mycobacteria around the body. Whether the strategy operates in other tuberculosis models is unknown, but preliminary results from monkeys are consistent with the dissemination scheme outlined in the zebrafish work, says JoAnne Flynn, an immunologist at the University of Pittsburgh School of Medicine in Pennsylvania.

    Studying the RD1 system could generate insights about why most people infected with M. tuberculosis don't get sick, suggests Ramakrishnan. Individuals who resist disease may, for example, not respond to RD1's influence. The work might also point toward clinical interventions that quash infection before it takes hold, she says. “If you could intercept the pathway that RD1 uses to lure macrophages, you would have a whole new approach to treating TB.”


    Indian Neutrino Detector Hits Snag on Environmental Concerns

    1. Pallava Bagla
    Waiting game.

    Naba Mondal at INO's proposed site.


    MASINAGUDI, INDIA—The rosewood and teak forest here in southern India's Nilgiri Biosphere Reserve is prime elephant habitat. It's also where Indian particle physicists hope to install a massive detector to stalk a more exotic quarry: neutrinos. But concerns about the well-being of the heaviest land animals have so far blocked plans to tune in to the lightest known fundamental particles.

    The $167 million India-based Neutrino Observatory (INO), slated for completion in 2012, is the country's most expensive science facility ever. The magnetized iron detector would be nestled in a cavern 2 kilometers deep inside a granite mountain in Tamil Nadu state, some 250 kilometers southeast of Bangalore. Neutrinos are produced in stars as well as on Earth, in nuclear reactors and when cosmic rays smash into the upper atmosphere. They have only the slightest mass and are elusive because they interact with other particles only by means of the weak nuclear force. The granite in Nilgiri would absorb most of the cosmic rays that at the surface would swamp any neutrino signal, whereas neutrinos would pass through readily to the detector.

    A 100-strong team of physicists has conducted site surveys and has begun fabricating detector components at Tata Institute of Fundamental Research (TIFR) in Mumbai and at collaborating institutions. The initiative is “unique and important,” says Maury Goodman, a neutrino specialist at Argonne National Laboratory in Illinois. INO, adds Anil Kakodkar, chair of the Atomic Energy Commission, is a “perfect launching pad” for attracting fresh blood into basic sciences in India.

    Before work at Nilgiri can begin, INO must obtain a permit from Tamil Nadu's forestry department. State officials say the physicists have not yet made a convincing case. “INO would be detrimental to the ecological balance of the area,” says a senior forestry official, who cites two chief concerns: damage to fragile habitat as equipment and materials are hauled through the forest, and debris from tunneling choking the watershed. The World Wide Fund for Nature-India also opposes the facility, arguing that Nilgiri “is already under pressure, and INO will lead to permanent detrimental impacts on wildlife.”

    Last month, forestry officials and INO scientists met in Chennai, capital of Tamil Nadu, to seek common ground. Kakodkar and project staff outlined their strategy for minimizing INO's environmental impact. The discussions were “positive,” says INO spokesperson Naba K. Mondal, a particle physicist at TIFR. But as Science went to press, it was unclear whether the state government would issue a permit.

    A few decades ago, India was at the forefront of neutrino research. In 1964, a TIFR team led by B. V. Sreekantan and M. G. K. Menon, using an iron calorimeter in a gold mine shaft, was the first in the world to detect neutrinos created in the atmosphere. The facility was shuttered in 1992 when Kolar Gold Fields closed and the experiment became too costly to maintain. “Many of us in the international community grieved over the termination of that line of work,” says John Learned, a physicist at the University of Hawaii, Manoa.

    Taking the measure.

    INO aims to make precise calculations of neutrino mass.


    India hopes INO will help it secure a leading position in the next generation of neutrino research. For instance, more robust estimates of neutrino mass could shed light on an enduring mystery: why there is more matter than antimatter in the universe. The project entails excavating an underground laboratory and installing a 50,000-ton detector for studying atmospheric neutrinos and antineutrinos. Down the road, the detector could be doubled in size to study neutrinos beamed through the planet from particle accelerators in Europe or Japan. Both experiments aim to yield more precise calculations of neutrino masses. “No large dedicated experiment to study atmospheric neutrinos has ever been built,” Goodman says. “The INO design is certainly a better way to study atmospheric neutrinos than has been done before.”

    According to Mondal, the Nilgiri site is ideal in part because the geology was well-characterized during preparation for a hydroelectric project at the mountain. (The waterworks were built a decade ago, when India's environmental movement was weaker than it is today.) Mondal acknowledges that excavating the tunnel will require hauling huge amounts of materials and rubble through elephant habitat. “No doubt there will be some pain,” says Raman Sukumar, an elephant biologist at the Indian Institute of Science in Bangalore who has worked in the area for 3 decades. But he argues that “INO can also be converted into an opportunity” if the project funds conservation efforts in Nilgiri. INO plans to create just such a fund, Mondal says.

    In that case, Sukumar says, “both neutrinos and elephants can be winners.” Mondal and his colleagues are anxiously waiting to see if Tamil Nadu officials agree.


    On the Origin of Life on Earth

    1. Carl Zimmer*
    1. Carl Zimmer is the author of Microcosm: E. coli and the New Science of Life.

    In the first of a monthly series of essays celebrating the Year of Darwin, Carl Zimmer discusses attempts to unravel how life originated on Earth by recreating the process in the laboratory.


    An Amazon of words flowed from Charles Darwin's pen. His books covered the gamut from barnacles to orchids, from geology to domestication. At the same time, he filled notebooks with his ruminations and scribbled thousands of letters packed with observations and speculations on nature. Yet Darwin dedicated only a few words of his great verbal flood to one of the biggest questions in all of biology: how life began.

    The only words he published in a book appeared near the end of On the Origin of Species: “Probably all the organic beings which have ever lived on this earth have descended from some one primordial form, into which life was first breathed,” Darwin wrote.


    Darwin believed that life likely emerged spontaneously from the chemicals it is made of today, such as carbon, nitrogen, and phosphorus. But he did not publish these musings. The English naturalist had built his argument for evolution, in large part, on the processes he could observe around him. He did not think it would be possible to see life originating now because the life that's already here would prevent it from emerging.

    In 1871, he outlined the problem in a letter to his friend, botanist Joseph Hooker: “But if (and Oh! what a big if!) we could conceive in some warm little pond, with all sorts of ammonia and phosphoric salts, light, heat, electricity, etc., present, that a protein compound was chemically formed ready to undergo still more complex changes, at the present day such matter would be instantly devoured or absorbed, which would not have been the case before living creatures were formed.”

    Scientists today who study the origin of life do not share Darwin's pessimism about our ability to reconstruct those early moments. “Now is a good time to be doing this research, because the prospects for success are greater than they have ever been,” says John Sutherland, a chemist at the University of Manchester in the United Kingdom. He and others are addressing each of the steps involved in the transition to life: where the raw materials came from, how complex organic molecules such as RNA formed, and how the first cells arose. In doing so, they are inching their way toward making life from scratch. “When I was in graduate school, people thought investigating the origin of life was something old scientists did at the end of their career, when they could sit in an armchair and speculate,” says Henderson James Cleaves of the Carnegie Institution for Science in Washington, D.C. “Now making an artificial cell doesn't sound like science fiction any more. It's a reasonable pursuit.”

    Raw ingredients

    Life—or at least life as we know it—appears to have emerged on Earth only once. Just about all organisms use double-stranded DNA to encode genetic information, for example. They copy their genes into RNA and then translate RNA into proteins. The genetic code they use to translate DNA into proteins is identical, whether they are emus or bread mold. The simplest explanation for this shared biology is that all living things inherited it from a common ancestor—namely, DNA-based microbes that lived more than 3.5 billion years ago. That common ancestor was already fairly complex, and many scientists have wondered how it might have evolved from a simpler predecessor. Some now argue that membrane-bound cells with only RNA inside predated both DNA and proteins. Later, RNA-based life may have evolved the ability to assemble amino acids into proteins. It's a small step, biochemically, for DNA to evolve from RNA.

    In modern cells, RNA is remarkably versatile. It can sense the levels of various compounds inside a cell and switch genes on and off to adjust these concentrations, for example. It can also join together amino acids, the building blocks of proteins. Thus, the first cells might have tapped RNA for all the tasks on which life depends.

    For 60 years, researchers have been honing theories about the sources of the amino acids and RNA's building blocks. Over time, they have had to refine their ideas to take into account an ever-clearer understanding of what early Earth was like.

    In an iconic experiment in 1953, Stanley Miller, then at the University of Chicago, ignited a spark that zapped through a chamber filled with ammonia, methane, and other gases. The spark created a goo rich in amino acids, and, based on his results, Miller suggested that lightning on the early Earth could have created many compounds that would later be assembled into living things.

    By the 1990s, however, the accumulated evidence indicated that the early Earth was dominated by carbon dioxide, with a pinch of nitrogen—two gases not found in Miller's flask. When scientists tried to replicate Miller's experiments with carbon dioxide in the mix, their sparks seemed to make almost no amino acids. The raw materials for life would have had to come from elsewhere, they concluded.

    In 2008, however, lightning began to look promising once again. Cleaves and his colleagues suspected that the failed experiments were flawed because the sparks might have produced nitrogen compounds that destroyed any newly formed amino acids. When they added buffering chemicals that could take up these nitrogen compounds, the experiments generated hundreds of times more amino acids than scientists had previously found.

    Cleaves suspects that lightning was only one of several ways in which organic compounds built up on Earth. Meteorites that fall to Earth contain amino acids and organic carbon molecules such as formaldehyde. Hydrothermal vents spew out other compounds that could have been incorporated into the first life forms. Raw materials were not an issue, he says: “The real hurdle is how you put together organic compounds into a living system.”

    Step 1: Make RNA

    An RNA molecule is a chain of linked nucleotides. Each nucleotide in turn consists of three parts: a base (which functions as a “letter” in a gene's recipe), a sugar molecule, and a cluster of phosphorus and oxygen atoms, which link one sugar to the next. For years, researchers have tried in vain to synthesize RNA by producing sugars and bases, joining them together, and then adding phosphates. “It just doesn't work,” says Sutherland.

    This failure has led scientists to consider two other hypotheses about how RNA came to be. Cleaves and others think RNA-based life may have evolved from organisms that used a different genetic material—one no longer found in nature. Chemists have been able to use other compounds to build backbones for nucleotides (Science, 17 November 2000, p. 1306). They're now investigating whether these humanmade genetic molecules, called PNA and TNA, could have emerged on their own on the early Earth more easily than RNA. According to this hypothesis, RNA evolved later and replaced the earlier molecule.

    But it could also be that RNA wasn't put together the way scientists have thought. “If you want to get from Boston to New York, there is an obvious way to go. But if you can't get there that way, there are other ways you could go,” says Sutherland. He and his colleagues have been trying to build RNA from simple organic compounds, such as formaldehyde, that existed on Earth before life began. They find they make better progress toward producing RNA if they combine the components of sugars and the components of bases together instead of separately making complete sugars and bases first.


    Researchers at Harvard are trying to make simple life forms, shown here in a computer image.


    Over the past few years, they have documented almost an entire route from prebiotic molecules to RNA and are preparing to publish even more details of their success. Discovering these new reactions makes Sutherland suspect it wouldn't have been that hard for RNA to emerge directly from an organic soup. “We've got the molecules in our sights,” he says.

    Sutherland can't say for sure where these reactions took place on the early Earth, but he notes that they work well at the temperatures and pH levels found in ponds. If those ponds dried up temporarily, they would concentrate the nucleotides, making conditions for life even more favorable.

    Were these Darwin's warm little ponds? “It might just be that he wasn't too far off,” says Sutherland.

    Step 2: The cell

    If life did start out with RNA alone, that RNA would need to make copies of itself without help from proteins. Online in Science this week, Tracey Lincoln and Gerald Joyce of the Scripps Research Institute in San Diego, California, have shown how that might have been possible. They designed a pair of RNA molecules that join together and assemble loose nucleotides to match their partner. Once the replication is complete, old and new RNA molecules separate and join with new partners to form new RNA. In 30 hours, Lincoln and Joyce found, a population of RNA molecules could grow 100 million times bigger.
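
    As a rough check on that last figure (an illustration assuming simple exponential growth, not a number reported by Lincoln and Joyce), a 100-million-fold increase in 30 hours corresponds to a population that doubles roughly every hour:

    \[
    2^{n} = 10^{8} \;\Rightarrow\; n = \frac{8}{\log_{10} 2} \approx 26.6 \ \text{doublings}, \qquad t_{\text{doubling}} \approx \frac{30\ \text{h}}{26.6} \approx 1.1\ \text{h}.
    \]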

    Lincoln and Joyce kept their RNA molecules in beakers. On the early Earth, however, replicating RNA might have been packed in the first cells. Jack Szostak and his colleagues at Harvard Medical School in Boston have been investigating how fatty acids and other molecules on the early Earth might have trapped RNA, producing the first protocells. “The goal is to have something that can replicate by itself, using just chemistry,” says Szostak.

    After 2 decades, he and his colleagues have come up with RNA molecules that can build copies of other short RNA molecules. They have been able to mix RNA and fatty acids together in such a way that the RNA gets trapped in vesicles. The vesicles are able to add fatty acids to their membranes and grow. In July 2008, Szostak reported that he had figured out how protocells could “eat” and bring in nucleotides to build the RNA.

    All living cells depend on complicated channels to draw nucleotides across their membranes, raising the question of how a primitive protocell membrane brought in these molecules. By experimenting with different recipes for membranes, Szostak and his colleagues have come up with protocells leaky enough to let nucleic acids slip inside, where they could be assembled into RNA, but not so porous that the large RNA could slip out.

    Their experiments also show that these vesicles survive over a 100°C range. At high temperatures, protocells take in nucleotides quickly, and at lower temperatures, Szostak found, they build RNA molecules faster.

    He speculates that regular temperature cycles could have helped simple protocells survive on the early Earth. They could draw in nucleotides when they were warm and then use them to build RNA when the temperature dropped. In Szostak's protocells, nucleotides are arranged along a template of RNA. Strands of RNA tend to stick together at low temperatures. When a protocell warms up again, the heat might cause the two strands to pull apart, allowing the new RNA molecule to function.

    Now Szostak is running experiments to bring his protocells closer to life. He is developing new forms of RNA that may be able to replicate longer molecules faster. For him, the true test of his experiments will be whether his protocells not only grow and reproduce, but evolve.

    “To me, the origin of life and the origin of Darwinian evolution are essentially the same thing,” says Szostak. And if Darwin was alive today, he might well be willing to write a lot more about how life began.


    1. This essay is the first in a monthly series, with more on evolutionary roots on our Origins blog.

    Seeking Africa's First Iron Men

    1. Heather Pringle*
    1. Heather Pringle is a contributing editor at Archaeology magazine.

    Archaeologists are battling over when—and how—ancient African cultures entered the Iron Age.

    Smiths at work.

    Demonstrating old methods, Cameroonian men forge iron with a stone hammer.


    According to myth, Rwanda's ancient line of kings descended from a man with secret knowledge: He could transform chunks of ordinary rock into smooth, gleaming iron. With this new technology, he taught his people to make hard, durable weapons for defeating their enemies and sharp axes for cutting the forest to make fields. By the time the first Europeans arrived in the 19th century, iron had become power in the kingdom of Rwanda. Its kings had taken the blacksmith's hammer and anvil for their royal regalia, and at least one Rwandan ruler was buried with his head resting on two iron anvils.

    Other traditional African societies tell stories of mythical ironworkers who descended from heaven or came from other lands. The prevalence of such legends underlines the importance of ironworking in these cultures, and archaeologists have long wondered if the arrival of iron metallurgy spurred the growth of complex early societies. Did foreigners in fact bring ironworking to Africa, or did Africans invent it themselves?

    First iron?

    Controversial sites from some countries in sub-Saharan Africa suggest that iron smithing was developed there 4000 years ago.

    Entering the Iron Age was not easy. Metalworkers had to smelt ore at precise temperatures and then repeatedly hammer and reheat the spongy metal, known as a bloom, that first emerged from their furnaces. The traditional view is that metallurgists in Anatolia, the Asian part of Turkey, were the first to smelt iron ore deliberately, beginning around 1800 B.C.E. Initially, they reserved the new metal for precious ornaments or ritual objects. But by 1200 B.C.E., workers in the Levant were churning out considerable amounts of iron.

    The metal had a major impact on societies. “Iron was a transformative metal,” says archaeologist Scott MacEachern of Bowdoin College in Brunswick, Maine. Iron ores are much more abundant than copper or the tin needed to make bronze. Bronze was therefore costly and largely limited to use in ritual objects and goods for elites. But once cultures learned to smelt iron, they could put iron tools into the hands of ordinary people for clearing forests and tilling the soils. According to some models, this boosted agricultural yields, increased the numbers of villages, and triggered ever more social complexity.

    A long debate has raged, however, about the origin of ironmaking in Africa. According to traditional thinking, iron metallurgy diffused slowly from one society to the next in the Old World, reaching northern Africa by 750 B.C.E. but not crossing the barrier of the Sahara Desert until 500 B.C.E. or later.

    Now controversial findings from a French team working at the site of Ôboui in the Central African Republic challenge the diffusion model. Artifacts there suggest that sub-Saharan Africans were making iron by at least 2000 B.C.E. and possibly much earlier—well before Middle Easterners, says team member Philippe Fluzin, an archaeometallurgist at the University of Technology of Belfort-Montbéliard in Belfort, France. The team unearthed a blacksmith's forge and copious iron artifacts, including pieces of iron bloom and two needles, as they describe in a recent monograph, Les Ateliers d'Ôboui, published in Paris. “Effectively, the oldest known sites for iron metallurgy are in Africa,” Fluzin says.

    Some researchers are impressed, particularly by a cluster of consistent radiocarbon dates. The new finds should prompt researchers to explore how sub-Saharan metal smiths could have worked out iron production without the benefit of an earlier Copper or Bronze Age, says archaeologist Augustin F. C. Holl of the University of Michigan, Ann Arbor. “We are in a situation where we have to rethink how technology evolves,” he says.

    Others, however, raise serious questions about the new claims. Archaeometallurgist David Killick at the University of Arizona in Tucson says the Ôboui iron artifacts are far too well preserved for the dates given: “There is simply no way that they have been sitting in the ground for 3800 radiocarbon years in acidic soils and a seasonally moist environment like the western Central African Republic.”

    An invisible equation

    The idea that iron metallurgy diffused to Africa from the Middle East, rather than being invented independently, receives its strongest support from the sheer complexity of iron smelting. Working meteoritic iron is fairly straightforward, but to extract iron from hematite and other common ores, early metalworkers had to bring the ore to a precise range of temperatures, so the iron could fuse with carbon released from the burning of charcoal. To pull off this feat, smelters had to master an invisible equation, placing the ore out of sight in a clay furnace fueled with the correct amount of charcoal and fed with just the right amount of air for combustion. According to the diffusion theory, only people who possessed millennia of experience working copper, such as the Anatolians, would have had sufficient knowledge to begin experimenting with iron.

    Digging for iron.

    Nine field seasons at Ôboui revealed an ancient iron workshop and artifacts such as this iron needle (left).


    The archaeological data on early ironworking in northern Africa are frustratingly spotty, but current evidence suggests that Phoenician traders carried the technology to their colony of Carthage in northern Africa around 750 B.C.E. Other travelers brought iron technology to Egypt, which already possessed copper, by 660 B.C.E., if not earlier. The wealthy Nubian kingdom just to the south possessed bronze and began smelting iron between 800 and 500 B.C.E. In the Nubian city of Meroë, workers created an iron industry that one writer dubbed “the Birmingham of Africa,” because as with the British town, iron was the source of Meroë's wealth. The city produced an estimated 5 to 20 tons of the metal annually for hoes, knives, spears, and other everyday goods, as Thilo Rehren, an archaeometallurgist at University College London, wrote in a 2001 article in Mitteilungen der Sudanarchäologischen Gesellschaft. From North Africa, the technology was thought to have crossed the Sahara Desert around 500 B.C.E., spreading to southern lands that lacked both copper and bronze-working traditions.

    Evidence contradictory to this model has cropped up since the 1960s, however. Several French and Belgian archaeologists have pointed to evidence from sites in Niger, Rwanda, and Burundi suggesting that Africans invented ironworking independently as early as 3600 B.C.E. Their analyses were strongly criticized by prominent researchers in the United States, who argued that the early radiocarbon dates likely came from wood older than the iron artifacts. In reviewing the debate in a 2005 paper in the journal History in Africa, independent scholar Stanley Alpern suggested that Francophone researchers had fallen under the influence of African nationalism and pride, which blinded them to problems in their data. The critiques persuaded many researchers outside France that the earliest good evidence for sub-Saharan African ironworking came from carefully excavated sites dated between 800 and 400 B.C.E., such as Walalde in Senegal, where a complex society of pastoralists and craft specialists may have developed a trading system using precious iron bars for exchange.

    Meanwhile, French proponents of the very early dates dismissed such charges as the last gasp of scientists wedded to the diffusion model. A frosty impasse ensued: Many Francophone Africanists stopped attending Anglophone meetings and informally sharing their research results. “There is a taboo there,” fumes Holl. “People just have this conception that iron technology in sub-Saharan Africa has to be later than 500 B.C.E., and when it is earlier than that, they start looking for [alternative] explanations.”

    The deep chill lasted until the fall of 2008, when Anglophone researchers learned that a French team led by archaeologist Étienne Zangato of the University of Paris X had published the Ôboui monograph a year earlier with sensational new evidence for ancient African ironworking. Now the controversy has fired up again.

    Forging ahead.

    Excavator Étienne Zangato claims early dates from Ôboui's forge (left) show that Africans were the first ironsmiths.


    An ancient forge

    Zangato's most important data come from Ôboui, a site where horticulturists and fishers lived for millennia; by 800 B.C.E., people in the region were erecting megaliths and burying important people in impressive tombs.

    Zangato began excavations at the site after a violent storm struck in 1992, sweeping away part of the capping sediments and exposing a layer of metallic objects, potsherds, and stone tools. Zangato and his team spent nine field seasons at the site, opening more than 800 square meters. They recovered 339 stone artifacts and a host of evidence for ironsmithing: a blacksmith's forge, consisting of a clay-lined furnace, stone anvil, and part of a ceramic pot that likely held water for cooling or possibly tempering red-hot iron. They also found charcoal storage pits, 1450 pieces of slag, 181 pieces of iron bloom, and 280 small iron lumps and objects, including two needles.

    Fluzin detected none of the telltale waste produced by the first stage of smelting, implying that Ôboui's smiths imported iron bloom from elsewhere or that excavators have yet to find the site's smelting area. But microscopic examination of thin slices from iron samples collected near the anvil demonstrates that people at Ôboui purified the bloom by repeatedly heating and hammering it. Some lumps contained as much as 85% iron and revealed visible traces of hammering, such as deformations caused by crushing, under the microscope. “It is undeniable that these samples correspond to metalworking, already quite advanced, of fragments of bloom,” concludes Fluzin. His comparative studies of minerals in the ore suggest that the most likely source was an ancient mine located 12 kilometers away.

    To date the site, seven charcoal samples were taken from inside and outside the furnace. They were radiocarbon dated by Jean-François Saliège in the Laboratory of Dynamic Oceanography and Climatology at the University of Paris VI to between 2343 and 1900 B.C.E.—long before the Anatolians were working iron.

    Those dates are early, but they fit well with a newly emerging pattern, says Zangato. Excavations he directed between 1989 and 2000 at the three nearby sites of Balimbé, Bétumé, and Bouboun each uncovered layers containing ironworking debris, he says. Those layers were radiocarbon dated by Zangato, Saliège, and Magloire Mandeng-Yogo of the Institute of Research for Development in Bondy, France, to between 2135 and 1612 B.C.E. at Balimbé and Bouboun, and to sometime between 3490 and 2930 B.C.E. at Bétumé. “There is no longer any reason to cling to the diffusionist theory for iron metallurgy in Africa,” says Zangato. “I believe more and more in local development [of the technology].”

    The evidence is very convincing, says Patrick Pion, a University of Paris X archaeologist who specializes in the European Iron Age. The metallographic analysis is clear proof of ironworking at Ôboui, he says, and “the series of C14 dates obtained are coherent, done by a laboratory and a specialist recognized for C14 dating in Africa. I see no reason intrinsically to question them.” MacEachern agrees that the seven consistently early dates from the forge are persuasive. “This is not the common situation that we've had in the past in African metallurgy, where we've had isolated dates from debatable contexts,” he says.

    Tools of power.

    In the ancient Nubian city of Meroë, iron used for durable tools, like this small tool kit, became the source of great wealth.


    But Killick and others are completely unconvinced by the dates, though they agree that the forge is real. “Although it seems that the seven oldest radiocarbon dates form a coherent group, they are all coming from a few square meters in a very disturbed archaeological site,” says Bernard Clist, an archaeologist at the Institute of Research for Development in Grasse, France. “They are closely bounded by pits and structures well dated to around 2000 B.P. and later.” This means that later ironworkers could have dug into ground laced with charcoal from an earlier occupation or forest fire, giving dates that are far too old. To push back the dates convincingly, say critics, the team needs to publish more detailed stratigraphic data and charcoal studies. They also need several consistent lines of chronological evidence, such as thermoluminescence (TL) dates on clay furnaces, accelerator mass spectrometry (AMS) dates on short-lived plant remains, and indirect dates from sequences of ceramic styles.

    Zangato concedes that the team's case would have benefited from more dating. But he says that the €500,000 he received for the 15-year project from France's Ministry of Foreign Affairs was insufficient to cover expensive TL and AMS dates; he opted for additional radiocarbon dates instead. He and Fluzin dismiss the charge that the iron artifacts are too pristine for their age, saying that a dense, difficult-to-excavate upper layer of sandy clay at Ôboui prevented the diffusion of water and oxygen, and so reduced corrosion. Killick counters that the high soil temperatures at the site should in fact speed corrosion. If Ôboui's iron bloom really dates back to 2000 B.C.E., then its open pores “ought to be completely full of corrosion products, leaving small islands of metal in a sea of corrosion,” says Killick. There is no sign of this in the published photos. Even MacEachern finds the relative lack of corrosion puzzling. “I'd certainly like to know more about the preservation of the iron tools,” he says.

    Neither Zangato nor Fluzin is backing away from the claims, however. Zangato and Holl are working on a paper for the World of Iron conference in London from 16 to 20 February. Expect lively sessions, says conference organizer and archaeometallurgist Xander Veldhuijzen of University College London: “The earliest iron debate is currently in a very interesting phase, as a lot of new evidence is just appearing.”


    A New View on--and Hope for--an Old Disease

    1. Lauren Cahoon*
    1. Lauren Cahoon is a freelance writer based in Ithaca, New York.

    Researchers are debating whether growths called tubers cause the mental problems in many people with tuberous sclerosis. Regardless, an organ-transplant drug may offer a treatment for the rare disease.

    Growth problem.

    In tuberous sclerosis, growths can appear under the skin and freckle a face (left) or disrupt a brain (right, arrows).


    It can start so subtly that most parents don't even notice the telltale signs. A baby's arms and upper body may jerk, or the eyes may dart to the side; the seizure is brief, with muscles momentarily stiffening and then relaxing. It's usually only when their baby gets a bit older that parents notice that the child is having problems with walking or talking. But even then, they will hardly suspect what's behind the symptoms: growths called tubers, sometimes hundreds of them, which have sprouted throughout their child's body.

    In Laura Jensen's case, the mother from California thought her baby boy's gagging spells were caused by teething, until one episode, when he was 9 months old, made it clear something much more serious was going on. “He made a choking noise, his face turned blue, and his mouth started to pull to one side,” she recalls. “I didn't know it was a seizure, but I sure knew it wasn't teething.”

    When Jensen was finally referred to a neurologist, who diagnosed her son with a rare condition called tuberous sclerosis complex (TSC), she had no clue what it was. She soon learned that her son would likely be plagued with crippling seizures for the first few years of his life. In addition, children with TSC can end up severely mentally handicapped, autistic, or nonverbal. The disease is sometimes disfiguring, as tubers may erupt on the face or elsewhere under the skin. It can also kill in several ways, by causing kidney failure or destroying lung function, for example (see sidebar, p. 204). Fortunately, Jensen's son, now 16 years old, is one of the lucky ones; his symptoms have turned out to be comparatively benign.

    The mental disabilities that accompany TSC have traditionally been tied to the tubers that form in the brain. Researchers have assumed that the growths, lodged in areas critical for cognition, disrupt and impair normal brain functioning. Scientists have, therefore, tried to attack the disease by thwarting tuber growth. After connecting TSC to a signaling cascade already implicated in cancers, they have begun clinical trials with rapamycin, an organ-transplant drug approved by the U.S. Food and Drug Administration. At a conference this fall on TSC in Brighton, U.K., several groups presented results from rapamycin trials showing dramatic tuber shrinkage in various organs, including the brain. “There's tremendous excitement about it,” says Elisabeth Henske, an oncologist who also works on TSC at Brigham and Women's Hospital in Boston.

    Yet along with this enthusiasm comes a new debate about what exactly is causing the mental problems of some people with TSC. Petrus J. de Vries, a pediatric psychologist at the University of Cambridge in the United Kingdom, has recently tried to convince his colleagues that their focus on tubers as the source of the condition's cognitive impairment is too narrow. De Vries has studied young tuberous sclerosis patients for decades, and he believes that the growths in the brain are not the sole culprits. If his controversial theory is right, tuberous sclerosis researchers will have to significantly rethink what they know about this strange disease. “I like stirring things up a little bit,” says De Vries. “I think it's important to make people question their assumptions.”

    Decoding the disease

    TSC was first documented in 1880 by Désiré-Magloire Bourneville, a French neurologist who treated a 15-year-old boy with severe retardation and epilepsy. After the patient died, Bourneville found clusters of tuberlike growths throughout the boy's brain. These noncancerous growths, the hallmark of the disease, can show up not only in the brain but also in the heart, eyes, skin, kidneys, and lungs.

    The disease is rare, striking just 1 in 6000 births, so researchers typically have difficulty finding large populations of patients to study. However, a few decades ago, scientists identified several large families in which TSC appeared repeatedly, indicating it was a genetic disease. Researchers began looking at the genes of these families, and in 1993, a European consortium of TSC investigators linked mutations in a gene on chromosome 16 to the disease in some of the families. Four years later, a group led by Marjon van Slegtenhorst at Erasmus University in Rotterdam, the Netherlands, located a gene on chromosome 9 that when mutated also leads to the disease (Science, 8 August 1997, p. 805).

    That gene encodes a protein called hamartin, whereas the chromosome 16 gene encodes one called tuberin. Both proteins are located inside cells throughout all tissues of the body. Scientists found that either the tuberin or hamartin gene was not functioning in the tuber growths of people with TSC, suggesting that the proteins were somehow involved in suppressing cell growth.

    Mutant fruit flies helped confirm this. Scientists genetically engineered strains of the insect lacking either tuberin, hamartin, or both proteins. All three kinds of mutants developed the same abnormalities—grossly large eyes, organs, and wings—indicating that their cell growth and proliferation had gone unchecked.

    In 1998, the Van Slegtenhorst group had shown that hamartin and tuberin bind together and function as a unit inside cells. With more fruit fly and mammalian studies, a number of research teams then independently determined that this molecular complex crucial to TSC participated in a signaling cascade known as the mTOR pathway. This pathway is the main molecular switchboard for cellular growth: The protein mTOR, short for mammalian target of rapamycin, directly activates a protein called S6K, thus driving cell growth. Depending on the energy state of the cell, upstream signals such as insulin and the kinases PI3K and AKT prompt mTOR to increase or decrease cell growth.

    Researchers have learned that tuberin and hamartin act as the primary upstream brake on mTOR's drive for cell growth. The pair does this by inhibiting the protein Rheb, the direct activator of mTOR and cell proliferation. Consequently, if the tuberin-hamartin complex malfunctions, say as a result of a mutation in one of the genes for the proteins, mTOR's brake is released. Unbridled, mTOR is thought to cause not only the tubers in TSC but also tumor formation in many types of cancer. The realization that the tuberous sclerosis proteins were connected to mTOR “dumped us right in the center of a topical and hot regulatory pathway,” says Julian Sampson, a medical geneticist and TSC researcher at Cardiff University in the United Kingdom. “So then another heap of people got interested in these genes and proteins.”

    De Vries agrees that the study of TSC took off once it was connected to the mTOR pathway. “That's when things started to explode,” he says.

    Brain debates

    With the disrupted molecular pathway in TSC finally delineated, researchers are now trying to resolve how that leads to the disease's most severe manifestations, the problems in the brain. “The most devastating things about tuberous sclerosis are the cognitive deficits and the intractable epilepsy,” says Vicky Holets Whittemore, vice president of the Tuberous Sclerosis Alliance. Roughly 70% of TSC patients have epilepsy, 25% develop autism, and 25% have severe mental disabilities. Although 40% to 50% of people with TSC will score normally on IQ tests, even they seem to have very specific problems with certain key memory and attention-related tasks.

    Perhaps one of the most difficult aspects of the disease is the uncertain prognosis once a child is diagnosed. “You can't predict how severely affected a child is going to be after they're born,” says John Yates, a retired medical geneticist at the University of Cambridge. “You can't tell if a child [will have] huge mental problems or a few blemishes on the skin.”

    Studies have shown that the frequency of seizures can affect a child's ultimate cognitive ability; thus, antiepileptic medication is often the first line of defense when a child receives a TSC diagnosis. Still, although several medications are effective at stopping seizures, some people develop severe mental disabilities anyway. That's one reason many scientists have blamed the tubers in the brain, not the seizures they cause, for a person's mental problems. Indeed, several case studies show that the number and size of tubers correlate with more severe cognitive deficits.

    Yet De Vries, and a growing number of other researchers, contend that neither seizures nor tubers are responsible. “People thought that more tubers equals more brain problems,” says De Vries. “But the more data we collected, the less clear it became to me. We've had adult patients with Ph.D.s, who have 20 to 30 tubers [in their brains], and people with severe [mental] problems with one or two tubers—so there were lots of exceptions to the rule.”

    If the tubers aren't causing mental dysfunction, what is? De Vries suggests that mutations in the genes for tuberin and hamartin directly sabotage neurons by disrupting the mTOR pathway. He notes that the mTOR pathway helps control formation of the cytoskeleton within neurons and of the synapses that connect them, as well as the brain's production of myelin, which insulates nerve fibers. Defects in any or all of these functions, he suggests, could explain TSC's severe cognition problems. Additionally, the mTOR pathway is necessary for long-term potentiation, a strengthening of synaptic connections thought to underlie memory formation; that may explain the memory problems many people with TSC experience. “All you need is a disruption of [this] pathway,” says De Vries.

    De Vries's theory is relatively new—he published the idea in the summer of 2007—but it's gaining influence. Several studies on mouse and rat models of TSC have shown that even the animals without seizures or tubers in the brain have cognitive problems. “I think the tubers sort of distracted everybody for a long time,” says Whittemore. “Everyone's beginning to understand that there's an underlying molecular issue.”

    Out of control.

    When the TSC1-TSC2 complex fails to suppress the mTOR pathway, cell growth goes unchecked.


    Still, not everyone is convinced. After De Vries's presentation in Brighton, scientists in the audience expressed their doubts. In general, Yates says, most people with extreme cognitive deficits have a history of severe seizures, several tubers in the brain, or both. Yates also argued that if disruption of the mTOR pathway in neurons were the real culprit, then family members with TSC would have very similar manifestations of the disease, rather than the wildly different presentations that can occur between parents and children or between siblings.


    Wonder drug?

    While researchers in Brighton debated the roots of TSC's mental dysfunction, they also discussed the merits of the disease's most promising treatment: rapamycin. Like the molecular complex naturally formed by the tuberin and hamartin proteins, this drug blocks mTOR activity and thus prevents cell proliferation. Currently, it's used as an immunosuppressant for organ-transplant patients—the drug prevents the proliferation of immune cells that target a donor organ—and is being tested in clinical trials as a treatment for renal cancers. Scientists and doctors have theorized that the drug's restraint of mTOR could compensate for TSC patients' defective genes, stopping the abnormal cell growth behind tubers. Bolstering this hypothesis, several research groups at the Brighton meeting presented initial results from phase II trials showing that, on average, rapamycin significantly shrinks tubers in both the brain and the kidney.

    Perhaps even more tantalizing is the possibility that rapamycin could address TSC's epilepsy and cognitive problems. If De Vries is right, and disruption of the mTOR pathway in neurons indeed causes the mental problems in TSC, rapamycin should be able to treat the condition's actual molecular dysfunction.

    Researchers have recently proved the concept in animal models. Michael Wong, a neurologist at Washington University School of Medicine in St. Louis, has found that in a mouse model of TSC that includes epilepsy, daily doses of rapamycin prevent the seizures. A research group at the University of California, Los Angeles, led by neuroscientist Dan Ehninger, also looked at rapamycin's effect on a strain of mice with mutations in the tuberin gene that develop learning and memory problems but don't have tuber growths or seizures. Ehninger's team found that rapamycin completely rescued adults of this strain from their cognitive problems, making them perform just like wild-type mice. These studies have some TSC scientists more hopeful than they've ever been. “I think that's truly one of the more exciting things to evolve over the years I've been involved in TSC research,” says Henske. “It's paradigm-shifting—precedent-setting.”

    In people with TSC, however, evidence for rapamycin's effect on cognition and seizures remains largely anecdotal; doctors have noted that the drug seems to improve mental functioning in some patients. In the United Kingdom, an ongoing phase II trial of rapamycin's effects on kidney tubers and lung function is also measuring how the drug affects participants' memory. At the conference, De Vries announced that eight of 13 people taking the drug have so far shown a memory improvement, but two have seen their memory worsen.

    Even in terms of restraining tubers, rapamycin is far from a silver bullet. In the phase II trials, the moment a patient went off the drug, his or her tubers began to grow back, researchers reported at the meeting. Wong had a similar outcome with his experiments in epileptic TSC mice. “They were normal as long as they were on the rapamycin,” he says. “If you take them off of it—the symptoms happen again.”

    That distresses Sampson. Chronic use of rapamycin is not advisable, he notes, because of its immunosuppressive properties. “It's controlling [tubers], not curing them,” he says. “If you're thinking of a therapy that's going to be long term, … this isn't a cure for tuberous sclerosis.”

    Still, many TSC scientists are pleased at the progress and hopeful that rapamycin can be fine-tuned into a safe and effective treatment. At the conference in Brighton, De Vries noted how far the understanding of this odd and rare disease has come. “It is very exciting to see how we have moved on in the field. My first TSC conference was in 1998, and the second gene had just been cloned—we had no idea about all the events, including drug trials, awaiting us 10 years later,” he says. “It will be very interesting to see where the field will be at in 2018.”


    A Discriminating Killer

    1. Lauren Cahoon*
    1. Lauren Cahoon is a freelance writer based in Ithaca, New York.

    One of tuberous sclerosis complex's (TSC's) most mysterious manifestations is lymphangioleiomyomatosis (LAM), a progressive lung disease that is decidedly sexist. The disease affects only women, usually in their childbearing years, and typically proves fatal within a decade or two of its diagnosis.

    Out of breath.

    A CT scan can reveal how lymphangioleiomyomatosis obstructs the lungs.


    Surprisingly, the tuberous growths that are a hallmark of TSC aren't the problem. Instead, LAM is caused by smooth muscle cells that invade the lungs. Researchers don't know where these invaders come from, nor what makes the cells spread to the lungs, where their accumulation slowly impairs breathing. “We don't know why the [cells] metastasize; … they look benign under the microscope,” says Elisabeth Henske, an oncologist at Brigham and Women's Hospital in Boston who specializes in LAM. She notes that the disease can come back even after a patient has had a lung transplant, confirming that the source of the smooth muscle cells is somewhere other than the lungs.

    The disease's gender preference also remains inexplicable. “We're trying to address why this happens only in women,” says Henske. “We're looking to see whether there is something in estrogen that would be involved in a metastatic process.” Despite these mysteries, a new treatment could be on the horizon: A small study done at the Cincinnati Children's Hospital Medical Center in January 2008 found that the organ-transplant drug rapamycin improved lung function in some LAM patients. Now, a larger, phase II trial on 120 LAM patients is under way as scientists seek to confirm that the drug can truly keep the disease at bay.
