# News this Week

Science, 5 October 2012: Vol. 338, Issue 6103, p. 22
# Around the World

1. L'Aquila, Italy: Prison Terms Sought for Italian Earthquake Experts
2. Billions in Spanish Research Budget Unspent
3. Democratic Republic of the Congo: Ebola-Like Virus Found
4. Wako, Japan: Element 113 Clinched?

## L'Aquila, Italy

### Prison Terms Sought for Italian Earthquake Experts

Seven experts tapped for advice before a deadly earthquake struck L'Aquila, Italy, in April 2009 should each serve 4 years in jail, prosecutors in their manslaughter trial in the central Italian city argued on 25 September.

The four scientists, two engineers, and a public official are accused of conducting a superficial risk analysis that gave townsfolk a false sense of security before the quake. The seven—members of Italy's National Commission for the Forecast and Prevention of Major Risks—had met on 31 March 2009 to assess the danger posed by seismic tremors that had been shaking L'Aquila and the surrounding area for 3 months.

A draft version of the commission's meeting minutes contains controversial statements absent in the official minutes, including references to the discredited idea that small tremors reduce the chances of a bigger quake because they discharge energy. Victims' relatives said that misconception persuaded loved ones to stay indoors before the quake hit.

Judge Marco Billi is due to reach a verdict by 23 October. http://scim.ag/EQtrial2012

## Spain

### Billions in Spanish Research Budget Unspent

Research and development in Spain will suffer another 7.2% cut if Parliament approves the government's 2013 budget, presented on 29 September. The budget has shrunk 4 years in a row, by 38.7% overall since 2009. But an analysis by the Confederation of Spanish Scientific Societies (COSCE), released on 27 September, shows that the real picture is even bleaker than official numbers suggest, because billions in research money simply remain unspent.

Included in the science budget is a pot of money aimed at supporting companies, universities, and public research institutions with loans. In practice, the loans—which made up 58.8% of the 2012 budget—aren't much in demand, and most of the money flows back into state coffers. In 2011, at least €3 billion remained unused; in 2010, €2.36 billion.

COSCE called on the government to look more closely at which funding programs are needed and to help ensure that companies make use of the loans. Future science budgets should be “in line with the reality of the country,” says COSCE President Carlos Andradas Heranz, a mathematician at the Complutense University of Madrid. http://scim.ag/Spainbudget

## Democratic Republic of the Congo

### Ebola-Like Virus Found

A newly discovered virus from the family that causes rabies may be responsible for three linked cases of hemorrhagic fever in the Democratic Republic of the Congo in 2009. An international team of researchers reported on 27 September in PLoS Pathogens the first evidence that a virus from the rhabdovirus family, which typically causes brain swelling or flulike disease, is a hemorrhagic fever agent like Ebola virus and Marburg virus—among the most virulent pathogens known to infect humans.

Using sophisticated RNA sequencing technology, the researchers discovered a link between a rhabdovirus they called Bas-Congo and a 2009 episode of acute hemorrhagic fever in three people from the village of Mangala. The scientists did not actually isolate Bas-Congo virus, but instead plucked out RNA sequences from a blood sample taken from the outbreak's sole survivor and reconstructed its genome.

“It looks fairly solid,” says Thomas Ksiazek, a hemorrhagic disease specialist at the University of Texas Medical Branch in Galveston. Ksiazek cautions, however, that until researchers have the actual virus in hand, it's difficult to know what caused the disease.

The research team is now intensifying its hunt for the virus itself in humans and other species. http://scim.ag/BasCongo

## Wako, Japan

### Element 113 Clinched?

Researchers in Russia and Japan have sparred for nearly a decade over who first observed an atom of superheavy element 113. Now, the Japanese team from the RIKEN Nishina Center for Accelerator-Based Science in Wako says a new atom of 113—its third—should cement its claim.

Researchers make superheavy elements by firing nuclei at foil made of a heavy element, hoping that a projectile fuses with a target nucleus. The resulting compound nucleus lasts a fraction of a second and is identified by the chain of unstable isotopes it decays into. Last year, an international adjudicating panel concluded that none of the isotopes the groups cited as evidence had been studied before, so their identity—and credit for the discovery—remained unclear.

The new RIKEN chain, however, took a different decay route leading to isotopes that had been studied. The result makes a “very strong case” for discovery, says Christoph Düllmann of the GSI nuclear research lab in Darmstadt, Germany. Will the adjudicators agree? Stay tuned. http://scim.ag/element113

# Random Sample

## Most Papers Retracted for Misconduct

Geography helps tell the story of why biomedical papers are retracted—or so suggests a new study published online on 1 October in the Proceedings of the National Academy of Sciences (PNAS). Microbiologists Ferric Fang and Arturo Casadevall, along with medical writer R. Grant Steen, examined the reasons for retractions from multiple angles, and they found that the explanations partly track with the country where the research was done. Most papers retracted for fraud or suspected fraud involved work conducted in the United States. But when plagiarism and duplicate publication are the reason for retractions, locales such as China take a bigger slice of the pie.

More broadly, the PNAS study finds that misconduct (which includes fraud, plagiarism, and duplicate publication) accounts for about two-thirds of all retractions, to the authors' surprise. “I thought it was going to be error” that explained why most papers were pulled, says Casadevall of Albert Einstein College of Medicine in the Bronx, New York. Science has the dubious distinction of coming first in a long list of journals with the most retracted articles in the past 40 years, with 70 retractions, edging out PNAS by one. http://scim.ag/retractstudy

## Alfred Russel Wallace Goes Online

The eminent 19th-century British naturalist Alfred Russel Wallace now has his own Web site. Wallace Online (http://wallace-online.org/) provides free searchable access to all 28,000 pages of his writings and other historical documents and to 22,000 images. Wallace and Charles Darwin were contemporaries who independently formulated the theory of evolution by natural selection. Aside from Darwin, “Wallace is the only other person in history who independently figured out the secret of life,” says John van Wyhe, a science historian at the National University of Singapore who put the treasure trove of information together. Van Wyhe also created the popular Darwin Online Web site (http://darwin-online.org.uk/). The new site, timed to mark the hundredth anniversary of the naturalist's 1913 death, includes the full text of Wallace's magnum opus, Darwinism, which van Wyhe calls “next to The Origin of Species, the single greatest book ever written on evolution.”

$1.2 trillion — Estimated annual worldwide cost of climate change, equivalent to about 1.6% of global GDP, according to a report released on 26 September by DARA and the Climate Vulnerable Forum.

50% — Average decline in coral cover on the Great Barrier Reef from 1985 to 2012, according to a study published on 1 October in the Proceedings of the National Academy of Sciences.

## ScienceLive

Join us on Thursday, 11 October, at 3 p.m. EDT for a live chat on a hot topic in science. http://scim.ag/science-live

# Newsmakers

## Early Leader of Environmental Movement Dies

Barry Commoner, a cell biologist who became an influential proponent of the 1963 Nuclear Test Ban Treaty, a leader of the burgeoning environmental movement, and a candidate for president, died on 30 September in New York at the age of 95.

Known as an often provocative scholar, Commoner prompted a flurry of letters to Science in 1961 with an essay, “In Defense of Biology,” that decried environmental threats and bemoaned “a widening gap between the more traditional areas of biology and those which are closely related to modern chemistry and physics” (Science, 2 June 1961, p. 1745). In 1970, Time magazine put Commoner on its cover, hailing him as a luminary in the “emerging science of survival.” But when asked in 1976 what he would do if appointed White House science adviser, Commoner told Science he'd resign: “I don't believe in science advice. … Pressure from an informed public is far better than an advisory system” (Science, 6 August 1976, p. 464).

## Polar Bear Biologist Cleared of Misconduct Charges

Charles Monnett, a prominent U.S. government polar bear researcher who had been suspended amid allegations of research misconduct (Science, 5 August 2011, p. 681), has returned to work after being cleared of the most serious charges. But he is being reprimanded by his employer, the U.S. Bureau of Ocean Energy Management, Regulation and Enforcement, for e-mailing internal agency documents to several political officials and academics in Alaska.

The decision ends a controversy that began in March 2010 when an anonymous government employee complained that Monnett had released information and manipulated data on polar bear mortality to thwart plans by the Shell energy company to drill exploratory wells in the Arctic. Monnett's allies cheered the decision. “We are pleased this misguided witch hunt is finally stumbling to a conclusion,” said Jeff Ruch, executive director of Public Employees for Environmental Responsibility in Washington, D.C., which helped defend Monnett against the charges.

## Gran Sasso Gets New Director

Physicist Stefano Ragazzi, of the University of Milano-Bicocca in Italy, has been appointed the next director of the Gran Sasso National Laboratory in L'Aquila, Italy, run by the Italian National Institute of Nuclear Physics. He replaces physicist Lucia Votano, who has held the post since 2009, on 15 October.

Ragazzi, 57, began his scientific career studying neutrinos and later moved into high-energy physics. Starting in 1992, he coordinated several high-energy research groups at CERN, the European particle physics lab in Geneva, Switzerland. He also helped design calorimeters—devices that measure the energy of particles—that CERN's CMS experiment used in the hunt for the Higgs boson. “As a new director, the major challenge will be keeping the level of research as high as it has been so far, especially in the field of dark matter and the study of neutrinos' properties,” Ragazzi says.

# Mysteries of the Brain

By Leslie Roberts and John Travis

Next week, tens of thousands of researchers will make their way to New Orleans, Louisiana, for the annual meeting of the Society for Neuroscience, a testament to both the vitality of this discipline and how little we know about the body's most complex organ.
To identify and explore some of the brain's enduring mysteries, Senior Editor Peter Stern and the news staff at Science have consulted with neuroscientists from our Board of Reviewing Editors and elsewhere. The brain mysteries we chose for a closer examination here encompass medical science, evolutionary biology, cognitive science, and more—and leave many provocative questions to another time (see p. 39).

## How Are Memories Retrieved?

By Greg Miller

New work suggests that memory is far more fluid than neuroscientists thought, and that memory retrieval plays a crucial role in shaping memory over time.

So much of memory is a puzzle. How can the experiences of a lifetime—the sights and sounds, people and places, successes and failures—be recorded in the soft tissue of the brain? How can those memories persist for decades even as the neurons that encode them undergo constant molecular remodeling? And how can we (more often than not) recall a particular bit of information almost instantaneously, and with little prompting?

This last question may be the most mysterious of all. “Retrieval is such a rich phenomenon,” says Michael Hasselmo, a neuroscientist at Boston University. “You get a reminder from somebody that's maybe just a word and you somehow turn it into a rich internal movie of events that you're moving through with a perspective and a location and a sense of time passing.”

Our memories are part of what makes each of us unique (see p. 35), and they give us a sense of self-identity and continuity as we move through life. “Without our memories, we're just zombies,” says György Buzsáki, a neuroscientist at New York University in New York City.

The neuroscience of memory is a complex and contentious area, but most researchers agree on a broad-brush account that goes something like this, at least for episodic memories, or memories of events.
These memories are initially encoded and stored mostly in the hippocampus, deep inside the temporal lobe of the brain. For long-term storage, memories are filed away to other areas, including the neocortex, the thin sheet of tissue on the surface of the brain. A memory of any given event, the thinking goes, is represented by a sparse and scattered network of neurons, such that the sights, sounds, and emotions associated with the experience may each reside in a different location. To recall that memory, the brain must somehow reactivate just the right subset of neurons.

Many details of this process are not known (or are disputed). Even so, some researchers say it's time to revise some aspects of the standard view—such as the notion that the hippocampus is not involved in retrieving older episodic memories, and that memories become fixed and unchangeable once transferred to the neocortex. Newer work suggests a far more fluid role of memory, and one in which retrieval plays a crucial role in shaping memory over time.

So what should researchers look for if they hope to learn how the brain recalls the past? One clue comes from functional magnetic resonance imaging (fMRI) studies of the human brain suggesting that remembering reactivates some of the same neural circuitry as the original experience. Recalling a face, for example, activates a part of the fusiform gyrus thought to specialize in face recognition. Recalling a place evokes a different pattern of brain activity that includes the parahippocampal gyrus, an area that lights up when people view images of landscapes and other scenes. “We have a pretty good idea that the brain uses the same machinery for remembering that it does for experiencing things,” says Loren Frank, a neuroscientist at the University of California, San Francisco. When it comes to episodic memories, Frank says, what's stored in the brain are little snippets of the experience that can be compiled into a kind of highlight reel.
The neural signature of memory retrieval, Frank argues, should look much like the neural signature of the actual experience played in fast-forward. There's disagreement about how fast the replay should be, but several labs, including Frank's, have found something like this in the brains of rats.

One example is a 2008 study from Buzsáki and colleagues, in which the researchers trained rats to alternate taking left and right turns at a particular point in a maze. They found, as others had before, that each path evoked a specific sequence of firing by so-called place cells in the rodents' hippocampus. In a twist, the researchers then gave the rats a break between turns in the maze. They found that during this time, their place cells fired in sequences that predicted which direction they would turn on their next run in the maze. One sequence played when the rat had a left turn coming up, and a different sequence played when a right turn was next (Science, 5 September 2008, p. 1322).

Do these hippocampal firing sequences represent the rat reminding itself what it needs to do next? Buzsáki thinks so. “Plans are based on memories,” he says. Buzsáki speculates that such sequences also play a broader role in recalling episodic memories. “The hippocampus is like a librarian,” he says: Its job is to record new experiences, help file them away to the neocortex, and later retrieve them on demand. In the case of episodic memories, the firing sequence of hippocampal cells might serve the same purpose as the identifying bar code on the spine of a book, indicating which subset of neocortical neurons represents a given memory.

Recent studies support the concept that memory retrieval involves reactivating small but specific sets of neurons. In one, published online on 22 March in Nature, researchers used genetic engineering methods to tag neurons in the hippocampus that were activated as a mouse learned to associate a flash of light with an impending shock.
When the researchers then reactivated those same neurons with a pulse of laser light delivered by an optical fiber, the mice froze in fearful anticipation even though they hadn't seen a flash of light.

Bridging the gap between such rodent studies and the human brain isn't easy. In rare cases, neuroscientists have taken advantage of monitoring electrodes placed in or on the brains of epilepsy patients awaiting surgery. Such studies are done only when they don't interfere with medical care. In a 2008 study, researchers asked patients to watch several short video clips from TV shows and movies and then recall as many of them as possible a short time later. Individual neurons in the hippocampus seemed to develop a preference for a specific clip, firing strongly a second or so before a patient named a clip, say, from The Simpsons, but not when he or she recalled any of the other clips (Science, 3 October 2008, p. 96). The findings provide some of the best evidence that reactivation of specific hippocampal neurons is involved in the conscious experience of memory retrieval.

Memory researchers have also begun to use methods borrowed from machine learning to analyze patterns of brain activity from fMRI scans on a finer scale than conventional methods allow (Science, 13 June 2008, p. 1412). “We can actually now identify individual memory traces by the pattern of activity in areas of the brain such as the hippocampus,” says Eleanor Maguire, a cognitive neuroscientist at University College London. “We can predict in an experiment the memory that someone is recalling.” (But only when the choices are limited: The technology is nowhere close to being able to read out any fleeting memory that crosses someone's mind.)

Maguire says that work by her group and others is beginning to challenge the dogma that the hippocampus is not involved in recalling older episodic memories.
The overarching role of the hippocampus, she hypothesizes, is to pull together the various aspects of a memory residing in different regions of neocortex and bind them into a coherent scene in the mind's eye. Recent studies suggest that the hippocampus even does this when people ponder imaginary scenarios. By stitching together pieces of the past, Maguire says, the hippocampus enables us not only to vividly remember but also to envision possible futures.

Another recent challenge to traditional views of memory retrieval comes from research suggesting that memories can be incrementally strengthened, weakened, or otherwise altered each time they're recalled. “That work has changed the way people think about the persistence of memory,” says Yadin Dudai of the Weizmann Institute of Science in Rehovot, Israel. Such findings point to a more malleable memory system in which retrieval presents an opportunity to update old memories in light of new experience. Our repository of memories may be less like a library and more like Wikipedia, where each entry is open to editing anytime it's pulled up.

This type of plasticity may be crucial for fitting new memories into the existing network of old memories, says Howard Eichenbaum, a neuroscientist at Boston University. “Everything you learn has to fit in with what you already know,” Eichenbaum says. How the brain accomplishes that never-ending task is a puzzle scientists have only begun to explore.

## Why Is Mental Illness So Hard to Treat?

By Greg Miller

The human brain is complex and difficult to study, which has impeded development of drug treatments for mental illnesses. But new tools and new ways of thinking could help the field gain new traction.

“The last quarter century has seen many forward strides in the management of patients with mental disease.
During this period modes of therapy have been available that are far superior to the old-time method of whirling patients on wheels or ducking them into cold water.” So begins a 1954 article in the Journal of the American Medical Association on the calming effects of reserpine, a compound derived from an Indian plant, on psychotic patients in a California mental hospital. Its effects were so remarkable, the authors wrote, that if their findings held up, “reserpine will be the most important therapeutic development in the history of psychiatry.”

Serious side effects ultimately prevented reserpine from being the game changer its early champions had envisioned, but other drugs discovered around the same time—including lithium for bipolar disorder, monoamine oxidase inhibitors for depression, and chlorpromazine for schizophrenia—gave clinicians their first real weapons against mental illness.

Flash-forward to the 21st century. Current psychiatric drugs are not much more effective than those initial medicines. Pharmaceutical companies have few promising candidates in the pipeline and show signs of giving up (Science, 30 July 2010, p. 502). Meanwhile, mental illness remains a major cause of disability throughout the world.

Why has it been so hard to develop new treatments? “We just don't know enough,” says Thomas Insel, director of the U.S. National Institute of Mental Health (NIMH) in Bethesda, Maryland. “Research and development in this area has been almost entirely dependent on the serendipitous discoveries of medications. From the get-go, none of it was ever based on an understanding of the pathophysiology of any of the illnesses involved.”

It's little wonder that uncovering the roots of psychiatric illness has not been easy. Not only is the brain the most complex organ in the body, but it's also harder to study. Doctors and researchers can't biopsy a patient's brain as they would a diseased kidney or swollen prostate.
Genetic studies of psychiatric disorders may yet uncover solid leads for drug developers, but so far they've mostly uncovered bewildering complexity. There's also the question of how well disorders of the human mind can be studied in other species. Many of the animal models of mental illnesses in use today were developed decades ago to screen for compounds with effects similar to those of the early antidepressant and antipsychotic drugs, says Steven Hyman, who directs the Stanley Center for Psychiatric Research at the Broad Institute in Cambridge, Massachusetts, and preceded Insel as NIMH director.

In the “forced swim test,” for example, researchers put a rat or mouse in a tub of water and clock how long it takes for the animal to stop struggling and just float, as if it's given up. “This is called behavioral despair,” Hyman says. Of course, nobody knows if the rat experiences anything like what a person in the grips of depression experiences, but rats given imipramine, one of the early antidepressants, struggle longer. The forced swim test has in fact identified other drugs that turn out to have antidepressant effects in humans, Hyman says, but by relying on a test designed to find imipramine-like effects, researchers may have missed drugs that work by means of other, potentially more effective mechanisms.

Hyman, Insel, and others argue that too much effort has been spent searching in the narrow beam of light cast by the early psychiatric drugs, looking for similar compounds or tweaking their chemistry to eke out improvements in efficacy, reduce their side effects, and—not insignificantly—preserve the revenue stream generated by patent-protected drugs. The field of psychiatric medicine, it seems, got lucky early on, and then it got in a rut. However, several new tools and new ways of thinking could help the field gain new traction.
One example, Insel says, comes from drugs that target receptors for the neurotransmitter glutamate, which recent evidence suggests can reduce hopelessness and suicidal ideation in people with depression far faster than current drugs do. “That story, while it's still developing, is extraordinary because it tells us we need to rethink our expectations,” Insel says. “It may be possible to treat this in hours instead of weeks.”

Another encouraging inroad into depression comes from deep brain stimulation (DBS), in which surgeons implant electrodes in brain regions thought to be involved in regulating emotion and cognition. The approach is still experimental, and only severely depressed patients who've failed to respond to less invasive treatments are eligible, but DBS seems to help about three-quarters of them, says Helen Mayberg, a neurologist at Emory University in Atlanta and a pioneer of this therapy.

The success of DBS, Mayberg and others suggest, undermines the decades-old concept of mental illness as primarily a chemical imbalance—too much or too little serotonin floating around the brain, for example—and points instead to faulty neural circuits as the core problem. In their quest for the next generation of treatments, researchers should focus less on single molecules and individual brain regions, Mayberg says, and think of the brain as “a dynamic system that has to be properly choreographed.”

One powerful new tool for examining neural circuits at the cellular level is optogenetics, which combines laser optics and genetic engineering to stimulate or inhibit specific classes of neurons in rodents and monkeys. Researchers have employed optogenetic methods in mice to investigate the mechanisms of DBS therapy for Parkinson's disease (Science, 20 March 2009, p. 1554) and to study the neural circuits involved in addiction, anxiety, and other conditions.
On a larger scale, human brain imaging research is illustrating that psychiatric conditions are multifaceted, with different symptoms that can be traced to different networks of brain regions, says Cameron Carter, a cognitive neuroscientist at the University of California (UC), Davis. The delusions and hallucinations of schizophrenia, for example, may involve malfunctions in one network, while disordered thinking and other cognitive problems involve another.

This brain-based view is at odds with the traditional approach of diagnosing disorders according to the behavioral problems and inner anguish they cause—the approach taken by psychiatry's go-to diagnostic guide, the Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the American Psychiatric Association. “There's no doubt that these categories [in the DSM] don't map very well onto nature,” Carter says. An effort to create a new diagnostic scheme rooted in biology, spearheaded by NIMH, is already under way (Science, 19 March 2010, p. 1437).

And studies of the human genome may yet lead to innovative drugs for mental illness. Researchers have identified hundreds of genetic variants that increase the risk of autism, for example. Making sense of this deluge of new data is a challenge, but there are some hints of convergence: Many of the autism risk genes appear to be involved in common biological functions, such as synaptic signaling and brain development.

Researchers now have novel tools at their disposal to follow up on these leads, including the ability to create neurons from reprogrammed stem cells from patients (Science, 26 November 2010, p. 1172). Gene expression profiling—investigating the activity of thousands of genes—in these human neurons could help researchers identify the biological pathways disrupted by autism risk genes and screen drugs to correct them, says Daniel Geschwind, a neurogeneticist at UC Los Angeles.
“I'm extremely optimistic that [by] using a combination of these methods we're going to be developing new classes of drugs,” he says. “I think we're on the threshold of something really exciting.”

## Why Are Our Brains So Big?

By Michael Balter

The leading hypothesis for why humans have such large brains is that we live in large social groups, which requires considerable processing power. But which came first, the big groups or the big brains?

With an average volume of about 1400 cubic centimeters, the human brain is more than three times as large as that of the chimpanzee, our closest living evolutionary cousin. And although the brains of whales and elephants are bigger in absolute terms, once adjusted for body weight the size of the Homo sapiens brain outstrips that of any other animal.

For the past 2 decades, the leading explanation for why natural selection bestowed such generous largesse on the human noggin has been the social brain hypothesis. The researcher with whom the idea is most closely associated, psychologist Robin Dunbar of the University of Oxford in the United Kingdom, has argued that brain size—and particularly the size of the brain's neocortex—most closely correlates with the size of a species' social group. Keeping track of who is doing what to whom, Dunbar and other researchers argue, requires considerable processing power, and so bigger groups demand bigger brains.

Numerous studies by Dunbar and others appear to support the link between a species' group size and neocortex size, especially among primates (Science, 7 September 2007, p. 1344). And recently, a number of brain-scanning studies in humans and monkeys have found correlations between the size of social networks and that of specific brain areas linked to sociality (Science, 4 November 2011, p. 578). One study of people, for example, found a positive relationship between gray matter density and the number of Facebook friends an individual has.
“The social brain hypothesis is widely accepted and has generated a huge amount of research,” says Robert Seyfarth, a biological anthropologist at the University of Pennsylvania. Yet there are competing explanations for our big brains, and Seyfarth and other researchers are concerned that Dunbar's formulation may be too simplistic to account for the complex course of human brain evolution. The social brain hypothesis “now needs to be balanced against other complementary hypotheses and more fully integrated” with evidence from cognitive neuroscience, says Robert Barton, an evolutionary anthropologist at Durham University in the United Kingdom.

Researchers generally agree that our large brains have something to do with how smart we are and why we were able to take over the planet. But if the evolutionary path to a big brain were easy, one might think that every animal would have one. Yet brains use a lot of energy, and most species have maintained smaller ones throughout their evolution, apparently avoiding the cost of fueling all that processing power.

One key question is whether a species' group size, which is usually considered a proxy for its social complexity, was the driving factor in the evolution of bigger brains, or if it was the other way around: Natural selection could have favored larger brains for other reasons, such as greater innovation in food-foraging and tool-using skills, which then made larger social groups possible. Richard Byrne, a cognitive neuroscientist at the University of St Andrews in the United Kingdom, argues that great apes like chimps, gorillas, and humans evolved larger brains “to solve challenging food-acquisition problems better than monkeys,” with whom they competed for resources in the wild.
His “Machiavellian intelligence” hypothesis, formulated in the late 1980s with St Andrews colleague Andrew Whiten, focused on the cognitive challenges of balancing competition and cooperation within primate groups, and was a forerunner to the social brain hypothesis. These challenges, Byrne says, led to brains better equipped to understand cause and effect—necessary for the development of tool use, such as fishing termites out of trees with sticks as wild chimpanzees do, and understanding the intentions of other individuals, thus making ever-more-complex social relations possible.

A similar emphasis on food-gathering innovations has been argued by biologists Simon Reader of McGill University in Montreal, Canada, and Kevin Laland of St Andrews. Reader says that although he finds the new brain-scanning studies showing correlations between the sizes of certain brain regions and group size “particularly interesting,” there are likely to be “multiple drivers of cognitive evolution” that “need not be mutually exclusive.”

Dunbar, however, rejects the notion that food-foraging innovations alone selected for big brains, and then the capacity to live in large groups “came for free.” Living in large groups, Dunbar says, is “hugely costly” in time and energy.

Beyond offering alternatives to the social brain hypothesis, Seyfarth and some other researchers have also challenged its assumptions. They argue that group size is too crude a measure of the brain-taxing demands of social relationships. “The best predictor of reproductive success is the nature of the social relationships animals form, which are often with a smaller fraction of the entire group,” Seyfarth says, citing his own studies and those of others, especially with baboons.
Recently, a team led by Carel van Schaik, a primatologist at the University of Zurich in Switzerland, has put forward what it calls the “cultural intelligence hypothesis,” which attempts to incorporate a broader range of factors, including an animal's behavioral flexibility and social learning—the transmission of skills and information within a species.

Van Schaik and his colleagues argue that the social brain hypothesis does a poor job of predicting the brain size of primates such as orangutans and aye-ayes, whose social relations are relatively simple and yet who have larger brains than closely related primates living in much more complex social groups. And the team points out that small-brained hyenas and even some bats live in societies as highly complex as those of many primates. “The social brain hypothesis seems to be too narrow,” van Schaik says, adding that the cultural intelligence hypothesis incorporates the “ecological skills learned through social learning processes, whereas the social brain hypothesis mainly or exclusively focuses on social skills.”

But Dunbar insists that his version of the social brain hypothesis also incorporates these additional factors. “I don't really see the difference” between the two hypotheses, Dunbar says. For him, culture is just another “mechanism that allows the social brain to do its work.”

And he rejects the idea of treating ecological and social factors as two separate domains. “The evolutionary logic is that animals need to increase group size to solve one or more ecological problems, such as the risk from predators, that are best solved socially,” Dunbar says. But living in large groups has its own costs in time and energy, especially when competition arises, he adds. This leads individuals to also form smaller, more close-knit alliances of the type that Seyfarth and others have observed.
“Primates do a balancing act between maintaining group cohesion and using close allies to buffer themselves against the costs of group living, which is no mean feat cognitively,” Dunbar says.

For now, just how the human brain got so big remains a puzzle. Fortunately, natural selection has already made it big enough that this is one mystery we might someday solve.

8. Mysteries of the Brain

# Why Are You and Your Brain Unique?

1. Greg Miller

Recent work has provided clues about the neural basis of individual differences in behavior, cognition, and even personality, but there's still much we don't know.

Take a seat in any coffeehouse or pub, and odds are it won't be long before you overhear people talking about why someone behaves the way they do. You might catch two mothers talking about whether a child's short attention span was inherited from his father. Or two friends speculating on whether a mutual acquaintance is so neurotic because of the way she was raised.

The differences in our individual talents and tendencies, and their origins, are an endlessly fascinating conversation topic. They're becoming a hot issue for scientists, too. Now that the nature vs. nurture debate has been declared a draw, researchers have turned their attention to how genetics and life experiences interact to make each person's brain unique. Recent work has provided clues about the neural basis of individual differences in behavior, cognition, and even personality, but there's still much we don't know.

Virtually every brain-based trait that's been examined, including general intelligence, personality traits such as extraversion, and the risk of mental illness, turns out to be influenced by the genes one inherits, says Robert Plomin, a behavioral geneticist at King's College London. “You don't find that any [traits] are 100% heritable,” Plomin says.
“You do find that some are somewhat more heritable than others.” That leaves a lot of room for our brains to be molded by environmental influences. Neuroscientists have long thought that experiences early in life have an especially powerful influence on brain development, and although that's probably true, recent research suggests that even older brains may be more malleable than once thought (see p. 36). Our memories also make each of us unique, and they pose scientific mysteries of their own (see p. 30).

Genes and life experience certainly interact to generate individuality. One much-discussed and debated example involves the gene for a protein that shuttles the neurotransmitter serotonin back into neurons after it's been released. People who possess a particular variant of this gene were more likely to experience symptoms of depression and suicidal thinking, and were more likely to receive a formal diagnosis of depression—but only when they also experienced a stressful life event, such as the loss of a job or a loved one (Science, 18 July 2003, p. 291).

Plomin thinks our genes may also shape our environment. One of the most puzzling findings in behavioral genetics, Plomin says, is the observation that the genetic influence on intelligence appears to be strongest later in life. That is, while studies of children generally find that about 20% of individual differences in intelligence can be attributed to genetics, that figure increases with the age of the group examined and reaches 80% in some studies of older adults. A possible explanation, Plomin suggests, is that genes nudge people toward choices that shape their environment in a particular way, which in turn affects their intellectual prowess.
It may be that what's inheritable about intelligence is not so much raw brainpower, Plomin says, but the propensity to engage in certain activities: “Do you read books and talk to people who make you think more, or do you lobotomize yourself with television?” Such small decisions could snowball over the course of a lifetime. It's a difficult hypothesis to test, but it's not intractable, Plomin says.

Another insight into how genes and the environment can conspire to influence behavior comes from epigenetics, a field concerned with the chemical alterations to DNA and its associated proteins that result in long-lasting changes in gene expression. Recent studies, mostly with mice, have linked physical abuse and other adverse experiences early in life to epigenetic changes that alter the brain's response to future stressful events (Science, 2 July 2010, p. 24).

Epigenetics may also amplify individuality by interjecting an element of randomness into how genes are expressed that ultimately affects behavior, says Moshe Szyf, an epigeneticist at McGill University in Montreal, Canada. Szyf notes that studies with genetically identical lab mice and with human identical twins have consistently found individual differences in epigenetic alterations to DNA. These epigenetic modifications “are enzymatic processes that are prone to error,” Szyf says. “And that can just stochastically create differences in the way genes are expressed in the brain.”

Other researchers have been studying another potential source of random variation in gene expression in developing brain cells: so-called jumping genes, wandering bits of DNA that can insert themselves into other genes and alter their function (Science, 15 April 2011, p. 300). But so far it's not clear what effect this has on brain function, let alone behavior.
For the most part, neuroscientists have viewed individual differences in brain anatomy and activity as a source of statistical error rather than a source of insight, says Geraint Rees, a cognitive neuroscientist at University College London. Most neuroimaging studies, for example, average brain activity across people, in part because that requires fewer subjects and cuts down on expensive scanner time.

But individual variations in the brain aren't hard to find for those who look. Rees says his interest was piqued by studies finding that the size of the human primary visual cortex can vary up to three-fold. He wondered whether that resulted in differences in vision, an idea his lab has been investigating with a combination of optical illusions and functional magnetic resonance imaging (fMRI). At the end of 2010, Rees's group reported online in Nature Neuroscience that people with a smaller visual cortex more strongly experience certain illusions in which the apparent size of an object depends on its visual context. The findings suggest to Rees that even something as basic as how we perceive the world around us varies from person to person in subtle ways that can be traced to variations in brain anatomy.

Richard Haier, a neuroscientist at the University of California, Irvine, is one of the few intrepid scientists who've waded into the potentially touchy realm of individual differences in the brain that influence intelligence. His work, beginning in the late 1980s, has identified a network of regions of parietal and frontal cortex whose anatomy and activity correlate with scores on tests of general intelligence. At the same time, Haier's work suggests that this network isn't identical in all individuals with similar intelligence scores. In other words, smart brains may be built in a variety of ways.
The largest study ever undertaken to look at individual wiring variations in the human brain is the Human Connectome Project, a 5-year, $38.5 million effort funded by the U.S. National Institute of Mental Health. Now in its third year, the project aims to enroll 1200 healthy adults for a battery of behavioral tests and brain scans, including diffusion imaging scans that show connections between regions of the brain. The overall goal is to investigate individual variations in brain structure and activity and how they may correlate with differences in memory, emotion, and other functions, says David Van Essen of Washington University in St. Louis, Missouri, who is one of the project's leaders.

The project will also examine heritability of brain characteristics by enrolling 300 pairs of twins, plus one or more non-twin siblings for each pair. Researchers will collect DNA for genotyping and possibly whole genome sequencing if the cost drops enough by the final year of the project, Van Essen says.

Whether the differences in neural circuitry that make each person unique will be visible at the resolution of MRI scans is an open question. “It's sobering for sure that the resolution is only at the level of a millimeter or two, which means that each voxel contains literally hundreds of thousands of neurons or axons,” Van Essen says. (A voxel is the smallest volume of brain tissue discernable in a brain scan.) “But I'm confident we'll see interesting individual differences.”

Other researchers are working on far more detailed maps of neural circuitry. Sometimes called microconnectomics, these efforts employ recently developed methods in genetic engineering, automated microscopy, and image analysis to map out the synaptic connections of individual neurons. So far, the approach has been applied only to millimeter-size chunks of tissue in worms and mice, but some researchers see a microconnectome of the human brain as an ultimate if distant goal.

It's not clear what such a circuit diagram would reveal. Proponents think it would explain a great deal about how the brain works and about the nature of individual differences. Critics contend that deciphering brain function from a circuit diagram—no matter how detailed—is like trying to figure out what a computer does by studying its wiring diagram. In both cases, the circuitry may say something about what the machine is capable of, but it's the precise pattern of electricity coursing through it at a given time that determines what it's actually doing.

It seems far off, but there may yet come a day when brain scans and genetic tests can predict—with enough accuracy to matter in the real world—an individual's mental strengths and weaknesses, predisposition to psychiatric problems, or maybe even his favorite color. In the meantime, in the cafes and bars, there will be plenty to discuss.

9. Mysteries of the Brain

# Can We Make Our Brains More Plastic?

1. Gretchen Vogel

Neuroscientists have begun to understand a few of the factors that govern the flexibility of certain parts of the maturing brain, which may one day make it possible to rewire the adult brain.

Rewiring the brain is hard work, and as we age it gets even more difficult. A baby exposed to multiple languages can, without apparent effort, become fluently bilingual or even trilingual. Most adults have to work much harder to master new languages, and few are able to achieve the fluency of native speakers.

There are good reasons that our brains become less flexible as they mature: A developing brain gives up some of its plasticity in favor of efficiency and stability. “A fully plastic brain is not very helpful,” says Gerd Kempermann, a neuroscientist at the Center for Regenerative Therapies Dresden and the German Center for Neurodegenerative Diseases. “It learns everything but remembers nothing.” Too much plasticity may also play a role in some neurological disorders, including epilepsy and schizophrenia.

In certain situations, however, more plasticity could be helpful, making it easier for patients to recover after a stroke or spinal cord injury, for example. And it would be nice to effortlessly pick up the intricacies of Russian grammar. So, will we one day be able to turn on—and control—our brain plasticity at will?

Neuroscientists have begun to understand a few of the factors that govern the flexibility of certain parts of the maturing brain. By studying the development of sensory systems such as sight and hearing, they have uncovered a network of genes and proteins that influence so-called critical periods, windows of time in which the brain is primed for certain types of input. It is during these critical periods that the brain becomes wired for certain tasks, such as turning the signals received from the eyes into recognizable images, or distinguishing sounds present in spoken language. If a brain doesn't receive the right inputs during a critical period, it is extremely difficult to recover from the deficits that result. Children born with cataracts or a lazy eye will never see clearly unless the condition is corrected in the first years of life. Both mice and humans that lack adequate social contact as babies and juveniles have permanent behavioral and cognitive deficits.

The critical periods that arise earliest in development govern senses such as sight, hearing, and balance. Later ones govern higher-order skills such as language acquisition and social interactions. Most critical periods occur during infancy and childhood, when the brain is still growing and producing new neurons. But more important than the new cells are the connections the neurons make with each other. Connections that receive reinforcement are strengthened and protected, for example, by the growth of myelin sheaths around axons. Connections that go unused are pruned back.

In recent years, evidence has mounted that the critical periods close not only because plasticity-driving signals decrease, but also because the brain begins to produce signals that limit new connections between cells. When scientists use genetic tricks to remove these brakes on brain plasticity in experimental mice, the critical periods last well into adulthood. That's encouraging to those who wish to improve plasticity in adult humans, says Carla Shatz, a neuroscientist at Stanford University in Palo Alto, California. Boosting the plasticity of an adult human brain may not require replacing a whole network of signals that turn on that flexibility, she suggests. “Just take away the brakes,” she says, and the brain can perhaps recover its lost capabilities.

In lab animals, at least, that's possible: Researchers have bred mice that lack some of the various genes that act as plasticity brakes. When these so-called knockout mice lose sight in one of their eyes for a few days—the researchers suture it shut—their brains quickly compensate and reassign more area to the good eye, a process that resembles the plasticity seen in newborn brains. The mutant mice also recover from strokes better than control animals. And in several tests of neural function, they seem like supermice. On a rotarod, a kind of motor skills test for lab mice that resembles a log-rolling contest, the knockout animals “are like Olympians,” Shatz says. She and her colleagues have done a range of behavioral tests on their knockout mice. So far, “they're really good at everything.” That's certainly not the whole story, she adds: “There has to be some downside.”

One likely disadvantage is that too much rewiring can lead to short circuits—in a brain, that could mean seizures. Indeed, those same knockout mice respond to a smaller dose of seizure-inducing drugs than typical mice. In humans, the result of unleashing brain plasticity might be epilepsy. Shatz notes that unexplained epilepsy is much more common in childhood—when the brain is more plastic in many areas—and some epilepsy patients eventually outgrow their disease. A newborn “has to learn things fast or it's not going to survive. It's worth the risk of instability. But it's kind of dangerous to learn that fast. Once the organism has acquired fundamental experiences, you slow it down a bit and put on these brakes,” Shatz says. Closing critical periods may also provide a firm foundation for further brain development, says Brigitte Röder, a neuropsychologist at the University of Hamburg in Germany. “If you're always shaking the basement, you can't build a taller house,” she says. Takao Hensch, a neuroscientist at Boston Children's Hospital, notes that missing plasticity brakes are suspects not only in epilepsy but also in schizophrenia and Alzheimer's disease.

Some evidence suggests that the brain's plasticity can be augmented without the danger posed by completely removing the brakes. Michael Merzenich, a neuroscientist and professor emeritus at the University of California, San Francisco, explores how certain kinds of sensory signals—mainly sound and touch—can rewire adult brains. He and his colleagues have shown that specially designed computer games can improve performance on memory and other cognitive tasks in both children and older adults, even months after the training stops. Research led by Daphne Bavelier, a neuroscientist at the University of Geneva in Switzerland, has shown that playing action video games, such as Medal of Honor, can improve vision and several kinds of cognitive skills.

The success of those games might be linked to the brain's reward and attention systems, Hensch says. Several of the molecules identified as plasticity brakes involve these pathways. Two drugs that enhance attention, fluoxetine (better known as Prozac) and Aricept, can lengthen or even reopen critical periods in experimental mice. Both drugs are now in clinical trials for reversing the effects of lazy eye in childhood, and in one clinical trial fluoxetine helped stroke patients recover lost motor skills.

Fluoxetine also seems to influence another type of brain plasticity, the growth of new neurons throughout life in certain parts of the brain. Although most neurogenesis stops in childhood, two areas of the brain keep producing new neurons: the subventricular zone, which connects to the olfactory bulb; and the subgranular zone of the dentate gyrus, a part of the hippocampus. There are several ways to boost the production of new neurons in these regions; increased physical exercise and exposure to unfamiliar or complex environments are two clear neurogenesis enhancers. Fluoxetine and other antidepressants that act through the serotonin pathway also increase the neuronal birthrate and may keep the newborn neurons flexible longer.

What this ongoing production of neurons means for the brain is unclear, however. Although the rate of adult neurogenesis in an individual's brain is correlated with certain kinds of learning, the connection is not straightforward.

Some evidence points to the idea that in the dentate gyrus, the new neurons may aid the brain in adjusting to new environments, perhaps by helping the brain detect unfamiliar aspects of an otherwise familiar setting. Kempermann has proposed that adult neurogenesis might be an adaptation that has helped certain animals—mice and humans, for example—to adapt to and thrive in a wide variety of unstable ecological niches.

The new neurons “are an extreme form of plasticity,” says Fred Gage, a neuroscientist at the Salk Institute for Biological Studies in San Diego, California. He and his colleagues have found that the newborn cells seem to have their own critical period, lasting roughly 4 weeks, during which they are particularly excitable. (Recent studies suggest that fluoxetine might lengthen this period.) Gage speculates that the birth of neurons provides a continually fresh source of short-term critical periods for certain kinds of learning throughout life. The newborn neurons “are young kids that respond to everything,” he says. By the time one set of neurons has grown up and settled down, there's another set of cells ready to take their place.

Determining how those new neurons interact with the circuits already in place might help scientists better understand how the circuits are wired in the first place—and how to safely and efficiently rewire when needed.

10. Mysteries of the Brain

# Brain Teasers

1. Greg Miller

The brain poses many more than just the five quandaries highlighted in this package. Science summarizes six more mysteries of the brain that any neuroscientist should be proud to tackle.

The brain poses many more than just the five quandaries we've highlighted on these pages. Delve into any one of them and you'll soon run into another. Remembering the past (p. 30), for example, is a significant part of human experience, which raises one of the slipperiest questions in all of science: What is the biological basis of consciousness? (See Science, 1 July 2005, p. 79.) The elusive nature of that problem has convinced some researchers to stick to memory. “It's as close to consciousness as I can get and still look myself in the mirror in the morning,” quips Loren Frank, a neuroscientist at the University of California, San Francisco. Below are six more mysteries of the brain that any neuroscientist should be proud to tackle.

Star power. Researchers now realize that star-shaped astrocytes do more than just clean up after neurons. Recent studies find that they help shape synaptic connections in the developing brain, influence synaptic function throughout life, and may go haywire in a number of neuropsychiatric disorders. Given that astrocytes make up nearly half the cells in the human brain, we know too little about them.

Uncharted territories. What the heck does the habenula do? Or how about the retrosplenial cortex? Some brain regions get all the love from neuroimagers (yes, we're talking about you, anterior cingulate), while others get ignored. There's still much to learn about these rarely studied regions, their anatomical connections, and their contributions to cognition and behavior.

Snooze fest. Why we sleep isn't just a mystery of the brain, but considering that it's the brain that switches animals into slumber mode, the organ must be at the heart of it. Do animals sleep simply to conserve energy and stay out of trouble in the dangerous dark? Or is sleep necessary, as some newer research suggests, to reset the brain to meet the challenges of a brand-new day? And what, if anything, does dreaming accomplish?

What's the code? Asking how information is encoded in the nervous system may be one step shy of asking how the brain works. But getting a better handle on how neural firing patterns—or is it which neurons are doing the firing?—represent information is crucial for understanding everything we do, including perception, memory, and decision-making.

Getting reconnected. The inability of the central nervous system of adult mammals to regenerate after injury is a vexing puzzle. Research with rodents has led to a better understanding of the cellular signals that put the brakes on such repair. But translating that work to people with spinal injuries remains an elusive goal.

Feeling immune. Many immune system proteins take on different roles in the brain, and immune responses are known or suspected contributors to a number of brain disorders. Yet scientists have only scratched the surface of how the immune system and nervous system interact. Recent findings that the gut microbiota may act through the immune system to influence the brain and behavior add another intriguing twist.