News this Week

Science  09 Apr 2010:
Vol. 328, Issue 5975, pp. 150
  1. Scientific Literacy

    NSF Board Draws Flak for Dropping Evolution From Indicators

    1. Yudhijit Bhattacharjee

    Americans are less likely than people in the rest of the world to believe that humans evolved from earlier species and that the universe began with a big bang. Those findings appear repeatedly in surveys of scientific literacy, and until this year they also showed up in the National Science Foundation's biennial compilation of the state of global science.

    But the 2010 edition of Science and Engineering Indicators omits any mention of those two hot-button issues in its chapter on public attitudes toward science and technology. A section describing the survey results and related issues was edited out of the massive volume by the National Science Board (NSB), NSF's oversight body and the official publisher of Indicators. Board members say the answers don't properly reflect what Americans know about science and, thus, are misleading. But the authors of the survey disagree, and those struggling to keep evolution in the classroom say the omission could hurt their efforts.

    “Discussing American science literacy without mentioning evolution is intellectual malpractice,” says Joshua Rosenau of the National Center for Science Education, an Oakland, California–based nonprofit that has fought to keep creationism out of the science classroom. “It downplays the controversy.” The 2008 edition of Indicators, for example, discussed “Evolution and the Schools” along with its analysis of the two survey questions, and the 2006 edition contained an article titled “Evolution Still Under Attack in Science Classrooms.”

    NSB officials counter that their decision to drop the survey questions on evolution and the big bang from the 2010 edition was based on concerns about accuracy. The questions were “flawed indicators of scientific knowledge because the responses conflated knowledge and beliefs,” says Louis Lanzerotti, an astrophysicist at the New Jersey Institute of Technology in Newark and chair of the board's Science and Engineering Indicators (SEI) committee. John Bruer, a philosopher and president of the James McDonnell Foundation in St. Louis, Missouri, and the lead reviewer for the chapter, says he recommended removing the text and related material because the survey questions “seemed to be very blunt instruments, not designed to capture public understanding” of the two topics.

    The board's action surprised science officials at the White House, to whom the board officially submits Indicators. “The [Obama] Administration counts on the National Science Board to provide the fairest and most complete reporting of the facts they track,” says Rick Weiss, a spokesperson and analyst at the White House Office of Science and Technology Policy. In recent weeks, OSTP has asked for and received an explanation from the board about why the text was deleted.


    Science has obtained a copy of the deleted text, which does not differ substantially from what has appeared in previous Indicators. The two questions (see graphic) have been part of an NSF-funded survey on scientific understanding and attitudes toward science since 1983. The deleted section notes that 45% of Americans answered “true” to the statement “Human beings, as we know them today, developed from earlier species of animals,” a percentage similar to that in previous years and much lower than in Japan (78%), Europe (70%), China (69%), and South Korea (64%). A similar gap exists for the response to the statement “The universe began with a big explosion,” with which only 33% of Americans agreed.

    Bruer proposed the changes last summer, shortly after NSF sent a draft version of Indicators containing this text to OSTP and other government agencies. In addition to removing a section titled “Evolution and the Big Bang,” Bruer recommended that the board drop a sentence noting that “the only circumstance in which the U.S. scores below other countries on science knowledge comparisons is when many Americans experience a conflict between accepted scientific knowledge and their religious beliefs (e.g., beliefs about evolution).” At a May 2009 meeting of the board's Indicators committee, Bruer said that he “hoped indicators could be developed that were not as value-charged as evolution.”

    Bruer, who was appointed to the 24-member NSB in 2006 and chairs the board's Education and Human Resources Committee, says he first became concerned about the two survey questions as the lead reviewer for the same chapter in the 2008 Indicators. At the time, the board settled for what Bruer calls “a halfway solution”: adding a disclaimer that many Americans didn't do well on those questions because the underlying issues brought their value systems into conflict with knowledge. As evidence of that conflict, Bruer notes a 2004 study described in the 2008 Indicators that found that 72% of Americans answered correctly when the statement about humans evolving from earlier species was prefaced with the phrase “according to the theory of evolution.” The 2008 volume explains that the different percentages of correct answers “reflect factors beyond unfamiliarity with basic elements of science.”

    George Bishop, a political scientist at the University of Cincinnati in Ohio who has studied attitudes toward evolution, believes the board's argument is defensible. “Because of biblical traditions in American culture, that question is really a measure of belief, not knowledge,” he says. In European and other societies, he adds, “it may be more of a measure of knowledge.”

    The scientist who first drew up the survey question thinks the board has made a big mistake. “Evolution and the big bang are not a matter of opinion,” says Jon Miller, a science literacy researcher at Michigan State University in East Lansing who conducted the survey until 2001. “If a person says that the earth really is at the center of the universe, … how in the world would you call that person scientifically literate?” Miller asks.

    Tom Smith of the National Opinion Research Center in Chicago, Illinois, which conducts the science knowledge survey for Indicators as part of the NSF-funded General Social Survey, does not believe the two questions are flawed. Prefacing the evolution and big bang statements with qualifiers might degrade the responses, he says, because “that could have the effect of tipping respondents off about the right answer.” He says NSF hasn't asked him to make any changes in the survey, which is now in the field for planned use in the 2012 Indicators.

    Not yet, at least. Both Lanzerotti and Lynda Carlson, director of NSF's statistical office that manages the survey and produces Indicators, say it is time to take a fresh look at the survey questions. Last week, after his interview with Science, Lanzerotti asked the head of NSF's Social, Behavioral and Economic Sciences Directorate to conduct a “thorough examination” of the questions through “workshops with experts.”

    Will the 2012 Indicators include a section on evolution? Lanzerotti, whose 6-year term ends next month, would like to see the topic handled “in the proper way,” using a different measuring instrument. In hindsight, he says, the 2010 Indicators should have explained why the two questions had been dropped.

    Miller believes that removing the entire section was a clumsy attempt to hide a national embarrassment. “Nobody likes our infant death rate,” he says by way of comparison, “but it doesn't go away if you quit talking about it.”

  2. Climate Change

    Scientists Ask Minister to Disavow Predecessor's Book

    1. Martin Enserink

    PARIS—Battles over the science of global warming are raging around the world, and France is the latest hot spot. Two months ago, former French science minister Claude Allègre published L'imposture climatique (The Climate Fraud), a scathing attack on the climate research field and the Intergovernmental Panel on Climate Change (IPCC). Allègre, an emeritus geochemist from the Institute of Earth Physics of Paris (IPGP), describes the field as a “mafia-like” system built around a “baseless myth.” Exasperated climate scientists have shot back, claiming the book itself is full of factual mistakes, distortions of data, and plain lies. And last week, more than 500 French researchers signed a letter asking Science Minister Valérie Pécresse to disavow Allègre's book by publicly expressing her confidence in French climate science.

    Science minister from 1997 to 2000, Allègre is a frequent guest on French radio and TV shows and has written more than 20 popular science books; his new climate book has sold more than 110,000 copies. But the letter to Pécresse says Allègre and IPGP Director Vincent Courtillot—a self-described “moderate global warming skeptic” who also expressed doubts about IPCC's conclusions in a book last year—have made “false accusations” and “have forgotten the basic principles of scientific ethics, breaking the moral pact that binds every scientist to society.”

    One key complaint about Allègre's book, which consists of a series of interviews with journalist Dominique de Montvalon, is that its hand-drawn graphics misrepresent data. A graph showing how temperatures rose and fell since 500 C.E., for example, is adapted from a 2008 paper in Climate Dynamics by Swedish paleoclimatologist Håkan Grudd. After he was alerted to Allègre's graph, Grudd noticed that until 1900 it largely matched his version. After that, the two started diverging, and whereas Grudd's graph ended in 2004, Allègre's continued with sharply declining temperatures, in an apparent extrapolation of global cooling, until 2100. In a statement he sent to Science, Grudd calls the changes “misleading and unethical.” Other climate scientists found similar problems. Louise Sime of the British Antarctic Survey says a graph Allègre attributed to her “is not an accurate drawing of anything. … It has nothing to do with my work.”

    Up or down?

    Paleoclimatologist Håkan Grudd says his temperature graph (red) doesn't match one drawn by Claude Allègre (black).

    CREDIT: HÅKAN GRUDD

    Allègre did not respond to requests for comment. In an e-mail posted on the blog of Sylvestre Huet, a science journalist at Libération, Allègre did not deny the discrepancies but said the hand-drawn graphs are nothing more than “virtual support for the written argument.” He also told Libération the letter to Pécresse was a “useless and stupid petition.” Climatologists, he was further quoted as saying, “have wasted a lot of public money, and they're afraid of losing their funding, afraid of losing their jobs.” Courtillot says that he finds it “quite astonishing” that scientists asked Pécresse, a politician, to take sides in a scientific debate.

    Pécresse has expressed her trust in French climate science and has asked the Academy of Sciences to organize a debate about the research. But she will not choose sides, a ministry spokesperson says: “The minister cannot decide who's right and wrong in a scientific debate.”

  3. Patents

    Cancer Gene Patents Ruled Invalid

    1. Eliot Marshall

    A legal bombshell hit the biotech world last week: A federal judge in New York City used sweeping language to invalidate a handful of human gene patents, casting doubt on hundreds more. The decision applies only in New York state and is sure to be appealed—a process that could take years. Still, it undercuts the idea that DNA sequence can be owned.

    Judge Robert Sweet of the federal district court in New York City ruled that key discoveries from the mid-1990s, the BRCA1 and BRCA2 genes associated with breast and ovarian cancers, are not truly inventions. They are “products of nature,” he writes in a blunt 29 March opinion. Legal claims generally refer to genes as “isolated DNA” to support the idea that sequence information is a product of human ingenuity, not nature. But many scientists view this as a “lawyer's trick,” Sweet observes. He seems to agree. The BRCA genes in this case, he writes, “are not markedly different from native DNA as it exists in nature.” And for that reason he concludes that they are “unpatentable.”

    That is exactly the outcome sought by a group of cancer patients, medical experts, and others who jointly brought this suit. Their case was presented to the court by the American Civil Liberties Union (ACLU) and the Public Patent Foundation (PUBPAT) of Yeshiva University's Benjamin N. Cardozo School of Law in New York City (Science, 22 May 2009, p. 1000). They argued that the patents should never have been granted. Their chief target was the diagnostic company Myriad Genetics in Salt Lake City, Utah, which has exclusive use of the BRCA genes. The second defendant was the University of Utah Research Corp. in Salt Lake City, which owns the patents. In addition, ACLU-PUBPAT sued the U.S. government, saying that by granting a monopoly on BRCA genes, the Patent and Trademark Office had violated everyone's freedom of speech.

    The critics zeroed in on the way Myriad blocked the work of independent researchers and diagnostic groups. For example, the suit tells how Myriad halted work by Haig Kazazian and Arupa Ganguly at the University of Pennsylvania (both plaintiffs). In the late 1990s, using their own methods, the Penn group offered BRCA screening to individuals for a fee, but not as a commercial enterprise. Myriad asked Penn to sign a license agreement, but the researchers declined, saying Myriad's terms were too restrictive. Myriad sued, and Penn halted the screening. Today, Ganguly is “very happy” that the court stepped in—but considers it late: “I could have done something with BRCA 10 years ago,” Ganguly says, but she has now moved on. Such impacts harmed the public interest, ACLU and PUBPAT argue. Sweet listed the “public harm” controversies in detail but set them aside as “not resolvable” in this proceeding. He also dismissed the suit against the government as moot.

    Myriad brushed off the defeat, saying it will appeal and that only seven of its 23 key patents were rejected. The company uses a private database of mutations and medical outcomes to analyze client DNA and gauge a person's risk for cancer. Myriad says it steers clients to early care and claims that “countless lives have been saved” as a result. For families with less common BRCA mutations, the company charges $3000 or more per test. In 2008, according to Sweet's opinion, Myriad's costs for providing tests were about $32 million and its revenue was $222 million.

    Puncturing legal doctrine.

    Isolated DNA cannot be patented, Judge Robert Sweet says, because it is a product of nature.

    CREDIT: COURTESY ROBERT W. SWEET

    Biotech legal experts were surprised by Sweet's decision, partly because gene patents have been around for so long. Li Westerlund, vice president for global intellectual property at Bavarian Nordic, a vaccinemaker, expressed a common bewilderment: “It seems very hard to argue that something that has been patented for decades should not be patentable,” she said. The decision threatens to make a risky business even more uncertain, she said.

    The first U.S. gene patent was granted in 1980, and the first BRCA patent in 1998. In Europe, researchers and health agencies bitterly resisted enforcement of BRCA patents, and the European Patent Office narrowed but did not quash BRCA patents (Science, 24 June 2005, p. 1851). Today, there are roughly 3000 to 5000 patents on human genes, estimates Robert Cook-Deegan, director of the Center for Genome Ethics, Law, and Policy at Duke University in Durham, North Carolina. ACLU claims that the New York decision is “the first time a court has found patents on genes unlawful.” Some people were also surprised by Judge Sweet's logic. He argues that Myriad's legal case focused too much on chemistry and structure but failed “to acknowledge the unique characteristics of DNA that differentiate it from other chemical compounds.” DNA, he writes, has “unique qualities as a physical embodiment of information” that direct the synthesis of other molecules in the body. It is DNA's information—not chemistry—that people want to patent, and Sweet believes that information comes from nature.

    If upheld on appeal, this decision could spike a lot of patents. The Biotechnology Industry Organization in Washington, D.C., warned about this in a brief to the court before the decision, saying that a decision in favor of ACLU-PUBPAT could undermine “the viability of the domestic biotechnology industry” and “would put at risk the validity of a whole host of patents on isolated natural substances.”

    Knocking down isolated gene patents could actually be good for some companies, says Cook-Deegan. “While this does threaten certain business models—the gene-at-a-time testing models—I think those don't have much of a survival value anyway.” He believes companies doing full genome sequence analysis and multiallele testing, such as Illumina, Affymetrix, 23andMe, and Navigenics, will benefit if the courts clear away small sequence claims. “It's good news in that it will reframe the debate, … which had been largely premised on tired old arguments about isolated DNA,” Cook-Deegan says.

    Many observers—including both Westerlund and Cook-Deegan—predict that the special patent court that will probably review this decision, the Court of Appeals for the Federal Circuit, will likely be more sympathetic to the company. Ultimately, the case could go all the way to the Supreme Court; no one's predicting what may happen at that stage.

  4. Paleoanthropology

    Candidate Human Ancestor From South Africa Sparks Praise and Debate

    1. Michael Balter

    Out of the ages.

    Australopithecus sediba makes its debut.

    CREDIT: L. BERGER ET AL., SCIENCE

    “Dad, I found a fossil!”

    Lee Berger glanced over at the rock his 9-year-old son, Matthew, was holding and figured the bone sticking out of it was probably that of an antelope, a common find in ancient South African rocks. But when Berger, a paleoanthropologist at the University of the Witwatersrand, Johannesburg, took a closer look, he recognized it as something vastly more important: the collar bone of an ancient hominin. Then he turned the block around and spotted a hominin lower jaw jutting out. “I couldn't believe it,” he says.

    Now on pages 195 and 205 of this issue of Science, Berger and his co-workers claim that these specimens, along with numerous other fossils found since 2008 in Malapa cave north of Johannesburg and dated as early as 2 million years ago, are those of a new species dubbed Australopithecus sediba. Sediba means “wellspring” in the Sesotho language, and Berger's team argues that the fossils have a mix of primitive features typical of australopithecines and more advanced characteristics typical of later humans. Thus, the team says, the new species may be the best candidate yet for the immediate ancestor of our genus, Homo.

    That last claim is a big one, and few scientists are ready to believe it just yet. But whether the new hominins are Homo ancestors or a side branch of late-surviving australopithecines, researchers agree that because of their completeness—including a skull and many postcranial bones—the fossils offer vital new clues to a murky area in human evolution. “This is a really remarkable find,” says paleontologist Meave Leakey of the National Museums of Kenya in Nairobi, who thinks it's an australopithecine. “Very lovely specimens,” says biological anthropologist William Kimbel of Arizona State University (ASU), Tempe, who thinks they are Homo.

    Such different views of how to classify these fossils reflect a still-emerging debate over whether they are part of our own lineage or belong to a southern African side branch. The oldest Homo specimens are scrappy and enigmatic, leaving researchers unsure about the evolutionary steps between the australopithecines and Homo. Some think that the earliest fossils assigned to that genus, called H. habilis and H. rudolfensis and dated to as early as 2.3 million years ago, are really australopithecines. “The transition to Homo continues to be almost totally confusing,” says paleoanthropologist Donald Johanson of ASU Tempe, who has seen the new fossils. So it is perhaps no surprise that the experts disagree over whether the new bones represent australopithecines or early Homo. And for now, at least, they don't seem to mind the uncertainty. “All new discoveries make things more confusing” at first, says anthropologist Susan Antón of New York University.

    The finds stem from a project Berger embarked on in early 2008 with geologist Paul Dirks, now at James Cook University in Townsville, Australia, to identify new caves likely to hold hominin fossils. Malapa, just 15 kilometers northeast of famous hominin sites such as Sterkfontein, had been explored by lime miners in the early 20th century; they apparently threw the block that Matthew Berger found out of the cave. (Matthew was originally included as a co-author on one of the papers, but Science's reviewers nixed that idea, Berger says.)

    When Berger's team excavated inside the cave, it found more of that first individual, a nearly complete skull and a partial skeleton of a boy estimated to be 11 or 12 years old, plus an adult female skeleton, embedded in cave sediments. These fossils are reported in Science. The researchers also found bones of at least two other individuals, including an infant and another adult female, that are yet to be published.

    Dirks enlisted several experts to help date the fossils. Labs in Bern, Switzerland, and Melbourne, Australia, independently performed uranium-lead radiometric dating on samples taken from cave deposits immediately below the fossils. The two analyses yielded dates of 2.024 million and 2.026 million years, respectively, with maximum error margins of ±62,000 years. Paleomagnetic studies suggest that the layers holding the fossils were deposited between 1.95 million and 1.78 million years ago, and animal bones found with the hominins were consistent with these dates.
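
    For a feel for how two such independent dates compare, here is a minimal sketch, not taken from the paper, that treats the quoted maximum error margins as if they were symmetric 1-sigma uncertainties (an assumption made purely for illustration) and computes an inverse-variance weighted mean along with a simple agreement check.

        # Minimal sketch: comparing the two independently measured U-Pb dates
        # quoted above (2.024 and 2.026 million years, each with a maximum
        # error margin of +/-62,000 years). Treating those margins as
        # symmetric 1-sigma uncertainties is an assumption made here for
        # illustration; the paper's own error treatment may differ.
        dates_ma = [2.024, 2.026]     # ages in millions of years
        sigma_ma = [0.062, 0.062]     # assumed 1-sigma uncertainties (Ma)

        # Inverse-variance weighted mean of the two dates.
        weights = [1.0 / s**2 for s in sigma_ma]
        mean = sum(w * d for w, d in zip(weights, dates_ma)) / sum(weights)
        mean_sigma = (1.0 / sum(weights)) ** 0.5

        # Simple consistency check: do the dates agree within their errors?
        diff = abs(dates_ma[0] - dates_ma[1])
        combined_sigma = (sigma_ma[0]**2 + sigma_ma[1]**2) ** 0.5

        print(f"weighted mean age: {mean:.3f} +/- {mean_sigma:.3f} Ma")
        print("consistent" if diff < combined_sigma else "inconsistent")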

    The uranium-lead dating is “credible” and indicates that the fossils are no more than 2 million years old, says geochronologist Paul Renne of the Berkeley Geochronology Center in California, citing the strong reputations of the Bern and Melbourne groups. But Renne regards the paleomagnetic work, which relies on correctly identifying ancient polarity reversals in Earth's magnetic field, as less convincing. The cave's stratigraphy might not be complete enough to formally rule out a much younger paleomagnetic signal for the fossils, he says. Geochemist Henry Schwarcz of McMaster University in Hamilton, Canada, notes that the team suggests that the hominin bodies might have been moved by river flows after they fell into the cave from holes in the earth above. If so, the fossils may not be tightly associated with the dated deposits below and above them, Schwarcz says. But Dirks rejects that suggestion, pointing out that the bones were partly articulated with each other, implying that they were buried soon after death.

    For now, many researchers are accepting the dates and moving on to consider the team's hypothesis that A. sediba represents a new species transitional between australopithecines and early Homo. That idea fits with Berger's long-held—and controversial—view that A. africanus, rather than the earlier species to which “Lucy” belongs, A. afarensis, was the true ancestor of Homo. (Some of Berger's other past claims have sparked strong criticism, including a highly publicized 2008 report of small-bodied humans on Palau, which Berger thought might shed light on the tiny hobbits of Indonesia. But other researchers say the Palau bones belong to a normal-sized modern human population.)

    The team's claims for A. sediba are based on its contention that the fossils have features found in both genera. On the australopithecine side, the hominin boy's brain, which the team thinks had reached at least 95% of adult size, is only about 420 cubic centimeters in volume, less than the smallest known Homo brain of about 510 cc. The small body size of both skeletons, a maximum of about 1.3 meters, is typical of australopithecines, as are the relatively long arms. The team says A. sediba most resembles A. africanus, which lived in South Africa between about 3.0 million and 2.4 million years ago and is the most likely ancestor for the new species.

    Fossil finder.

    Nine-year-old Matthew Berger at the moment of discovery at Malapa cave.

    CREDIT: LEE BERGER

    But A. sediba differs from A. africanus in traits that also link it to Homo. Compared with other australopithecines, A. sediba has smaller teeth, less pronounced cheekbones, and a more prominent nose, as well as longer legs and changes in the pelvis similar to those seen in later H. erectus. This species, also known in Africa as H. ergaster and considered an ancestor of H. sapiens, first appears in Africa about 1.9 million years ago. Some features of A. sediba's pelvis, such as the ischium (bottom portion), which is shorter than in australopithecines, “do look like they are tending more in a Homo direction,” says Christopher Ruff, a biological anthropologist at Johns Hopkins Medical School in Baltimore, Maryland.

    The claimed Homo-like features suggest to some people that the fossils belong in that genus rather than Australopithecus. “I would have been happier with a Homo designation,” based on the small size of the teeth and also their detailed structure, such as the shape of their cusps, says Antón. “It's Homo,” agrees Johanson, citing features such as the relative thinness of the hominin's lower jaw.

    But others are unconvinced by the Homo argument. The characteristics shared by A. sediba and Homo are few and could be due to normal variation among australopithecines or to the boy's juvenile status, argues Tim White, a paleoanthropologist at the University of California, Berkeley. These characters change as a hominin grows, and the features of a young australopithecine could mimic those of ancient adult humans. He and others, such as Ron Clarke of Witwatersrand, think the new fossils might represent a late-surviving version of A. africanus or a closely related sister species to it, and so will be chiefly informative about that lineage. “Given its late age and Australopithecus-grade anatomy, it contributes little to the understanding of the origin of genus Homo,” says White.

    Putting A. sediba into Homo would require “a major redefinition” of that genus, adds paleoanthropologist Chris Stringer of the Natural History Museum in London. At no earlier than 2 million years old, A. sediba is younger than Homo-looking fossils elsewhere in Africa, such as an upper jaw from Ethiopia and a lower jaw from Malawi, both dated to about 2.3 million years ago. Berger and his co-workers agree that the Malapa fossils themselves cannot be Homo ancestors but suggest that A. sediba could have arisen somewhat earlier, with the Malapa hominins being late-surviving members of the species.

    The team thought long and hard about putting the fossils into Homo but decided that given the small brain and other features, the hominin was “australopithecine-grade,” says team member Steven Churchill of Duke University in Durham, North Carolina. However they are classified, the Malapa finds “are important specimens in the conversation” about the origins of our genus, says Antón, and “will have to be considered in the solution.”

  5. ScienceNOW.org

    From Science's Online Daily News Site

    Mass of the Common Quark Finally Nailed Down Using supercomputers and mind-bogglingly complex simulations, researchers have calculated, with 20 times greater precision than the previous standard, the masses of the particles called “up quarks” and “down quarks” that make up protons and neutrons. The new numbers could be a boon to theorists trying to decipher particle collisions at atom smashers like Europe's Large Hadron Collider or trying to develop deeper theories of the structure of matter.

    Notorious Drug Stanches Bleeding Despite its horrifying history of causing birth defects, thalidomide has recently made a comeback as a treatment for diseases such as the cancer multiple myeloma. Now, a new study suggests that the drug may also ease the symptoms of a genetic disease called hereditary hemorrhagic telangiectasia—a discovery that could guide researchers to novel therapies for HHT and other vascular diseases.

    Wind Turbines Would Support A Stable Grid Individual wind turbines and even whole wind farms remain at the mercy of local weather for how much electricity they can generate. But researchers have confirmed that linking up such farms along the entire U.S. East Coast could provide a surprisingly consistent source of power. In fact, such a setup could someday replace much of the region's existing generating capacity, which is based on coal, natural gas, nuclear reactors, and oil.

    That Tortilla Costs More Than You Think Which costs more, a dollar's worth of sugar or a dollar's worth of paint? That's not a trick question: The sugar costs more, if you count the liters of water that go into making it, according to a new study. Uncovering the water behind the dollars in sectors including cotton farming and moviemaking could help industries use water more wisely.

    Read the full postings, comments, and more at news.sciencemag.org/sciencenow.

  6. Planetary Science

    Fresh Signs of Volcanic Stirrings Are Radiating From Venus

    1. Richard A. Kerr

    Scientists long viewed Venus as a possible twin of Earth, but in recent decades the differences have begun outweighing the similarities. Venus is so hot that lead would melt on its surface, and geologists eventually pronounced Venus to be free of plate tectonics. Is there nothing these siblings have in common?

    In a paper published online this week in Science (www.sciencemag.org/cgi/content/abstract/science.1186785), researchers report new evidence that Venus is still reshaping its surface—as well as cooling its interior—through volcanic outpourings like Hawaii's or Iceland's. One contingent of planetary scientists has long held that most of the venusian surface renewed itself half a billion years ago in a single volcanic paroxysm and has been nearly dormant since, but the Science authors argue that Venus also resembles Earth in steadily resurfacing itself.

    Lookin' hot, and young.

    Extensive venusian lava flows (within dashed line) radiate heat more efficiently (pinks), implying they are less weathered and therefore young.

    CREDIT: S. E. SMREKAR ET AL., SCIENCE

    Planetary scientists are welcoming additional evidence of a geologically youthful Venus. “These lava flows have to be young,” says planetary scientist M. Darby Dyar of Mount Holyoke College in South Hadley, Massachusetts. “The most likely explanation is volcanism going on right now.”

    Present-day volcanism had seemed plausible. Radar imaging by the Magellan probe in the early 1990s revealed nine “hot spots,” low rises each a couple of thousand kilometers across. Lava flows had obviously streamed from the hot spot centers. Magellan gravity measurements also indicated that plumes of hot rock are slowly rising beneath venusian hot spots, much as plumes rise beneath Hawaii and Iceland to feed their eruptions.

    But researchers couldn't put an age on the flows. So planetary scientist Suzanne Smrekar of NASA's Jet Propulsion Laboratory in Pasadena, California, and colleagues looked at observations of three hot spots by the Visible and Infrared Thermal Imaging Spectrometer (VIRTIS) on the European Space Agency's Venus Express, which is still in orbit. VIRTIS found that the hot spots radiate distinctly more heat than the rest of the planet does. The group infers that the hot spot flows are probably still relatively fresh and unweathered by the 460°C temperature and crushing carbon dioxide atmosphere.

    No one has weathered enough Venus-like rock under venusian conditions to say just how recently the hot-spot lavas flowed. But when Smrekar and her colleagues assumed wide ranges of plausible lava volumes and lava production rates in their calculations, they got flow ages ranging from a few million years down to 2500 years. For a planet that's been around 4.6 billion years, that's young. The group goes further by suggesting that such widespread expanses of young-looking lava must have been produced by a steady, long-term, relatively slow volcanic outpouring rather than a single long-ago catastrophe.
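
    The arithmetic behind that bracketing is simple: a flow's age is roughly its total volume divided by the long-term rate at which lava was supplied. The short sketch below only illustrates that logic; the volume and eruption-rate ranges are hypothetical placeholders chosen to reproduce the order of magnitude quoted above, not the values Smrekar's team actually used.

        # Illustrative bounding calculation: flow age ~ volume / eruption rate.
        # The ranges below are made-up placeholders, NOT values from the paper;
        # they are chosen only so the output spans the ~2500-year to
        # few-million-year range mentioned in the article.
        volumes_km3 = (100.0, 10_000.0)      # assumed total flow volume range
        rates_km3_per_yr = (0.004, 0.04)     # assumed long-term eruption rates

        youngest = volumes_km3[0] / rates_km3_per_yr[1]   # smallest volume, fastest rate
        oldest = volumes_km3[1] / rates_km3_per_yr[0]     # largest volume, slowest rate

        print(f"implied flow ages: ~{youngest:,.0f} to ~{oldest:,.0f} years")
        # -> implied flow ages: ~2,500 to ~2,500,000 years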

    “They certainly have identified some very young flows,” says planetary scientist Mark Bullock of the Southwest Research Institute in Boulder, Colorado. “That's exciting.” It means that, like Earth, Venus is actively cooling its interior and reshaping its exterior through volcanism. But is such plume-fed, hot-spot volcanism the only way Venus has operated? “I don't think we know,” says planetary scientist Sean Solomon of the Carnegie Institution for Science in Washington, D.C. He and others see too many assumptions, uncertainties, and extrapolations in the case against venusian catastrophism. Someone, they say, needs to start cooking up a lot more venusian rock.

  7. Natural Disasters

    Scientists Count the Costs of Chile's Quake

    1. Antonio Regalado

    Chilean scientists have estimated that the magnitude-8.8 earthquake that rocked the country on 27 February caused some $200 million in damage to research facilities and equipment. Researchers are asking the government for emergency funding and for the establishment of a seismology center that would, among other things, run the nation's tsunami warning system.

    The quake shook laboratories, burned down an important chemistry center, and wrecked an oceanographic station, potentially setting back Chilean science by years (Science, 12 March, p. 1308).

    Scientists Unified for the Reconstruction of Chile, a lobby group formed following the disaster, said this week that it will send a list of seven recommendations to Chile's minister of education to get research in the country back on track. The list includes additional grants for students whose projects are on hold and an emergency $90 million line of credit so researchers can replace damaged equipment. “We lost very expensive instruments that you can only buy in the U.S., Europe, or Japan,” says Alfonso Droguett, a communications official at the University of Chile.

    The recommendations grew out of a meeting held in mid-March in Santiago chaired by Raúl Morales Segura, dean of sciences at the University of Chile. The group now includes the country's leading scientific societies and universities.

    Among the scientists' recommendations is to transfer control of the tsunami warning system from the navy to a new national seismology center run by scientists. The navy came under intense criticism for failing to provide an early warning of the tsunami that followed the quake. “We want a system where the scientists, not the uniformed people, are in charge,” says Droguett. In principle, such a system would be similar to the one in the United States, where tsunami warnings are handled by the National Oceanic and Atmospheric Administration.

  8. U.S. Census

    Asking the Right Question Requires Right Mix of Science and Politics

    1. Jeffrey Mervis

    On 1 April, the United States celebrated Census Day. At a cost of $14.7 billion, the 2010 Census represents the federal government's largest civilian undertaking. The stakes are high. The enumeration will determine how congressional districts are apportioned, as well as how nearly half a trillion dollars in federal funds should be disbursed.

    The U.S. Constitution mandates a decennial census, and Thomas Jefferson directed the first one in 1790. But the process for choosing questions has never been spelled out. The two-page form mailed out last month to every household in the country—“10 questions in 10 minutes”—asks for basic demographic information. For most social science and survey researchers, however, the meat lies in the second arm of the census, called the American Community Survey (ACS). It's a monthly sampling of 3 million households a year that asks residents 75 questions about everything from their incomes and disabilities to how long it takes them to get to work.

    The changing American Community Survey.

    ACS is a living document, with questions being added (above) and removed to reflect the data needs of various federal agencies.

    The ACS replaced what used to be called the “long form,” a detailed questionnaire that went to one in every six households once a decade along with the 10-question “short” form. The 2000 Census was the last one to employ the long form; the monthly ACS succeeded it in 2005. Except for the question about racial identity, the contents of the short form seldom change. But the ACS is a living document. So the procedure for determining its content can have a big impact on what information is collected.

    One significant change since the ACS replaced the long form has been the expansion of the executive branch's role in shaping the questions—an area that historically has been the sole purview of Congress. Federal officials say the shift, triggered by a push from the National Science Foundation (NSF) for a new question on the ACS, promises to improve the overall quality of data gathering. At the same time, Congress isn't likely to relinquish its role in this intensely personal communication with the American public.

    Watching anxiously from the sidelines are the social science and survey research communities that rely upon the data. “It's one of the great black holes of all time,” says Ed Spar, executive director of the Council of Professional Associations on Federal Statistics, about how the process looks to outsiders. He and others hope that the new procedures, still being ironed out, will be well defined and more transparent.

    A matter of degree

    Lynda Carlson, the head of NSF's Division of Science Resources Statistics, has seen the process from up close. In 2005 the career civil servant, known fondly to her peers as the Energizer bunny of the federal statistics community, had an idea to improve NSF's National Survey of College Graduates (NSCG). The longitudinal survey, conducted for NSF by the Census Bureau, follows the career paths of those with scientific training or in scientific occupations. Its sample is drawn from the long form of the decennial census, which asks residents for their highest level of education attained.

    The survey helps NSF carry out its mission to monitor the health of the U.S. scientific workforce. Unfortunately, the census question didn't distinguish those who majored in biology or chemical engineering from those who studied, say, English literature. As a result, the Census Bureau used proxies—type of job, income, and so on—to create the pool that NSF wanted tracked. Indeed, only one in three graduates surveyed in past years had majored in a relevant science and engineering field or was working in the sector. Wouldn't it be great, thought Carlson, if the census also asked people for their field of degree?

    Historically, it has taken an act of Congress to place a question on the census. Although that legislative mandate isn't codified anywhere, Congress has long operated on the principle that its members, because they are elected by the populace, should have the final say in determining the content. “If there wasn't legislation, it didn't go on the long form,” explains Susan Schechter, chief of the ACS office at the Census Bureau.

    Of course, those decisions are subject to the vagaries of the political process. Sometimes Congress bows to the wishes of an individual member. That happened when then-Senator Bob Dole (R–KS) won approval more than a decade ago for a three-part question about grandparents who provide care for their grandchildren. Other times it responds to outside pressure, as when the telecommunications industry and federal communications officials campaigned successfully for a census question on Internet and computer access as part of the 2008 Broadband Data Improvement Act.

    Accordingly, Carlson's first step in 2005 was to contact the Senate panel that oversees NSF. She eventually persuaded its staff to include a mandate for an ACS question about field of degree in a bill then under consideration by Congress reauthorizing NSF's programs. But the idea ran into opposition from the House subcommittee that oversees the census. “We hadn't even had a year of validated data from ACS,” recalls John Cuaderes, then staff director of the House panel. “Adding a question to a new survey creates the possibility of screwing it up.”

    Length was also an issue. As any survey researcher will tell you, making a questionnaire longer—ACS is already 28 pages—can undermine the quality of the answers. In particular, it can mean a lower response rate. But because filling out the census is mandatory—nonresponders can be fined up to $5000—the chief impact of a longer survey is that the government must spend more money to chase down those who didn't answer the first time around.

    Legislators who had fought for the ACS as an improvement to the decennial census were wary of the fiscal impact of adding a question, says Cuaderes, now Republican deputy staff director on the full committee. “If that happens, then the appropriators could say, ‘It's getting too expensive. Let's kill it.’” At the same time, he says that legislation seemed like overkill for the problem that NSF wanted to solve.

    A gentleman's agreement

    Fortunately for NSF, the executive branch had created a process that could, in theory, provide an alternative path for revising the ACS. In 2000, as the Census Bureau retired the long form and began gearing up for the ACS, the White House set up an interagency statistical committee under the Office of Management and Budget (OMB) to oversee this new survey instrument. After first documenting the legislative history of each question on the ACS, OMB in 2003 asked each of the 30 agencies to identify questions that might need to be revised, dropped, or added, along with a 5-year timeline for possible implementation with the consent of Congress.

    A 1980 law already gives OMB the authority to vet almost any request for information from a federal agency to make sure that it doesn't impose an undue burden on the public. But that authority had never been applied to the census. “Initially, we adhered to the existing policy that any changes [in the census] would require legislation,” explains an OMB official familiar with how the committee operates.

    Although he had balked at NSF's request for legislation, Cuaderes agreed to look for another way to get a field-of-degree question on the ACS. That search eventually led to discussions with OMB about how to handle an agency request that hadn't been enshrined in legislation.

    A deal was eventually struck in 2006. Simply put, says Schechter, “we came up with a gentleman's agreement that OMB would take the heat and be responsible for deciding what would be added to the ACS. And the federal government, through OMB, agreed it would be very restrictive [about using that authority] because of the mandatory nature of the collection.”

    That agreement gave NSF the green light to design and field-test its question. Finally, in January 2009 the field-of-degree question appeared on the ACS. The new mechanism also allowed the Census Bureau to move ahead with three new questions that had passed muster with the interagency review. The questions—on health insurance coverage, marital history, and disabilities stemming from military service—made their ACS debuts in 2008.

    Schechter says this second, parallel track gives the federal survey community a larger role in the decision-making process and promises to improve the value of the ACS to the government. The pending question on Internet access, mandated by the 2008 broadband law, she suggests, might have benefited from such vetting. “Is that the most important new question to ask?” she says. “Maybe, but it never even got reviewed by the federal committee.”

    To be sure, even survey researchers can disagree about the suitability of a particular question. For example, statistics council head Spar isn't sure that ACS is the right place for NSF's question on field of degree. “Why do you need that information at the tract level [the smallest unit of the census, typically a portion of a city block]?” he asks. “What are they going to do with it?” Carlson says that NSCG needs to report on tiny population subgroups, say, female Hispanic geologists (their identities remain confidential). “Over time,” she adds, “it will also give us better geographic estimates.”

    Once the 2010 Census is history, the federal statistics committee plans to discuss other ways to improve the ACS. One option is annual or biennial supplements that explore a particular topic, say, education or volunteerism, but that do not become part of the permanent ACS. “The key is to keep the burden down and keep the quality up,” says the OMB statistics official. “Despite the public's willingness to cooperate, people have their limits. The ACS is a national treasure, and we risk [diminishing] the quality of the survey if we load it up.”

  9. ScienceInsider

    From the Science Policy Blog

    The United Kingdom plans to create the world's largest marine reserve—an area larger than California—in its territorial waters in the Indian Ocean. The Chagos Archipelago boasts the largest coral atoll in the world and many smaller reefs.

    The National Science Foundation has quantified for the first time the scope of interdisciplinary research by asking U.S. graduate students about their doctoral dissertations.

    The editor of Medical Hypotheses, who got into hot water for publishing a paper by AIDS “denialist” Peter Duesberg, could be sacked unless he changes reviewing practices at the journal.

    Agricultural officials in California have abandoned plans to eradicate the invasive light brown apple moth and now hope to “contain, control and suppress” the pest.

    A U.K. parliamentary panel examining the hacking of e-mails at the University of East Anglia's Climatic Research Unit says researchers should make all of their evidence available to global warming skeptics and the public.

    The leading public research universities in the United States are holding five regional meetings this month as part of a campaign for increased government support.

    For the fifth consecutive year, the number of international students applying to U.S. graduate schools has risen, fueled by a boom in Chinese higher education. The trend erases a steep drop in 2003–05 attributed to tightened visa procedures.

    For the full postings and more, go to news.sciencemag.org/scienceinsider.

  10. Evolution of Behavior

    Did Working Memory Spark Creative Culture?

    1. Michael Balter

    A provocative model suggests that a shift in what and how we remember may have been key to the evolution of human cognition.

    Blackboard of the mind.

    Working memory is key to conscious thought.

    CREDIT: M. TWOMBLY/SCIENCE

    COLORADO SPRINGS, COLORADO—About 32,000 years ago, a prehistoric artist carved a special statuette from a mammoth tusk. Holding the abstract concepts of “human” and “animal” in his or her mind, the artist created an imaginary beast with the body of a human and the head of a lion. Archaeologists found the 28-centimeter-tall figurine in hundreds of pieces in the back of Germany's Hohlenstein-Stadel cave in 1939, and after World War II, they put the fragments back together, reconstructing the ancient artwork.

    Today, archaeologists hail the “Lion Man” as one of the earliest unambiguous examples of artistic expression, a hallmark of modern human behavior. The figurine “has acquired an iconic status for modern archaeologists as profound as it must have been for the original artisan,” wrote Thomas Wynn and Frederick Coolidge, both of the University of Colorado, Colorado Springs, in a paper last year. Wynn and Coolidge argue that the figurine's creation—as well as its subsequent reconstruction by archaeologists—is an excellent example of something unique to our species: an enhanced capacity to hold and manipulate information in one's conscious attention while carrying out specific tasks, an ability psychologists call working memory.

    Right now you are using working memory as you read this story: You are holding the concept of the figurine, or its image from the illustration above, in your mind. As you go from sentence to sentence, you are also remembering the meaning of each bit of text. And you must pay active attention, shutting out extraneous thoughts such as how your grant application is doing.

    We use our working memory for tasks as trivial as remembering a telephone number while we dial it, as technically challenging as designing an airplane, and as imaginative as creating works of art and music. Psychologists and neuroscientists consider working memory essential to the capacity for language, planning, and conscious experience. “Any symbolic processing, such as language, requires it,” says David Linden, a psychologist at Bangor University in Gwynedd, U.K. Working memory is “the blackboard of the mind,” as the late Patricia Goldman-Rakic of Yale University put it.

    In the view of Wynn and Coolidge—an archaeologist and a psychologist who form an unusual scientific partnership—a stepwise increase in working memory capacity was central to the evolution of advanced human cognition. They argue that the final steps, consisting of one or more genetic mutations that led to “enhanced working memory,” happened sometime after our species appeared nearly 200,000 years ago, and perhaps as recently as 40,000 years ago. With enhanced working memory, modern humans could do what their ancestors could not: express themselves in art and other symbolic behavior, speak in fully grammatical language, plan ahead, and make highly complex tools.

    “The enhancement of working memory opened up a whole load of new possibilities for hominids,” says anthropologist Dwight Read of the University of California, Los Angeles. “It was a qualitative shift.”

    However, others question how well the archaeological record supports Wynn and Coolidge's ideas. The pair argue that the final boost in working memory capacity came late in human evolution, whereas many archaeologists see the stirrings of complex behavior thousands of years earlier (see p. 164)—and some say, even in other species. “Enhanced working memory capacities can be observed in both modern humans and Neandertals,” insists Anna Belfer-Cohen, an archaeologist at The Hebrew University of Jerusalem. “Working memory does not appear to be the major shift” in human evolution.

    Memorable pair.

    Thomas Wynn (left) and Frederick Coolidge hypothesize that working memory shaped human evolution.

    CREDIT: EMILY WYNN

    Despite the critics, Wynn and Coolidge's ideas are increasingly popping up in scientific journals. The pair “have made a really big splash,” says Philip Barnard, a cognition researcher at the University of Cambridge in the U.K. This month, Current Anthropology devotes a special online supplement to the topic, and later in April, Wynn and Coolidge will update their ideas at a major meeting on the evolution of language in the Netherlands. The theory makes sense to many. “It is the most impressive, explicit, and scientifically based model” so far, says archaeologist Paul Mellars of the University of Cambridge.

    Thanks for the memory

    For as long as anyone can remember, researchers have been debating how many kinds of memory there are. In the late 19th century, American psychologist William James proposed two types of information storage: a temporary store that James called “the trailing edge of consciousness” and a more durable and even permanent store. His model was not immediately adopted, but by the 1960s, experiments with human subjects, including patients with amnesia or brain damage, had convinced many researchers that there are two types of memory: short-term and long-term.

    It soon became clear that short-term memory was not just a passive, temporary storehouse. Experiments showed that retaining information in conscious memory required active “rehearsal” to keep it there, as we do when we repeat a telephone number in our minds until we have the chance to write it down. Temporary memory is a dynamic part of the conscious mind, engaged in all sorts of work, including processing and manipulating information. In 1974, psychologists Alan Baddeley and Graham Hitch of the University of York in the U.K. proposed a new model, replacing the concept of short-term memory with what they called working memory.

    Baddeley and Hitch argued that working memory had three components: a “phonological loop,” which stores and processes words, numbers, and sounds; a “visuospatial sketchpad,” which stores and processes visual and spatial information; and a “central executive,” which focuses the mind's attention on the information in the other two systems and controls how it is used. In 2000, Baddeley added a fourth component, an “episodic buffer,” which serves as an interface between the other three systems and long-term memory (see diagram).

    Memory modules.

    This view of working memory remains influential.

    CREDIT: ADAPTED FROM ALAN BADDELEY

    Central to the model's experimental approach were so-called dual-task exercises, in which subjects had to do more than one memory-taxing thing at a time. Such experiments showed that some tasks interfere with each other: You memorize a list of words less efficiently when you are also reciting a series of numbers. But other tasks are apparently separate: You can remember the colors of items while reciting numbers.

    This research convinced Baddeley, Hitch, and many others that words and numbers are stored in one kind of temporary memory buffer while visual information goes into another. Meanwhile, the central executive is thought to control the flow of information between working memory's temporary storage buffers, long-term memory, and other cognitive functions. Indeed, although the Baddeley model now has competitors (see sidebar, p. 162), most researchers agree that dynamic concepts of working memory, rather than passive storage, best explain sophisticated human cognition, which requires that we be masters and not slaves of our memories.
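
    One crude way to see why dual-task interference points to separate stores is to model each buffer as having its own small capacity, with the central executive simply routing items by type. The toy sketch below is purely illustrative, a caricature of the Baddeley-Hitch architecture rather than anything drawn from the psychology literature; the capacity numbers and item lists are invented.

        # Toy caricature of the Baddeley-Hitch working-memory architecture.
        # Each temporary store has an arbitrary capacity; the "central
        # executive" routes incoming items to the store matching their type.
        # Two verbal tasks then compete for the same store, while a verbal
        # and a visual task do not -- mimicking the dual-task pattern
        # described in the text. All numbers here are invented.

        class Store:
            def __init__(self, name, capacity):
                self.name, self.capacity, self.items = name, capacity, []

            def hold(self, item):
                if len(self.items) < self.capacity:
                    self.items.append(item)
                    return True
                return False   # store is full: the item is lost

        class WorkingMemory:
            def __init__(self):
                self.phonological_loop = Store("phonological loop", 4)
                self.visuospatial_sketchpad = Store("visuospatial sketchpad", 4)

            def attend(self, item, kind):
                # The central executive routes by kind of information.
                store = (self.phonological_loop if kind == "verbal"
                         else self.visuospatial_sketchpad)
                return store.hold(item)

        wm = WorkingMemory()
        # Digits and words both go to the phonological loop and compete...
        kept_verbal = sum(wm.attend(x, "verbal")
                          for x in ["7", "3", "9", "cat", "dog", "sun"])
        # ...while colors go to the sketchpad and are unaffected.
        kept_visual = sum(wm.attend(c, "visual") for c in ["red", "blue", "green"])
        print(kept_verbal, "of 6 verbal items kept;", kept_visual, "of 3 visual kept")
        # -> 4 of 6 verbal items kept; 3 of 3 visual kept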

    Working memory allows us to juggle the past and the present in our conscious minds, says Jackie Andrade, a psychologist at the University of Plymouth in the U.K., and so is critical to complex behavior. We can “aim for future goals rather than just being driven by our current goals or environment,” she says. Or, as psychologist Nelson Cowan of the University of Missouri, Columbia, puts it, “Working memory holds the plan until we can execute it.”

    Wynn and Coolidge acknowledge that earlier hominins, and even apes, had enough working memory to carry out certain skilled tasks. The difference was one of degree, they say, and came in evolutionary stages. For example, the 1.8-million-year-old Homo erectus, creator of the first known bifacial tools, probably had more working memory than apes and australopithecines. The first big jump in working memory capacity, Wynn and Coolidge contend, came with the 650,000-year-old H. heidelbergensis, which made highly symmetrical hand axes—a talent that probably required holding a mental template of the tool in the mind while making it (Science, 6 February 2009, p. 709).

    A meeting of minds

    Wynn and Coolidge note that working memory has two key features that could make it subject to natural selection: It varies among individuals, and that variation may have a genetic basis. Numerous studies have found a close correlation between working memory capacity and performance in cognitive tasks such as language learning and reasoning ability. In a 2004 review, psychologist Randall Engle of the Georgia Institute of Technology in Atlanta and his co-workers identified nearly 40 cognitive tasks significantly correlated with working memory capacity. Some researchers have proposed that intelligence tests actually measure working memory capacity, although this is controversial. And a number of studies conclude that variation in working memory capacity, such as that linked to learning disabilities, could have a genetic component.

    It was evidence for such genetic variation that first brought Coolidge and Wynn together. In 2000, Coolidge published a twin study suggesting a strong genetic correlation between attention deficit hyperactivity disorder and deficits in what researchers call “executive functions”—a range of mental abilities such as forming goals and planning ahead.

    Coolidge, who had long been interested in archaeology, went to see Wynn, whom he then knew only casually. Wynn, known for analyzing the mental steps in hominin tool-making, was attracted by Coolidge's suggestion that executive functions were key to modern human evolution.

    In their first paper together in 2001, the pair focused solely on executive functions. But not long afterward, Wynn recalls, “Fred walked into my office and said, ‘It's working memory.’” Coolidge had realized that there was considerable overlap between “executive functions” and the “central executive” in Baddeley's model. The best way to explore the evolution of modern human cognition, the pair decided, was to adopt the Baddeley model and see where it led them. “[It's] probably the most cited cognitive model of the past 30 years,” Wynn says. “It gave us a theoretical model that had punch.”

    They scoured the archaeological record and began to spin out a series of papers contending that modern humans had greater working memory capacity than earlier hominins. For example, they argued in 2004 in the Journal of Human Evolution that Neandertals fell short mainly in areas such as complex hunting strategies and symbolic expression, which required enhanced working memory. They point out that for 200,000 years, Neandertal stone-tool technology, although skillful, changed little. And even when it did shift, it was “on a scale and at a rate that would appear to rule out conscious experimentation and creativity, the stuff of enhanced working memory,” Wynn and Coolidge wrote.

    The pair also cited the relative lack of evidence for Neandertal symbolic behavior, such as the elaborate burials and artistic expression typical of modern humans, as support for their conclusion. Neandertal cognition, they contended in another 2004 paper, was like “modern human thinking” but with “a single piece missing”: enhanced working memory.

    Not everyone agrees with this evaluation of Neandertals, but Wynn and Coolidge see other examples of uniquely modern human behaviors that they think required enhanced working memory. Chief among them is the iconic Hohlenstein-Stadel figurine. “That is a beautiful archaeological example of their model,” says archaeologist Lyn Wadley of the University of the Witwatersrand, Johannesburg, in South Africa. Wynn and Coolidge also cite research at Niah Cave in Borneo suggesting that about 40,000 years ago, modern humans deliberately set forest fires to nurture tubers and other edible plants and perhaps also trapped pigs. This “managed foraging” is evidence for advanced planning, they say.

    The pair also sees enhanced working memory at work in the bone “tally sticks” found at sites in France and the Democratic Republic of the Congo. These notched objects date to 28,000 years ago or later and may have been “external memory devices” used to record something, according to work by archaeologist Francesco D'Errico of the University of Bordeaux in France. Wynn and Coolidge think that the sticks were used to perform calculations and perhaps to extend working memory by offloading information onto a physical object.

    In choosing these examples, however—none of which date to earlier than 40,000 years ago—Wynn and Coolidge have bucked an increasing trend in archaeology. Many researchers now interpret artifacts such as 75,000-year-old beads and etched ochre at Blombos Cave in South Africa and 90,000-year-old beads at Qafzeh Cave in Israel as evidence that symbolic expression has much deeper evolutionary roots.

    “They may have painted themselves into a corner by setting down a date of 40,000 years for modern cognition,” Wadley says. Miriam Haidle, an archaeologist at the University of Tübingen in Germany, agrees. “Everything must start much earlier than they think,” Haidle says. “Working memory was a neurological complex that slowly developed over at least 2 million years.”

    Wynn and Coolidge acknowledge that their model swims against the current tide of early claims for modern human behavior. But they argue that the Blombos artifacts may have represented only a simple sort of symbolism, such as that used to mark social identities, rather than the fully realized symbolic behavior typical of later modern humans. “We just think the late, jerky explanation requires fewer assumptions and caveats and takes the archaeological record seriously, instead of trying to explain it away,” says Wynn.

    What was that number?

    Chimpanzees are better than humans at some memory tasks.

    CREDITS: COURTESY OF TETSURO MATSUZAWA

    Still, Wynn told Science that he is now “mostly convinced” by evidence, reported by Wadley last year, for the use of ochre adhesives to haft stone tools at the 70,000-year-old South African site of Sibudu, which suggests enhanced working memory. He and Coolidge say that the genetic mutation or mutations they propose may not have been as late as 40,000 years ago. They're open to other scenarios, such as that the cognitive advance was gradually manifested in the archaeological record, or that the genetic variants coding for enhanced working memory did not rise to high frequencies in human populations until more recently.

    From past to future

    While Wynn and Coolidge have focused their attention on relatively recent enhancements in working memory capacity, other researchers trace its evolution much further back in time—back to the split between humans and chimpanzees, about 5 million to 7 million years ago. In a 2008 paper in Evolutionary Psychology, UCLA's Read marshaled several lines of evidence suggesting that chimpanzees have much more limited working memory capacity than modern humans. He argued that the common ancestor of humans and chimps also had limited working memory capacity and limited ability to engage in what linguists call recursion, the embedding of phrases within each other, as in this sentence. Many researchers consider recursion the hallmark of modern human language.

    However, primatologist Tetsuro Matsuzawa of Kyoto University in Japan claimed in 2007 that some young chimps are better than adult humans at a memory task using numbers flashed on a computer screen. “Seeing is believing,” Matsuzawa told Science. “They are better than us in this memory test.”

    Read argues that this test measures a simpler form of passive photographic memory rather than full-fledged working memory. Wynn agrees that the chimp studies are not comparable with the dual-task experiments done with humans. “The tests included no distractions” to engage the central executive and explore the ability to focus attention on the task at hand, Wynn says.

    Meanwhile, the working-memory concept continues to inspire others. Psychologists Thomas Suddendorf of the University of Queensland in Brisbane, Australia, and Michael Corballis of the University of Auckland in New Zealand think working memory is crucial to “mental time travel,” the ability to harness memories of the past to imagine the future. They argue that this capacity was crucial to the evolution of language. Harvard University psychologist Daniel Schacter agrees. On the basis of brain-imaging studies and other research, Schacter and Donna Rose Addis of the University of Auckland have concluded that the same neural networks are implicated in both remembering the past and imagining the future and that both processes probably involve something like Baddeley's proposed episodic buffer. “Working memory is critically important for constructing simulations of future events,” Schacter says.

    Wynn and Coolidge, who routinely cite Schacter and Addis in their own papers, say that a jump in working memory capacity was also key to the construction of modern human symbolism and artistic expression, including the ability to imagine things that have never existed and never will, like the Lion Man. “We may have the exact timing wrong and the exact nature of the genetic events” wrong as well, Coolidge says. “But something happened that was less than gradual in the evolution of the human mind.”

  11. Evolution of Behavior

    Does 'Working Memory' Still Work?

    1. Michael Balter

    The idea that a better working memory made Homo sapiens smarter than its ancestors is attracting attention from psychologists, archaeologists, and neuroscientists alike (see main text). But now some researchers are challenging some of the basic tenets of the model it's based on.

    Sticking to his model.

    Alan Baddeley of the University of York.

    CREDIT: LINDSEY BOWES

    The idea that a better working memory made Homo sapiens smarter than its ancestors is attracting attention from psychologists, archaeologists, and neuroscientists alike (see main text, p. 160). The architects of the hypothesis, Thomas Wynn and Frederick Coolidge of the University of Colorado, Colorado Springs, base their idea on a model of working memory proposed 35 years ago by two British psychologists. That model, devised by Alan Baddeley and Graham Hitch of the University of York in the United Kingdom, “was seminal” in memory research, says psychologist Randall Engle of the Georgia Institute of Technology in Atlanta. But now he and other researchers are challenging some of its basic tenets. “The Baddeley model has pretty stiff competition now” from alternative models, says psychologist Jackie Andrade of the University of Plymouth in the U.K.

    Baddeley proposed that working memory includes separate, temporary storage areas for verbal and visual information, plus a central executive to direct information flow. Today, a key issue is whether our temporary memories are actually stored in buffers separate from long-term memory or if these simply represent “activated” parts of long-term memory.

    The latter model is “more neurologically feasible,” says psychologist Nelson Cowan of the University of Missouri, Columbia, who cites recent brain-imaging studies that he says contradict Baddeley's model. “The neural data don't support a buffer model of working memory,” agrees Mark D'Esposito, a cognitive neuroscientist at the University of California, Berkeley, whose lab has done such experiments. Different parts of the brain are activated depending on what kind of working-memory task is being done, says D'Esposito, who concludes that “working memory” involves many different parts of the brain working together. “It doesn't appear that information is transferred to some other location, like RAM in a computer,” he says.

    The field is split: many Europeans and psychologists tend to favor the Baddeley model, whereas many Americans and neuroscientists tend to favor activation models. “There are pretty much two traditions,” says Andrade.

    Baddeley finds the brain-imaging studies inconclusive. They “result in a veritable plum pudding of different areas activated by apparently similar tasks, whereas the [psychological and clinical] evidence has been broadly coherent and very fruitful over a 35-year span,” he says. Psychologist David Linden of Bangor University in Gwynedd, U.K., also sees no reason to jettison the Baddeley model. He says working memory “is preserved in many patients who have severe amnesia and cannot encode new material into long-term memory. And it can deal with information that has no relevant representation in long-term memory, such as characters of an unknown language or novel sounds.”

    Although Wynn and Coolidge favor Baddeley's model, they say they are not wedded to it. “I don't think our approach stands or falls with the Baddeley model,” says Wynn. Coolidge agrees: “Our puzzle for the future lies more in explaining what enhanced working memory did for humans rather than in strict lab tests of Baddeley's components.”

  12. Archaeology

    Did Modern Humans Get Smart Or Just Get Together?

    1. Elizabeth Culotta

    The first archaeological signs of art and symbolism may mark new heights of social interaction rather than a cognitive leap.

    When did humans start to do the things that make us human? The archaeological record is the obvious place to find out, but it presents a puzzle: About 40,000 years ago in Europe, when modern humans had moved into the continent, there was a burst of creativity expressed in everything from cave paintings to figurines to jewelry and complex tools. Some researchers have argued that this explosion of artistic material culture reflects a leap in human cognition—the point when we finally got smart enough to think symbolically and craft complex tools, perhaps because of an advance in working memory (see p. 160).

    Primitive style.

    Early artists made shell beads in Israel 90,000 years ago (left) and etched eggshells in Africa 60,000 years ago (right).

    CREDIT (LEFT TO RIGHT): M. VANHAEREN ET AL., SCIENCE 312, 5781 (23 JUNE 2006) ; COURTESY OF PIERRE-JEAN TEXIER, DIEPKLOOF PROJECT

    But over the past decade, signs of such “modern” behavior have been found much earlier. Marine shell beads turn up in Israel about 90,000 years ago, then disappear. Chunks of red ochre with geometric scratchings and tiny shell beads pop up about 70,000 years ago or more in Africa and then vanish; etched ostrich eggshells then appear about 60,000 years ago. Complex behavior seems to flicker in and out of the record (Science, 6 February 2009, p. 709).

    The pattern is “pretty hard to reconcile with a gene” that conferred a cognitive advance, says anthropologist Robert Boyd of the University of California, Los Angeles (UCLA). So several researchers, including Boyd and Stephen Shennan of University College London (UCL), have suggested another kind of explanation: demography. Perhaps our complex culture does not stem simply from individual cognition but from the shared knowledge we construct in groups, Shennan proposed in a talk at a recent high-level meeting on what makes humans unique.* In this view, complex culture requires a “cultural ratchet”—the cumulative effect of many people's contributions over time, each building on the other. (A recent computer tournament explored the power of such social learning compared with individual innovation; see p. 165.) If so, factors such as population size and structure may have helped to kindle, extinguish, and rekindle modern behavior.

    This demography-based theory is an intriguing idea, but researchers have struggled to test it. At the meeting, one presenter backed the idea with simulations, and one with data.

    Shennan presented modeling of demographic effects on culture, done with UCL colleagues Adam Powell and Mark Thomas. They simulated how culture would evolve in a population made up of small bands of humans, assuming that people could learn from others how to make a kayak or craft jewelry but that the learning process was not perfect. Larger groups had a higher probability of creating innovations that went beyond the previous best, so individuals in big groups, or who often traveled among groups, had a better chance of learning from an improved version.

    In the simulations, bigger populations with more migration showed more cultural accumulation (Science, 5 June 2009, p. 1298). Populations that became smaller and more isolated actually lost culture. “We found that you could get stable, lasting differences between regions,” Shennan said in his talk. He and colleagues compared their models with the archaeological record, using genetic data from living people to roughly estimate ancient population sizes. They found that the population densities during Europe's cultural flowering 40,000 or so years ago were reached in Africa about 100,000 years ago—not long before cultural complexity arose there.
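
    A toy version of this kind of model is easy to write down. The sketch below (in Python) assumes a simple rule, “copy the most skilled person you can see, imperfectly,” with an arbitrary skill scale, copying noise, and migration scheme; it illustrates the logic described above rather than reproducing the published simulations.

    ```python
    # Minimal sketch of a demographic model of cultural accumulation, in the
    # spirit of the simulations described above. The skill scale, the copying
    # noise, and the migration rule are illustrative assumptions, not the
    # published model.
    import random

    def simulate(n_bands, band_size, migration_rate, generations=200):
        """Each person carries a skill level; learners copy the most skilled
        person they can see (their own band plus a few migrants), and copying
        is noisy and usually slightly lossy. Returns the final mean skill."""
        bands = [[1.0] * band_size for _ in range(n_bands)]
        migrants = int(migration_rate * band_size)   # migrants seen per other band
        for _ in range(generations):
            new_bands = []
            for i, band in enumerate(bands):
                pool = list(band)                    # pool of potential teachers
                if migrants > 0:
                    for other in bands[:i] + bands[i + 1:]:
                        pool += random.sample(other, k=migrants)
                best = max(pool)
                # Copies are usually a bit worse than the model, occasionally
                # better, so bigger pools raise the odds of an improvement;
                # 0.1 is an arbitrary floor on skill.
                new_bands.append([max(0.1, best + random.gauss(-0.1, 0.1))
                                  for _ in range(band_size)])
            bands = new_bands
        return sum(sum(b) for b in bands) / (n_bands * band_size)

    # Large, well-connected populations accumulate skill; small, isolated
    # ones tend to lose it, mirroring the pattern described above.
    print(simulate(n_bands=20, band_size=50, migration_rate=0.1))
    print(simulate(n_bands=3, band_size=3, migration_rate=0.0))
    ```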

    Next came Boyd, who set out to test the model in the real world. He analyzed data from previously studied traditional societies on 10 islands in Oceania. Working with UCLA graduate student Michelle Kline, he compared the number and complexity of tools used to forage for marine resources on each island.

    Boyd and Kline found a clear picture: Islands with bigger populations had more tools, whereas smaller populations had fewer tools. “The results are very strong,” says Boyd. “Nothing else seems to matter at all.” Shennan was enthusiastic about the data: “The predictions of the model were borne out in a modern situation where you can collect information on all the relevant variables,” he says. “That doesn't happen all the time.”

    Translated into the past, this theory suggests that any cognitive leap happened perhaps 90,000 years ago or earlier and that bursts of complex culture may reflect bigger populations or more contact among groups. That may be true in Europe of 40,000 years ago, says archaeologist Francesco D'Errico of the University of Bordeaux in France, who thinks modern humans and Neandertals may have had similar cognitive abilities.

    Conversely, if climate deteriorated and patches of habitat were farther apart, more isolated groups might lose culture. That happened on the island of Tasmania, where people lost the ability to craft bone tools and boats when they were isolated by rising sea levels about 10,000 years ago, Boyd notes.

    At this point, says Shennan, theory and data “add up to a very strong alternative to the cognition model. Now we need more model development, testing, and data collection.” But workshop co-organizer Curtis Marean of Arizona State University, Tempe, points out that cognition also plays a role, for example, influencing the rate of innovation in Shennan's models. “It's not just demography or cognition. We need to pull all of these together.”

    • * Human Uniqueness and Behavioral Modernity Workshop, Arizona State University, 20–22 February.

  13. Cultural Evolution

    Conquering by Copying

    1. Elizabeth Pennisi

    A computer tournament has revealed the benefit of copying someone else's actions over solving a problem solo, a finding that has implications for cultural evolution.

    Lost in a jungle?

    Survival may depend on copying others or trial-and-error learning.

    CREDIT: M. TWOMBLY/SCIENCE

    Suppose you find yourself in an unfamiliar environment where you don't know how to get food, avoid predators, or travel from A to B. Would you invest time working out what to do on your own, or observe other individuals and copy them? If you copy, who would you copy? The first individual you see? The most common behaviour? Do you always copy, or do so selectively?

    What would you do?

    With those provocative words, posted on a Web site and included in a flyer sent to colleagues and academic departments around the world in late 2007, Kevin Laland threw down the gauntlet on an international competition that he hoped would accelerate his academic field. Laland, an evolutionary biologist at the University of St. Andrews in the United Kingdom, is part of a consortium that has a €2 million grant to explore how human culture evolves. A crucial part of this issue is how people develop new behaviors. To probe this question, Laland and the rest of the consortium wanted to examine the relative importance of social learning, the acquisition of behaviors from watching other people, versus individual innovation.

    The ability to learn from others is central to the evolution and persistence of culture, and it is viewed as part of the reason humans have come to dominate the planet. But, notes Laland, “it has proven quite tricky in a formal mathematical sense to link social learning to our success as a species.”

    Equally important, it hasn't been clear how people best learn socially. Sometimes individuals copy the behaviors of others seemingly at random; other times they appear to decide whom to copy based on an individual's prestige. “We find evidence of different rules, but it was frustrating because we didn't know which of these rules was the best,” Laland says.

    To address that question, the consortium decided to host a tournament, with a €10,000 prize, in which all comers would pit computer programs incorporating social-learning strategies against each other. “It was a gamble,” Laland recalls. “My biggest fear was that nobody would participate.”

    The gamble paid off. More than 100 teams, including a few from high schools, competed—“far more than we even hoped for,” says Laland. The analysis of the competition, reported on page 208, “addresses in a clearly new way some of the questions that have been nagging at the field of evolutionary social learning for more than 2 decades,” says Luc-Alain Giraldeau, a behavioral ecologist at the University of Quebec, Montreal, in Canada.

    A simple approach, from a surprising entry, won hands down. Most researchers had thought that a mix of learning on one's own and social learning would be the best strategy. Yet a pair of graduate students won the contest with a strategy heavily tilted toward imitation rather than innovation (see sidebar, p. 166). “The main take-home message is that it pays to imitate success, except when there is evidence that what has been successful recently is no longer working well,” says Robert Axelrod of the University of Michigan, Ann Arbor.

    The tournament's results, say some researchers, have implications for human cultural evolution. “It implies that our success as a species rests heavily on the right social and networking skills of knowing who, what, and when to copy,” notes Samuel Bowles, an economist at the Santa Fe Institute.

    Getting the game together

    As strange as a face-off might seem for science, there's a powerful precedent for a tournament giving the study of human behavior a much-needed jolt. In the 1980s, Axelrod organized a competition to get a handle on why cooperation evolves. He challenged contestants to engage repeatedly in the prisoner's dilemma—in which two prisoners must choose between squealing on one another or cooperating in their refusal to talk to the police. “The strength of a tournament is that it provides the means to achieve insights that can be totally unexpected by both the contestants and even the designers of the tournament,” says Axelrod.

    In his contest, involving 28 entrants, a simple tit-for-tat strategy—cooperate if your fellow prisoner cooperates, otherwise don't—proved the most effective. The tournament “was enormously influential because before, not many people were studying the evolution of cooperation,” says Laland. “It kick-started the field.”
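
    The rule itself fits in a few lines of code. The sketch below (in Python) uses the payoff values standard in such tournaments, with 3 points each for mutual cooperation, 1 each for mutual defection, and 5 and 0 when one player defects on a cooperator; the always-defect opponent is just an illustration.

    ```python
    # Sketch of tit for tat in an iterated prisoner's dilemma, as described
    # above. Payoff values are the standard ones used in such tournaments.
    def tit_for_tat(my_history, their_history):
        """Cooperate on the first move, then mirror the opponent's last move."""
        return "C" if not their_history else their_history[-1]

    def always_defect(my_history, their_history):
        return "D"

    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def play(strategy_a, strategy_b, rounds=200):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a = strategy_a(hist_a, hist_b)
            b = strategy_b(hist_b, hist_a)
            pa, pb = PAYOFF[(a, b)]
            score_a, score_b = score_a + pa, score_b + pb
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    # Tit for tat gives up a little against a pure defector but is never
    # exploited twice in a row, part of why it fared so well on average.
    print(play(tit_for_tat, always_defect))
    ```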

    Laland felt the study of social learning would benefit from a similar boost. He tapped St. Andrews postdoctoral fellow Luke Rendell to design the tournament and recruited help from experts in social learning, cultural evolution, and game theory. Rendell's first task was to figure out a problem to be solved by contestants. The cooperation field had already accepted the prisoner's dilemma as relevant, but there was no such theoretical tool for assessing the adaptive value of social learning in a complex environment.

    After much discussion, Laland's team decided to center the tournament on a learning problem called the restless multiarmed bandit. The name is inspired by the “one-armed bandit”—the slot machine—in which pulling a lever sets off a game of chance. In the tournament's case, the multiple “arms” represent different abstract “behaviors”—100 possible ones in all, each associated with a different payoff. The challenge is to maximize one's payoff over the long run. “I often envisage it as being dropped onto a jungle island, where there are a number of possible ways of getting food,” says Rendell. One might learn to gather fruit, hunt, fish, grub for bugs, dig out tubers, et cetera, with varying degrees of success.

    In a new environment like a jungle, there are two ways to pick up a new skill: figure out what to do by trial and error or copy what others do after observing or interacting with them. Both learning approaches take time and may or may not lead to a good payoff. But eventually, one builds up a repertoire of useful behaviors.

    Rendell created a computer program for running the multiarmed bandit problem under varying conditions. In each scenario, 100 individuals have three options every round: “observe,” to learn a behavior by watching another individual; “innovate,” to develop one of the behaviors on one's own; or “exploit,” which is the equivalent of pulling one of the bandit's arms by engaging in one of the behaviors that have been acquired. Each round, only exploiters get points, so there is always a tradeoff between using an existing behavior and getting a payoff right away or learning a new, and potentially better, behavior that could earn points later.
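
    In code, one round of that choice might look like the minimal sketch below (in Python). The payoff distribution, the copying-error rate, and the purely random choice among the three moves are illustrative assumptions; in the tournament, the entrants' submitted strategies supplied the decision rule.

    ```python
    # Minimal sketch of one round of the restless multiarmed bandit described
    # above: 100 behaviors with hidden payoffs, and agents who can observe,
    # innovate, or exploit. Only exploiting earns points. All numeric choices
    # here are illustrative, not the tournament's actual code.
    import random

    N_BEHAVIORS = 100
    payoffs = [random.expovariate(1 / 10) for _ in range(N_BEHAVIORS)]  # assumed scale

    class Agent:
        def __init__(self):
            self.repertoire = {}   # behavior index -> payoff the agent believes it gives
            self.score = 0.0

        def step(self, population, copy_error=0.05):
            # A real entry would decide here; this sketch just picks at random.
            options = ["OBSERVE", "INNOVATE"] + (["EXPLOIT"] if self.repertoire else [])
            move = random.choice(options)
            if move == "INNOVATE":
                # Trial-and-error learning: discover one behavior at random.
                b = random.randrange(N_BEHAVIORS)
                self.repertoire[b] = payoffs[b]
            elif move == "OBSERVE":
                # Social learning: copy a behavior someone else knows, imperfectly.
                models = [a for a in population if a is not self and a.repertoire]
                if models:
                    b = random.choice(list(random.choice(models).repertoire))
                    if random.random() < copy_error:
                        b = random.randrange(N_BEHAVIORS)   # copying error
                    self.repertoire[b] = payoffs[b]
            else:
                # Exploit: use the behavior believed to pay best; only this earns points.
                b = max(self.repertoire, key=self.repertoire.get)
                self.score += payoffs[b]

    population = [Agent() for _ in range(100)]
    for agent in population:
        agent.step(population)
    ```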

    The simulated individuals in Rendell's artificial world can follow different strategies programmed into them, such as observing more and innovating less, and each round some randomly “die.” They are replaced, at times by copies of individuals with more effective learning strategies, as determined by earned points; the process approximates natural selection's survival of the fittest.
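
    The replacement step can be sketched just as simply. The death rate and the payoff-proportional reproduction rule below are assumed stand-ins for the tournament's actual selection scheme.

    ```python
    # Sketch of the death-and-replacement step described above: a random subset
    # of agents dies each round, and replacements inherit the learning strategy
    # of survivors chosen in proportion to accumulated payoff, so strategies
    # that earn more points spread through the population.
    import random

    def replace_dead(population, death_rate=0.02):
        """population: list of dicts like {"strategy": "copy-heavy", "score": 12.3}."""
        survivors = [a for a in population if random.random() > death_rate]
        n_dead = len(population) - len(survivors)
        weights = [max(a["score"], 1e-9) for a in survivors]
        for parent in random.choices(survivors, weights=weights, k=n_dead):
            survivors.append({"strategy": parent["strategy"], "score": 0.0})
        return survivors

    # A strategy that has earned more tends to gain population share over time.
    pop = ([{"strategy": "copy-heavy", "score": 15.0} for _ in range(50)] +
           [{"strategy": "innovate-heavy", "score": 5.0} for _ in range(50)])
    pop = replace_dead(pop)
    ```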

    Rendell also made it so that he could vary the conditions between rounds. Learning from others might become more or less error-prone, for example. The payoff points associated with each behavior could also increase or decrease over the course of a simulation, adding another element to the changing environment. This twist introduces the possibility that the simulated individuals are acting on outdated information, which changes the dynamic between social learning and innovating.

    Ready. Set. Go.

    Designing, testing, and refining the tournament's rules took 18 months, and during that time no one in the consortium knew if it would attract entrants. To their relief, the interest was overwhelming; 104 entries from 16 countries and more than a dozen disciplines accepted the challenge, each providing a computer program encoding a strategy to guide the behavior of individuals in Rendell's tournament.

    The first stage of the tournament, which took place in 2008, pitted pairs of strategies head to head in more than 5000 contests. Each contest started with a simulation in which one strategy guided the actions of all 100 individuals for 100 rounds. Because of elements of chance built into the simulation—it might take one round or many to acquire a particular behavior by observing someone, for example—each individual accumulated a different point tally over that starting period. Then individuals whose actions were controlled by an opponent's program started to “invade,” dropping into the “world” to replace some of the ones that “died.” The two strategies would then battle for 10,000 rounds. Over that time, individuals accumulating more points reproduced more, representing an ever-greater proportion of the population. The strategy with the most individuals in the final 2500 rounds was declared the winner of that simulation. The overall winner in each head-to-head contest was determined by tallying 20 such simulations.
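
    The scoring of each head-to-head contest follows directly from that description. In the sketch below, the population dynamics themselves are passed in as a callable (here a dummy), since they are the part sketched earlier; the function only encodes the assumed burn-in, invasion, and winner-counting procedure.

    ```python
    # Sketch of the head-to-head scoring described above: the incumbent runs
    # alone for a 100-round burn-in, the challenger invades, and whichever
    # strategy has more individuals over the final 2500 of 10,000 rounds wins
    # that simulation; the contest is decided over 20 simulations.
    import random

    def head_to_head(incumbent, challenger, run_simulation, n_sims=20):
        wins = {incumbent: 0, challenger: 0}
        for _ in range(n_sims):
            # run_simulation is expected to return each strategy's head count
            # over the tallied rounds; the real dynamics are sketched earlier.
            counts = run_simulation(incumbent, challenger,
                                    burn_in=100, rounds=10_000, tally_last=2_500)
            wins[max(counts, key=counts.get)] += 1
        return max(wins, key=wins.get)

    # Usage with a dummy simulator standing in for the real dynamics:
    dummy = lambda a, b, **kw: {a: random.randint(0, 100), b: random.randint(0, 100)}
    print(head_to_head("copy-heavy", "innovate-heavy", dummy))
    ```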

    Like father.

    Evolutionary biologist Kevin Laland being copied by his son, a tendency that has helped make humans so successful.

    CREDIT: GILLIAN BROWN

    This first round, requiring more than 100,000 simulations, winnowed the field to 24 top learning strategies, which then further competed in a round-robin runoff to determine the top 10.

    In early 2009, the second stage, the “melee,” pitted all 10 finalists together in simulations. They played against each other in 15,000 scenarios, each lasting 10,000 rounds. Running the whole tournament was possible only thanks to more than 65,000 hours of computer time provided by the U.K. National Grid Service.

    The winning team, two Canadian graduate students from outside the small world of social-learning research, came as a shock to many. “To be honest, I was quite surprised and annoyed. We really thought we had winning entries because we had run several tournaments using our own strategies before submitting the final versions,” says Giraldeau.

    Their lack of formal training in social learning didn't prevent the winners from creating a strategy that depended more heavily on that process than any other entry did. “Very few other strategies realized that it never paid to innovate and that observation was the only choice,” says Daniel Cownden, one of the two students from Queen's University in Kingston, Canada.

    The pair's strategy was “exceptionally clever,” says Robert Boyd, a biological anthropologist at the University of California, Los Angeles, who is a consortium member and an author of the tournament report. “In [an] environment where the world is changing, the best strategy is a lot of imitation.”

    Before the tournament, many researchers had considered imitation limited in value because one might unknowingly waste time copying an outdated or inappropriate behavior. The contest demonstrated that copying was a good approach because those being imitated had likely already decided on—and were enacting—the best strategy they could. “It's kind of parasitizing good ideas that other strategies are generating,” says Laland. New, possibly better, behaviors still arise because copying is typically imperfect and errors in imitating a behavior at times yielded improvements.

    In general, the learning strategies that fared best in the tournament shared several features. They emphasized social learning but spent as much time as possible enacting known behaviors and earning payoff points. The results “say clearly that if you spend too much time learning, life will pass you by,” says Rendell. Given the conditions set up in the tournament, devoting just 10% of one's time to learning proved to be optimal.

    The competition also showed that it is important to assess changes in the environment, such as shifting payoffs for behaviors, and adjust accordingly, even in the middle of a run. Players benefited, too, if they could keep track of when something was learned, because in a changing environment older behaviors were more likely to become outdated.

    Bowles suggests that the tournament's results will reorient thinking about what drives human progress. “Most people, when they think about where new ideas come from, think about some eccentric tinkering in a garage or some shy geek playing around with a computer. We think that's how progress gets made,” he explains. “What this group of authors [is] suggesting is that this does go on, but what really is decisive is spreading these ideas.”

    Next step

    Richard McElreath, an evolutionary ecologist at the University of California, Davis, agrees that the social-learning tournament was valuable, but he has some reservations. “Simulations and tournaments can be solidly criticized for being both difficult to interpret and potentially misleading,” he says.

    Consider the strategy ranked 95th in the tournament. Graduate students Shane Gero of Dalhousie University in Halifax and Marianne Marcoux of McGill University in Montreal, both in Canada, based their entry, “higherlearning,” on the idea that the education of graduate students often depends on a lot of innovation and individual learning rather than social learning. In the tournament's simulations “being a graduate student was very maladaptive,” Gero notes. In real-world academia, however, such behavior arguably does quite well.

    Alan Rogers, an anthropologist at the University of Utah in Salt Lake City, suggests that the options in the tournament were also not as clear-cut as they appeared. He proposes that the very act of doing something, or exploiting, involves some subtle elements of individual learning that are not seen as learning per se.

    Two more tournaments testing learning strategies are in the works. One will look at what happens when information flow is restricted: for example, when players can observe only a subset of the behaviors. In the other, the players will know something about the age and success rate of players whose behavior they can copy. Things like the perception of experience and prestige “may be very important in human culture,” says Laland.

    No matter what happens in any subsequent contests, this one's place in history seems assured. “This tournament will invigorate the field by attracting new scholars,” says McElreath. “I expect it to become a classic.”

  14. Cultural Evolution

    A Winning Combination

    1. Elizabeth Pennisi

    The pair of graduate students who designed the winning computer program in a recent social-learning tournament (see main text) obsessively spent hundreds of hours perfecting their social-learning strategy.

    Scheming.

    Daniel Cownden (foreground) and Timothy Lillicrap worked out a winning social-learning strategy.

    CREDIT: BRONWYN MCLEAN

    The chance to spend 4 months designing a computer program to compete in a social-learning tournament (see main text, p. 165) immediately appealed to Timothy Lillicrap, a graduate student in computational neuroscience at Queen's University in Kingston, Canada. “So much time in science is spent getting half-answers to complicated questions, which often take decades to appreciate fully. The tournament was an opportunity to work on something with a hard and fast goal and a hard and fast answer,” he recalls.

    Lillicrap teamed up with Daniel Cownden, a fellow graduate student in mathematics, and together they obsessively spent hundreds of hours perfecting their social-learning strategy. Although they lacked experience in social-learning research, Lillicrap knew how to find the most efficient way to accomplish a task and how to estimate outcomes from the available data, whereas Cownden understood evolutionary game theory. This mix of knowledge proved a winning combination, as their program easily beat about 100 others. “Certainly a reason for our success was the balance between Tim's concrete, computer science, ‘code it and see’ approach and my abstract, mathematical, ‘hem and haw’ approach,” says Cownden.

    Early on, the pair set up their own in-house competition according to the tournament's rules. The duo met weekly to face off with the latest iterations of their computer programs. The loser had to make the winner dessert but got to see the winner's program and crib from it if desired. They quickly homed in on potentially good approaches. “We were one of the few entries which noticed that it is virtually always better to observe others rather than gather information through innovation,” says Lillicrap.

    The two students created a player that had access to all the hidden variables, allowing it to “cheat” and make nearly perfect decisions. By recording how this superplayer worked the game, they obtained information needed to train a neural network that would underpin their final strategy. The cheater figured out, for example, how to evaluate whether to learn something that might be useful later on or just use a behavior already in its repertoire.

    Cownden and Lillicrap called the strategy they entered “discount-machine” because it discounted less certain future rewards for more guaranteed immediate gains. That meant weighing how fast the environment is changing—if it changes fast, past social learning quickly becomes outdated—and how good, and how reliable, the payoffs for an action are. Based on that information, discount-machine decides whether to do what it already knows how to do, getting an immediate reward, or to observe what someone else is doing and learn from them, in the hope of a bigger reward later.
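
    In spirit, that trade-off can be written as a comparison of discounted payoffs. The formula, parameter names, and horizon below are illustrative guesses at the kind of calculation described, not the actual discount-machine code.

    ```python
    # Rough sketch of the trade-off described above: exploit a known behavior
    # now, or spend a round observing in the hope of a better payoff later,
    # with future payoffs discounted more heavily the faster the environment
    # changes. All names and numbers are illustrative assumptions.
    def choose_move(best_known_payoff, expected_observed_payoff,
                    change_rate, horizon=50):
        """Return "EXPLOIT" or "OBSERVE".

        change_rate: estimated per-round chance that a payoff changes; a fast-
        changing world shortens the useful life of anything learned now.
        """
        discount = 1.0 - change_rate
        # Exploit now, and keep exploiting for the remaining horizon.
        exploit_value = best_known_payoff * sum(discount ** t for t in range(horizon))
        # Spend this round observing (no points), then exploit the hopefully
        # better behavior for one round fewer.
        observe_value = expected_observed_payoff * sum(discount ** t for t in range(1, horizon))
        return "EXPLOIT" if exploit_value >= observe_value else "OBSERVE"

    # In a fast-changing world the immediate reward wins; in a stable one,
    # investing a round in observation can pay off.
    print(choose_move(best_known_payoff=10, expected_observed_payoff=12, change_rate=0.4))
    print(choose_move(best_known_payoff=10, expected_observed_payoff=12, change_rate=0.01))
    ```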

    Cownden and Lillicrap were definitely dark-horse candidates going into the contest. “We were quite surprised nobody from a social-learning lab had gone on to win,” says Luke Rendell, an organizer of the tournament from the University of St. Andrews in the United Kingdom. “You have to take your hat off to them.”

    For Cownden, the tournament has made a big difference in his graduate studies. He wants to develop a better method for comparing the value of information with the value of actions in making decisions and plans to combine decision theory with evolutionary game theory to do so. In this way, he hopes to come up with a better way to approach problems such as that offered by the tournament. “My research goals were vague and unformed when I encountered the tournament. Now they are focused and clear.”
