News this Week

Science, 01 Aug 1997: Vol. 277, Issue 5326, p. 635

    Archaeologists Rediscover Cannibals

    Ann Gibbons


    When Arizona State University bioarchaeologist Christy G. Turner II first looked at the jumbled heap of bones from 30 humans in Arizona in 1967, he was convinced that he was looking at the remains of a feast. The bones of these ancient American Indians had cut marks and burns, just like animal bones that had been roasted and stripped of their flesh. “It just struck me that here was a pile of food refuse,” says Turner, who proposed in American Antiquity in 1970 that these people from Polacca Wash, Arizona, had been the victims of cannibalism.

    Unkind cuts. A Neandertal bone from Vindija Cave, Croatia.


    But his paper was met with “total disbelief,” says Turner. “In the 1960s, the new paradigm about Indians was that they were all peaceful and happy. So, to find something like this was the antithesis of the new way we were supposed to be thinking about Indians”—particularly the Anasazi, thought to be the ancestors of living Pueblo Indians. Not only did Turner's proposal fly in the face of conventional wisdom about the Anasazi culture, but it was also at odds with an emerging consensus that earlier claims of cannibalism in the fossil record rested on shaky evidence. Where earlier generations of archaeologists had seen the remains of cannibalistic feasts, current researchers saw bones scarred by ancient burial practices, war, weathering, or scavenging animals.

    To Turner, however, the bones from Polacca Wash told a more disturbing tale, and so he set about studying every prehistoric skeleton he could find in the Southwest and Mexico to see whether Polacca Wash was an isolated event. Now, 30 years and 15,000 skeletons later, Turner is putting the final touches on a 1500-page book, to be published next year by the University of Utah Press, in which he says, “Cannibalism was practiced intensively for almost four centuries” in the Four Corners region. The evidence is so strong, he says, that “I would bet a year of my salary on it.”

    He isn't the only one now betting on cannibalism in prehistory. In the past decade, Turner and other bioarchaeologists have put together a set of clear-cut criteria for distinguishing the marks of cannibalism from other kinds of scars. “The analytical rigor has increased across the board,” says paleoanthropologist Tim D. White of the University of California, Berkeley. Armed with the new criteria, archaeologists are finding what they say are strong signs of cannibalism throughout the fossil record. This summer, archaeologists are excavating several sites in Europe where the practice may have occurred among our ancestors, perhaps as early as 800,000 years ago. More recently, our brawny cousins, the Neandertals, may have eaten each other. And this behavior wasn't limited to the distant past—strong new evidence suggests that in addition to the Anasazi, the Aztecs of Mexico and the people of Fiji also ate their own kind in the past 2500 years.

    These claims imply a disturbing new view of human history, say Turner and others. Although cannibalism is still relatively rare in the fossil record, it is frequent enough to imply that extreme hunger was not the only driving force. Instead of being an aberration, practiced only by a few prehistoric Donner Parties, killing people for food may have been standard human behavior—a means of social control, Turner suspects, or a mob response to stress, or a form of infanticide to thin the ranks of neighboring populations.

    Not surprisingly, some find these claims hard to stomach: “These people haven't explored all the alternatives,” says archaeologist Paul Bahn, author of the Cambridge Encyclopedia entry on cannibalism. “There's no question, for example, that all kinds of weird stuff is done to human remains in mortuary practice”—and in warfare. But even the most prominent skeptic of earlier claims of cannibalism, cultural anthropologist William Arens of the State University of New York, Stony Brook, now admits the case is stronger: “I think the procedures are sounder, and there is more evidence for cannibalism than before.”

    White learned how weak most earlier scholarship on cannibalism was in 1981, when he first came across what he thought might be a relic of the practice—a massive skull of an early human ancestor from a site called Bodo in Ethiopia. When he got his first look at this 600,000-year-old skull on a museum table, White noticed that it had a series of fine, deep cut marks on its cheekbone and inside its eye socket, as if it had been defleshed. To confirm his suspicions, White wanted to compare the marks with a “type collection” for cannibalism—a carefully studied assemblage of bones showing how the signature of cannibalism differs from damage by animal gnawing, trampling, or excavation.

    “We were naïve at the time,” says White, who was working with archaeologist Nicholas Toth of Indiana University in Bloomington. They learned that although the anthropological literature was full of fantastic tales of cannibalistic feasts among early humans at Zhoukoudian in China, Krapina cave in Croatia, and elsewhere, the evidence was weak—or lost.

    Indeed, the weakness of the evidence had already opened the way to a backlash, which was led by Arens. He had deconstructed the fossil and historical record for cannibalism in a book called The Man-Eating Myth: Anthropology and Anthropophagy (Oxford, 1979). Except for extremely rare cases of starvation or insanity, Arens said, none of the accounts of cannibalism stood up to scrutiny—not even claims that it took place among living tribes in Papua New Guinea (including the Fore, where cannibalism is thought to explain the spread of the degenerative brain disease kuru). There were no reliable eyewitnesses for claims of cannibalism, and the archaeological evidence was circumstantial. “I didn't deny the existence of cannibalism,” he now says, “but I found that there was no good evidence for it. It was bad science.”

    Physical anthropologists contributed to the backlash when they raised doubts about what little archaeological evidence there was (Science, 20 June 1986, p. 1497). Mary Russell, then at Case Western Reserve University in Cleveland, argued, for example, that cut marks on the bones of 20 Neandertals at Krapina Cave could have been left by Neandertal morticians who were cleaning the bones for secondary burial, and that the bones could have been smashed when the cave roof collapsed. In his 1992 review in the Cambridge Encyclopedia, Bahn concluded that cannibalism's “very existence in prehistory is hard to swallow.”

    Rising from the ashes

    But even as some anthropologists gave the ax to Krapina and other notorious cases, a new, more rigorous case for cannibalism in prehistory was emerging, starting in the American Southwest. Turner and his late wife, Jacqueline Turner, had been systematically studying tray after tray of prehistoric bones in museums and private collections in the United States and Mexico. They had identified a pattern of bone processing in several hundred specimens that showed little respect for the dead. “There's no known mortuary practice in the Southwest where the body is dismembered, the head is roasted and dumped into a pit unceremoniously, and other pieces get left all over the floor,” says Turner, describing part of the pattern.

    White, meanwhile, was identifying other telltale signs. To fill the gap he discovered when he looked for specimens to compare with the Bodo skull, he decided to study in depth one of the bone assemblages the Turners and others had cited. He chose Mancos, a small Anasazi pueblo on the Colorado Plateau from A.D. 1150, where archaeologists had recovered the scattered and broken remains of at least 29 individuals. The project evolved into a landmark book, Prehistoric Cannibalism at Mancos (Princeton, 1992). While White still doesn't know why the Bodo skull was defleshed—“it's a black box,” he says—he extended the blueprint for identifying cannibalism.

    In his book, White describes how he painstakingly sifted through 2106 bone fragments, often using an electron microscope to identify cut marks, burn traces, percussion and anvil damage, disarticulations, and breakages. He reviewed how to distinguish marks left by butchering from those left by animal gnawing, trampling, or other wear and tear. He also proposed a new category of bone damage, which he called “pot polish”—shiny abrasions on bone tips that come from being stirred in pots (an idea he tested by stirring deer bones in a replica of an Anasazi pot). And he outlined how to compare the remains of suspected victims with those of ordinary game animals at other sites to see if they were processed the same way.

    When he applied these criteria to the Mancos remains, he concluded that they were the leavings of a feast in which 17 adults and 12 children had their heads cut off, roasted, and broken open on rock anvils. Their long bones were broken—he believes for marrow—and their vertebral bodies were missing, perhaps crushed and boiled for oil. Finally, their bones were dumped, like animal bones.

    In their forthcoming book, the Turners describe a remarkably similar pattern of bone processing in 300 individuals from 40 different bone assemblages in the Four Corners area of the Southwest, dating from A.D. 900 to A.D. 1700. The strongest case, Turner says, comes from bones unearthed at the Peñasco Blanco great house at Chaco Canyon in New Mexico, the preeminent center of Anasazi culture and, he argues, the home of cannibals who terrorized victims within 100 miles of the canyon, the region where most of the traumatized bones have been excavated. “Whatever drove the Anasazi to eat people, it happened at Chaco,” says Turner.

    The case for cannibalism among the Anasazi that Turner and White have put together hasn't swayed all the critics. “These folks have a nice package, but I don't think it proves cannibalism,” says Museum of New Mexico archaeologist Peter Bullock. “It's still just a theory.”

    But even critics like Bullock acknowledge that Turner and White's studies, along with work by Paola Villa of the University of Colorado, Boulder, and her colleagues at another recent site, Fontbrégoua Cave in southeastern France (Science, 25 July 1986, p. 431), have raised the standards for how to investigate a case of cannibalism. In fact, White's book has become the unofficial guidebook for the field, says physical anthropologist Carmen Pijoan of the Museum of Anthropology in Mexico City, who has done a systematic review of sites in Mexico where human bones were defleshed. In a forthcoming book chapter, she singles out three sites where she applied diagnostic criteria outlined by Turner, White, and Villa to bones from Aztec and other early cultures and concludes that all “three sites, spread over 2000 years of Mexican prehistory, show a pattern of violence, cannibalism, and sacrifice through time.”

    White's book “is my bible,” agrees paleontologist Yolanda Fernandez-Jalvo of the Museum of Natural History in Madrid, who is analyzing bones that may be the oldest example of cannibalism in the fossil record—the remains of at least six individuals who died 800,000 years ago in an ancient cave at Atapuerca in northern Spain.

    Age-old practices

    The Spanish fossils have caused considerable excitement because they may represent a new species of human ancestor (Science, 30 May, pp. 1331 and 1392). But they also show a pattern familiar from the more recent sites: The bones are highly fragmented and are scored with cut marks, which Fernandez-Jalvo thinks were made when the bodies were decapitated and the bones defleshed. A large femur was also smashed open, perhaps for marrow, says Fernandez-Jalvo, and the whole assemblage had been dumped, like garbage. The treatment was no different from that accorded animal bones at the site. The pattern, says Peter Andrews, a paleoanthropologist at The Natural History Museum, London, is “pretty strong evidence for cannibalism, as opposed to ritual defleshing.” He and others note, however, that the small number of individuals at the site and the absence of other sites of similar antiquity to which the bones could be compared leave room for doubt.

    A stronger case is emerging at Neandertal sites in Europe, 45,000 to more than 130,000 years old. The new criteria for recognizing cannibalism have not completely vindicated the earlier claims about Krapina Cave, partly because few animal bones are left from the 1899 excavation of the site to compare with the Neandertal remains. But nearby Vindija Cave, excavated in the 1970s, did yield both animal and human remains. When White and Toth examined the bones recently, they found that both sets showed cut marks, breakage, and disarticulation, and had been dumped on the cave floor. It's the same pattern seen at Krapina, and remarkably similar to that at Mancos, says White, who will publish his conclusions in a forthcoming book with Toth. Marseilles prehistorian Alban DeFleur is finding that Neandertals may also have feasted on their kind in the Moula-Guercy Cave in the Ardèche region of France, where animal and Neandertal bones show similar processing. Taken together, says White, “the evidence from Krapina, Vindija, and Moula is strong.”

    Not everyone is convinced, however. “White does terrific analysis, but he hasn't proved this is cannibalism,” says Bahn. “Frankly, I don't see how he can unless you find a piece of human gut [with human bone or tissue in it].” No matter how close the resemblance to butchered animals, he says, the cut marks and other bone processing could still be the result of mortuary practices. Bullock adds that warfare, not cannibalism, could explain the damage to the bones.

    White, however, says such criticism resembles President Clinton's famous claim about marijuana: “Some [although not all] of the Anasazi and Neandertals processed their colleagues. They skinned them, roasted them, cut their muscles off, severed their joints, broke their long bones on anvils with hammerstones, crushed their spongy bones, and put the pieces into pots.” Borrowing a line from a review of his book, White says: “To say they didn't eat them is the archaeological equivalent of saying Clinton lit up and didn't inhale.”

    White's graduate student David DeGusta adds that he has compared human bones at burial sites in Fiji and at a nearby trash midden from the last 2000 years. The intentionally buried bones were less fragmentary and had no bite marks, burns, percussion pits, or other signs of food processing. The human bones in the trash midden, however, were processed like those of pigs. “This site really challenges the claim that these assemblages of bones are the result of mortuary ritual,” says DeGusta.

    After 30 years of research, Turner says it is a modern bias to insist that cannibalism isn't part of human nature. Many other species eat their own, and our ancestors may have had their own “good” reasons—whether to terrorize subject peoples, limit their neighbors' offspring, or for religious or medicinal purposes. “Today, the only people who eat other people outside of starving are the crazies,” says Turner. “We're dealing with a world view that says this is bad and always has been bad. … But in the past, that view wasn't necessarily the group view. Cannibalism could have been an adaptive strategy. It has to be entertained.”


    Feeling a Protein's Motion

    David Ehrenstein

    There's a new way to watch proteins shimmy and dance as they carry out their biological tasks. Researchers traditionally follow these shape changes spectroscopically, deducing them from changes in the molecules' ability to absorb particular wavelengths of light. But in the 22 July Proceedings of the National Academy of Sciences, a group in Israel reports taking a more direct approach: planting the tip of an ultrafine glass fiber on top of the protein and actually feeling it move.

    Fine touch. Monitored by a laser, a probe senses shape changes in a film of the protein bacteriorhodopsin when struck by pulses from other lasers.


    The technique gives researchers studying the rates and extent of conformational changes in proteins a new tool, one sensitive to motions that spectroscopy cannot detect, says Mordechai Sheves of the Weizmann Institute of Science in Rehovot, Israel, a member of the group. Other researchers are intrigued but say they want confirming evidence from other groups that the method really detects only protein motion.

    The researchers—Aaron Lewis, Michael Ottolenghi, Sheves, and their colleagues at Hebrew University in Jerusalem and the Weizmann Institute—zeroed in on bacteriorhodopsin (bR), a protein found in the membranes of certain bacteria, where it responds to light by changing shape and pumping protons across the membrane. In order to “feel” these motions, the group used a variant of atomic force microscopy, a technique in which an ultrafine probe is scanned across a surface, sensing its atom-scale bumps and depressions to make an image.

    For their experiment, however, Lewis and his colleagues kept the ultrafine probe in one place, poised atop a thin film of bR-filled membranes. In response to pulses of laser light, the proteins changed shape and then relaxed again in a matter of milliseconds. At the same time, another laser sensed a minuscule displacement of the probe tip. To show that the motion wasn't caused by laser-induced heating, the group used a well-documented property of bR—that another, appropriately delayed laser pulse of the right wavelength can reverse the photoreaction, stopping the protein midcycle and sending it back to its initial state. The second pulse sent the tip back toward the sample, as expected; heating would have displaced the probe further outward.

    Because the group's apparatus was equipped with an unusually stiff probe able to respond at high frequencies, the researchers could track the protein motions on time scales of microseconds—unprecedented resolution for atomic force microscopy. The time course of the probe motion doesn't quite match the data obtained spectroscopically, but some of the time constants are in rough agreement. According to Sheves, the group also detected stages of the protein's shape change that researchers have never reported before. These “spectroscopically silent” motions, he says, point to a new model for the protein's initial responses to light.

    Sheves expects the technique to deliver similar insights into the contortions of other molecules. “It gives you a new direct probe to look at conformational changes in proteins,” says Sheves. Robert Glaeser of the University of California, Berkeley, is intrigued by the new model for bR's reaction to light, which he calls “unprecedented,” but he thinks the work still needs some “reality checks.” For one thing, while the researchers know the tip moved, they didn't convert the laser signal into an actual distance.


    Possible Glimpse of Earth-like Geology in Mars Rock

    Richard A. Kerr

    One small rock on the vast, rusty desert of Mars has riveted the attention of geologists on Earth. The rock, dubbed Barnacle Bill for its pitted appearance by members of the Mars Pathfinder mission, has a strangely familiar makeup, according to an analysis by Pathfinder's rover, Sojourner. It contains far more of the ubiquitous substance silica than any known rock in the solar system outside Earth.

    Pathfinder's world. The rover Sojourner is analyzing a rock called Yogi in this 360-degree still life. Tracks in the martian dust lead back to Sojourner's first target, the silica-enriched Barnacle Bill.


    “That is an intriguing and tantalizing result,” says geologist Kevin Burke of Rice University in Houston. Intriguing, because high-silica-content rocks on Earth generally come from volcanic eruptions fueled by the sinking of plates of surface rock into the planet's interior, a process thought to be uniquely terrestrial. Tantalizing, because the Pathfinder lander and Sojourner will be hard pressed to pin down whether the rock really is volcanic or whether it's the product of some other process, such as sedimentation or meteorite impact. Nor can Sojourner, with its limited range and capabilities, tell researchers how common such silica-rich rock is on Mars. “It's an initial result,” says Burke. “Exactly what the significance is, we can't tell.”

    The one certainty seems to be that some process on Mars has concentrated silica, a compound of silicon and oxygen, until it constitutes about 60% of Barnacle Bill, or at least of its surface. The football-size rock was the first that Sojourner analyzed with its alpha proton x-ray spectrometer (APXS). This instrument exposes the rock to alpha particles from a radioactive source and analyzes the alpha particles, protons, and x-rays that bounce back from the rock's outer 100 micrometers or so, at energies that depend on the rock's mixture of elements. The indications of high silica in Barnacle Bill stunned geologists because ordinary basalt, the kind of rock expected to make up most of the martian surface, is only 45% to 50% silica.

    Two-thirds of Earth is also covered with basalt, in the form of ocean crust, and Burke notes that “it's quite hard to make rocks with more silica.” Earth does so in deep-sea trenches, where plate tectonics sends water-laden slabs of ocean plate slanting down into the mantle. On the way down, the water percolates upward into mantle rock above the slab, where it, in effect, distills some of the mantle's silica into a silica-enriched magma that feeds chains of volcanoes like the Andes, the Cascades, and the Aleutian Islands. These volcanoes mostly produce a rock called andesite, after the Andes, that contains about 60% silica, just like Barnacle Bill.

    Same silica content, same process? Not necessarily, say planetary geologists. For one thing, the high silica ratio doesn't prove that Barnacle Bill is an andesite, much less one produced by plate tectonics. It could also be made of small bits of basalt combined by the force of a large meteorite hit, or by sedimentary processes, with bits of an even more silicic rock. In ordinary volcanoes with no obvious link to sinking tectonic plates—and there are plenty of those on Mars—silica-poor minerals can sometimes crystallize and settle to the floor of the magma chamber, leaving magma enriched with silica. The resulting rock can range from andesitic in composition to as much as 75% silica.

    A geologist could at least tell whether Barnacle Bill is a mixture of rocks or a single lava by picking it up, breaking it open with a rock hammer, and inspecting it to see whether it has the fine-grained texture of a lava or is a coarse agglomeration of particles. Pathfinder is a good deal less capable than a human field geologist, but color imaging from the lander has already suggested that Barnacle Bill is uniform down to the centimeter scale, as expected of an andesite.

    If Barnacle Bill continues to look like an andesitic lava, it could lend support to a new picture of Mars's geologic past. Today, Mars has neither oceans nor any signs that plate tectonics is at work there. It appears to be a “one-plate planet,” encased in a single, thick layer of cold, immobile rock, as our moon has been for billions of years. But geophysicist Norman Sleep of Stanford University proposed in 1994 that the great northern lowlands of Mars, which cover one-third of the planet and lie 3 kilometers below the ancient highlands of the southern hemisphere, are the martian equivalent of ocean basins. Sleep proposed that they formed 3 billion to 4 billion years ago by the same drifting of plates still operating on Earth.

    Sleep's proposal has been controversial. “It's worth considering the concept of plate tectonics on an early Mars,” says planetary physicist David Stevenson of the California Institute of Technology in Pasadena. But he adds that “it's hard to know how to test it or develop a convincing theoretical argument.”

    Andesitic lavas would certainly bolster the case for plate tectonics on Mars if they turned up all across the planet. But finding more even at the Pathfinder site won't be easy. Barnacle Bill was the only bona fide rock to be cleanly analyzed in the rover's first 18 days. Operational problems, apparently resolved now, caused repeated delays (see sidebar), but the rocks themselves are presenting challenges as well.

    APXS analysis of another rock, called Yogi, at first suggested that it was more basaltic than Barnacle Bill, but a closer look at the rock face analyzed by APXS revealed what looked like a coating of dust and weathered minerals, says team member Ronald Greeley of Arizona State University in Tempe. Viking lander images had suggested that martian rocks would have such problem coatings (Science, 19 April 1996, p. 347). That leaves the makeup of the rock itself still uncertain, says Greeley. Although the APXS analysis of the third rock, Scooby Doo, had not been released at press time, team members are now describing it as more like a crust of solidified soil than a rock.

    Both the lander and rover seem to have weeks and even months of productive work ahead, however, so team members remain upbeat about getting a look at a lot more rocks. “Things are never quite as simple as you might like them to be,” says Greeley, “but that makes it interesting.”


    Flawless Hardware, Fallible Humans

    Richard A. Kerr

    No one had ever dropped a rover on another planet and tooled around the alien landscape, so some problems might have been expected when Mars Pathfinder arrived at its destination last month. Surprisingly, they did not come during the probe's high-speed, Rube-Goldberg-like crash landing. In fact, the Pathfinder lander and its rover Sojourner have put on flawless mechanical performances. Instead, it's the software and “humanware” that have proved imperfect.

    Right off, Sojourner and the lander had trouble conversing via radio. By repeatedly shutting down and restarting the rover's modem, a trick that often works on a home computer, controllers quickly solved that problem, but they never found its cause. Later, the lander's computer repeatedly dropped what it was doing and reset itself, throwing operations into disarray. Tests on the ground pinned down that problem to a software flaw. A software fix transmitted to the lander seems to have eliminated the glitch.

    Human behavior proved more difficult to perfect. Supervisors eventually had to send excited engineers home to get some rest after errors possibly due to fatigue—such as running the rover up onto a rock—began cropping up. And 2 weeks into the mission, what mission manager Richard Cook of the Jet Propulsion Laboratory (JPL) in Pasadena, California, calls “troubling miscues” began disrupting communications between the lander and mission operators at JPL. On one occasion, mission managers sent commands to the lander while its receiver was still turned off to conserve power, losing a whole day of operations. On another, they failed to be as precise as needed when they told operators of NASA's world-girdling network of radio dishes, the Deep Space Network (DSN), how to pick up Pathfinder's feeble signal, and another day was lost.

    “Telling [the DSN] what we want has been the problem,” says Cook. “You have to be precise, and it's taken us some time.” After back-to-back days of broken communications 16 days into the mission, JPL engineers carefully edged back into a reliable radio link with Pathfinder. They also began long-range planning of communications operations that should smooth out the link to Pathfinder. “We did know we were going to have to learn as we went along,” says Cook. Perhaps the rest of Sojourner's trip will go as smoothly as its arrival on the Red Planet.


    A Developmental Biology Summit in the High Country

    Wade Roush

    ALTA, UTAH—The ski hills surrounding this old silver-mining town provided an exhilarating setting for more than 1000 scientists who gathered here from 5 to 10 July for an unusual joint conference of the International Society of Developmental Biologists and the Society for Developmental Biology. A head-spinning assortment of topics from evolving gene families to fruit fly eyes abetted the high-altitude daze.

    Segmentation's Origins

    Biologists have long believed that the diverse body segments of most insects—head segments with antennae, for example, and thoracic segments with wings and legs—evolved from the many identical segments of more primitive arthropods that looked like today's centipedes and millipedes. In the 1980s, researchers thought they might have a simple explanation for the genetic changes responsible for this diversification of segments: a duplication and diversification of genes. But at the Utah meeting, Jennifer Grenier and colleagues in the lab of developmental biologist Sean Carroll at the University of Wisconsin, Madison, described new results challenging that explanation.

    Standard rations. Insects don't have more Hox genes than related groups; they just use them differently. (Red shows Ubx expression; blue is abd-A.)


    The older explanation grew out of the discovery that the fruit fly genome carries eight consecutive “homeobox” (Hox) genes, named after the conserved DNA sequence they all contain. Because each Hox gene helps a particular segment acquire its unique identity during development, the find suggested that the insects' evolutionary ancestor had only a few Hox genes, and that insects acquired distinct structures on their segments as extra copies of these genes accidentally cropped up in insect DNA and then specialized.

    If so, then other surviving descendants of this hypothetical ancestor would be expected to lack some of the fly's eight Hox genes. But Grenier and her colleagues now report that they have detected all eight genes in centipedes and even in onychophorans—wormlike creatures that are often described as “living fossils,” the closest living relatives to the group that gave rise to the arthropods, including insects. (The work is also described in the 1 August issue of Current Biology.) Because ancestors of the two groups diverged from insect ancestors long before the insect body plan subdivided, says Grenier, the finding implies that “the gene duplications didn't happen during insect evolution. They were much more ancient.”

    Indeed, says geneticist and Nobel Prize winner Ed Lewis of the California Institute of Technology in Pasadena, one of the earliest proponents of the idea that Hox gene duplication brought about insect segment diversity, the Grenier team's work “quite nicely” puts that theory to rest. As an alternative, Grenier proposes that segment diversity arose from changes in Hox gene activity.

    To analyze the Hox genes of centipedes and onychophorans, Grenier, fellow Carroll lab members Theodore Garber and Robert Warren, and Australian collaborator Paul Whitington first purified the organisms' DNA, and then used the polymerase chain reaction to amplify their homeobox regions. The researchers then sequenced the regions and compared them with fruit fly sequences. They found that each fly Hox gene has a related or “orthologous” gene in the centipedes and onychophorans.
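    Orthology calls like these ultimately rest on how strongly the short homeodomain is conserved: an amplified sequence is matched against each fly Hox class. The Python sketch below illustrates the simplest version of that comparison; the sequences, names, and scoring are invented placeholders, not data or methods from the Grenier study, which would also corroborate such calls with phylogenetic analysis.

```python
# Minimal sketch of ortholog assignment by percent identity over an aligned
# homeodomain fragment. Sequences are made-up placeholders, not real Hox data.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent of identical residues in two equal-length aligned sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be pre-aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Hypothetical homeodomain fragments, for illustration only.
fly_hox = {
    "Ubx":   "RRRGRQTYTRYQTLELEKEFHFN",
    "abd-A": "RRRGRQTYTRFQTLELEKEFHFN",
}
centipede_candidate = "RRRGRQTYTRYQTLELEKEFHTN"

# Assign the candidate to the fly gene it matches best.
best_gene, best_score = max(
    ((gene, percent_identity(seq, centipede_candidate))
     for gene, seq in fly_hox.items()),
    key=lambda pair: pair[1],
)
print(f"best match: {best_gene} at {best_score:.1f}% identity")
```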

    Having ruled out the simplest theory of insect segment diversification, Grenier and her co-workers went on to explore whether changes in gene regulation, either of the Hox genes themselves or of the genes the Hox genes control in turn, might explain it instead. The group found major differences in the ways embryonic fruit flies, centipedes, and onychophorans deploy orthologous genes. For example, the Hox genes Ubx and abd-A are expressed primarily in the abdominal segments of fruit fly larvae, but in the centipede embryo the two genes are active in all segments but the head, and in the onychophoran embryo they are turned on only in the hindmost segment.

    To understand exactly how these organisms came to have different body plans, says Carroll, researchers will have to compare the complex regulatory regions of the Hox genes and the genes they regulate in many different species, reconstructing shifts in the timing and location of gene expression that may have altered different lineages' development. While this form of evolutionary tinkering is “more complicated” than simply duplicating existing genes, Grenier says, “it's also more exciting, because you can see the huge potential for generating diversity.”

    Arraying the Fly Eye

    A fly's compound eye is one of the marvels of development. Because each of its hundreds of independent photoreceptor units, called ommatidia, sees in a slightly different direction, the units must be assembled in an absolutely uniform hexagonal array for the fly's brain to piece together a coherent, wide-angle view of the world. But just how this design is imposed on the eye imaginal discs, the clumps of undifferentiated cells in the fly larva that give rise to the ommatidia, has long puzzled biologists. In Utah, however, developmental researcher Ross Cagan reported studies of the fruit fly Drosophila melanogaster that may finally explain how the tidy array arises.

    Compound interest. Lingering atonal activity (green) marks the cells that will be ommatidia. (Image: R. Cagan)

    Researchers have known for years that during fly eye formation an indentation known as the “morphogenetic furrow” sweeps across the imaginal disc from back to front, leaving behind rows of perfectly spaced ommatidia. “The furrow is the transition from no pattern to pattern,” says Cagan. “But how do you get the pattern? That's one of the Holy Grails of fly genetics.” Patricia Powell and Susan Spencer, postdocs in Cagan's lab at Washington University in St. Louis, have now gained a handhold on that prize.

    They have shown that as the furrow moves, cells destined to create a new ommatidium supply a protein signal that prevents the cells immediately ahead from forming another ommatidium. Instead, they form the narrow interommatidial spaces, while the cells between these spaces, which have not received the inhibitory signal, differentiate into ommatidia. These in turn release the inhibitory signal, and the process repeats until the furrow fully traverses the disc. “Periodic spacing patterns are everywhere in development, but we know very little about how the placement of one pattern element affects the positioning of the other,” says Don Ready, a developmental biologist at Purdue University. “Cagan's results open [this] to molecular genetic attack.”

    To come up with this model, Cagan's group started by looking at the expression of a gene called atonal (ato), shown by previous researchers to provide a signal crucial to ommatidium development by instructing one cell to become the central, so-called “R8” photoreceptor. As the furrow moves forward, other studies had shown, ato is expressed in the cells just in front of it. When the furrow engulfs the ato-expressing cells, ato expression is switched off again, but Cagan's team noticed that while this shutdown occurs immediately in the cells destined to become interommatidial spaces, it takes longer in the future R8 cells.

    Because these results indicated that the timing of ato expression is key to ommatidia differentiation, the group went looking for the signals that control it. They eventually found that as the furrow approaches, the activity of ato is turned down by a signal sent through the Ras pathway, a well-known intracellular signaling cascade. This doesn't happen right away, however, in cells destined to be ommatidia, because the Ras signal they receive is apparently weakened when it meets up with another protein called Rhomboid. The combination of Ras and Rhomboid also induces the expression of a protein called Argos, which interferes with the Ras pathway and further slows the shutdown of ato.

    Argos, however, seems to prevent this same process from unfolding in the next row of cells. By tagging Argos with a fluorescent marker, the group saw that each nascent ommatidium squirts the protein forward into the pending row. Earlier research had shown that Argos blocks the expression of Rhomboid, so with that observation, a model for the entire patterning pathway clicked into place.

    When it reaches the cells of the unpatterned region immediately ahead of the furrow, Argos inhibits Rhomboid there. That blocks the signals that slow the shutdown of ato, resulting in an interommatidial space. Without Rhomboid, however, the Ras pathway can't make new Argos, so more distant cells in the unpatterned region don't receive Argos early, allowing them to produce a new ommatidial field—which in turn activates new Argos to continue the cycle. Thus, each ommatidium clears the region ahead of it, ultimately resulting in a hexagonal pattern. Cagan's group is now attempting to verify each of these steps, by studying how the pattern is affected when various components of the system are knocked out.
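    The relay logic of that model can be captured in a toy simulation. The Python sketch below is an illustration under drastic simplifications (one row of cells at a time, all-or-none signals), not the Cagan lab's model; it shows only how the rule "an ommatidial row makes Argos and exports it one row forward, and Argos blocks the Rhomboid-dependent machinery that would make more Argos" yields evenly spaced elements.

```python
# Toy relay of the Argos/Rhomboid logic described above. The names follow the
# article's account, but the all-or-none rules and one-row granularity are
# invented simplifications, not the Cagan lab's quantitative model.

def pattern_rows(n_rows: int) -> list[str]:
    """Assign fates row by row as the 'furrow' advances."""
    fates = []
    argos_ahead = False  # did the previous row export Argos into this one?
    for _ in range(n_rows):
        if argos_ahead:
            # Argos blocks Rhomboid, so ato shuts off fast: an interommatidial
            # space forms, and with no Rhomboid, no new Argos is exported.
            fates.append("space")
            argos_ahead = False
        else:
            # No Argos arrived: Rhomboid slows the ato shutdown, an ommatidium
            # forms, and Ras plus Rhomboid induce fresh Argos, which is
            # squirted into the next row forward.
            fates.append("ommatidium")
            argos_ahead = True
    return fates

print(pattern_rows(6))
# ['ommatidium', 'space', 'ommatidium', 'space', 'ommatidium', 'space']
```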

    Organs Made to Order

    A developing organism's cells are like highly trained orchestra members: Each carries the full genetic score but has to listen to cues from the cells around it to know when to play its own specific portion. Now, scientists from Japan have learned to administer a key cue to undifferentiated cells from newt and frog embryos, enticing them to play whole movements by growing into finished organs such as a liver or even a beating heart. If a similar approach can be made to work for mammals, it might aid efforts to construct replacement human organs from embryonic and fetal tissues.

    For the work, developmental biologist Makoto Asashima and his colleagues at the University of Tokyo used cells taken from a part of the early amphibian embryo called the animal cap, a region that normally expands to form all the embryo's mesodermal (middle) tissues, such as muscle and internal organs. Several years ago, the researchers noticed that low concentrations of activin—a protein known to be important for organ formation in the intact embryo—in the culture fluid, about 0.5 nanogram per milliliter (ng/ml), caused animal cap cells to develop into red or white blood cells, while slightly higher doses induced the formation of muscle tissue. Once initiated, these tissues' developmental programs seemed to run without external input. “This led me to wonder whether it would be possible to create a complete, functional organ in vitro,” Asashima says.

    He has accomplished just that. A 50 ng/ml dose of activin produced a notochord, a rod of cells along the embryo's dorsal surface that gives rise to much of the nervous system. A dose of about 75 ng/ml gave a heart, complete with heartbeat, while 100 ng/ml yielded a liver. By adding other substances such as retinoic acid and insulin-like growth factor at various points in development, Asashima also made a pronephros—a precursor to the kidneys—and rudimentary eyes and ears. “The question of what purified factors can do, exemplified by the work of Asashima and colleagues, is very interesting,” says Hazel Sive, a molecular biologist at the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts. “Maybe if you get the right tissues and the right factors, you could get [human organs] to regenerate in a dish, which is what everybody really wants.”
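    Read as a recipe, the result is a dose-to-organ mapping. The Python sketch below encodes only the doses quoted above; the boundaries between them are arbitrary guesses for illustration, not values reported by Asashima's group.

```python
# Dose-to-tissue lookup built from the activin doses quoted in the text
# (ng/ml). Thresholds between the quoted doses are invented placeholders.
ACTIVIN_OUTCOMES = [  # (minimum dose, reported outcome), highest first
    (100.0, "liver"),
    (75.0, "beating heart"),
    (50.0, "notochord"),
    (1.0, "muscle tissue"),   # "slightly higher" than blood-inducing doses
    (0.5, "red or white blood cells"),
]

def predicted_tissue(dose_ng_ml: float) -> str:
    """Return the outcome for the highest threshold the dose reaches."""
    for threshold, outcome in ACTIVIN_OUTCOMES:
        if dose_ng_ml >= threshold:
            return outcome
    return "no mesodermal induction"

print(predicted_tissue(75.0))   # beating heart
print(predicted_tissue(0.5))    # red or white blood cells
```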


    New Insights Into How Babies Learn Language

    Marcia Barinaga

    When it comes to understanding language, it's a phonetic jungle out there. Adult speech is far from uniform, with countless subtle variations on each sound, such as the “a” in “cat” or the “o” in “cot.” But somewhere a line must be drawn, separating the cats from the cots. So, as children learn language, they must master which phonetic differences to pay attention to and which to ignore. A paper in this issue of Science and one in last week's issue of Nature shed some new light on how babies gain this key skill.

    A melody with meaning. Mothers' speech may help babies to form their own vowel triangles, albeit at higher pitch, by 20 weeks of age.


    Adults in many cultures use a singsong type of exaggerated speech when they speak to babies. This speech, often called “parentese,” seems to serve to get the baby's attention and to communicate and elicit emotions. But on page 684, Patricia Kuhl of the University of Washington, Seattle, and her co-workers provide evidence that it may be more than just a tool of endearment. Their analysis of the exaggerated and varied “caricatures” of vowel sounds that mothers use when talking to babies suggests, Kuhl says, that those distortions help infants learn the key features of the sounds.

    But babies then set aside their capacity to make some of these distinctions. Christine Stager and Janet Werker at the University of British Columbia in Vancouver report in the 24 July Nature that when infants begin learning words, they neglect some differences between sounds. Presumably, that's because those distinctions won't matter until later, when their vocabulary becomes crowded with similar-sounding words.

    Indeed, it appears that at each stage of early language learning, from categorizing sounds to applying those categories as they learn words, infants' brains are honing their efficiency, making rules for what to notice and what to dismiss. “To be experts in a language, we need to learn not only to make relevant distinctions, but to ignore irrelevant variability,” says Stanford developmental psychologist Anne Fernald.

    Kuhl and others have studied the sound-sorting process that precedes word learning. In 1992, Kuhl's team reported that by 6 months of age, Swedish and American babies learn to categorize vowel sounds, paying attention to distinctions that are meaningful in their native language, such as the difference between “ee” and “ah,” while ignoring meaningless variations, such as all the ways a person might say “ee.” Work done in the 1980s by Nan Bernstein Ratner at the University of Maryland suggested that English parentese might help babies learn these distinctions. Now Kuhl has probed further by studying the parentese of three different languages—English, Swedish, and Russian—to see if the distorted tones provide cues that may be useful for vowel pronunciation.

    Her team's analysis focused on formants, the resonant frequencies that, like notes in a musical chord, make up each vowel sound. If vowel sounds are plotted on a graph, with the frequencies of the two dominant formants represented on the x and y axes, the result is a “vowel triangle,” with the sounds “ah,” “ee,” and “oo” at the corners.

    Kuhl's group found that, in all three languages, mothers talking to their babies produced exaggerated versions of vowels, emphasizing the features that distinguish them from each other. This nearly doubled the area of the vowel triangle. “It looks like the mothers are increasing the value of the signal,” says Kuhl.
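    The comparison behind that statement is geometric: each vowel is a point in formant space, and the exaggeration shows up as a larger triangle enclosed by the corner vowels. Here is a quick sketch of the computation in Python; the adult formant values are rough textbook-style numbers and the infant-directed ones are invented exaggerations, not Kuhl's measurements.

```python
# Area of a vowel triangle in (F1, F2) formant space, via the shoelace
# formula. All formant values (in hertz) are illustrative placeholders,
# not data from Kuhl's study.

def triangle_area(p1, p2, p3):
    """Shoelace formula for a triangle given three (F1, F2) vertices."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# Corner vowels "ah", "ee", "oo" as (F1, F2) in Hz.
adult_directed = [(730, 1090), (270, 2290), (300, 870)]
infant_directed = [(850, 1000), (240, 2600), (280, 750)]  # exaggerated

ratio = triangle_area(*infant_directed) / triangle_area(*adult_directed)
print(f"parentese / adult triangle area: {ratio:.2f}")  # ~1.7 here
```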

    The mothers' speech also provided many examples of each vowel sound. This, Kuhl proposes, may help babies learn the features that make each sound special, and learn to ignore the phonetic variations that fall within a given vowel sound. Indeed, by 20 weeks of age, babies' babbling contains distinct vowel sounds that form their own—albeit higher pitched—vowel triangle.

    The work “illustrates a close tie between the input and what the child is doing,” says language researcher Peter Jusczyk of Johns Hopkins University. But that falls short of proving that parentese serves an instructive role. “The fact that parents do it doesn't necessarily mean that it is essential for language learning,” says Stanford's Fernald. That hypothesis might be tested, she says, with studies across cultures that use different amounts or types of parentese.

    Once infants learn the important distinctions between speech sounds in their native language, they appear to bank some of those abilities for later use. Stager and Werker showed this in a study of infants at 14 months, an age when babies are just beginning to learn words and match them to meanings. They tested to see whether infants who were engaged in word learning would catch small but significant changes in those words.

    In earlier studies, Werker and Les Cohen at the University of Texas, Austin, showed that 14-month-olds could learn to associate a particular word with an image, and would notice if the word was changed. Werker and Cohen alternately showed the infants a picture of one nonsense object while playing a tape of the spoken nonsense word “lif” and a picture of another object while playing the word “neem.” If, after many repetitions of the object-word pairs, the babies were shown the “lif” object but heard “neem,” they studied the object longer, indicating they noticed the name switch.

    The current study had the same design, but the names—“bih” and “dih”—differed phonetically by just one sound. Control studies showed that babies could make this distinction when they heard the words on their own. But when the words were linked with objects, the babies didn't seem to notice the switch. “To our surprise, they are actually listening less carefully” when they are listening for word meaning, says Werker. Werker suggests the babies miss the switch between “bih” and “dih” because—as studies by other teams have shown—their tiny vocabularies don't generally have words that differ by only one sound, so they don't yet need to concentrate on that level of detail. The distinctions they have learned are “almost like reserve capacity,” she says.

    That makes sense, says Jusczyk; for babies to spend effort on such distinctions would be a waste at that stage of development. “There is only so much you can do at once,” he says, and it is important for infants engaged in the daunting task of learning words to disregard information that is not absolutely necessary. Later, when their vocabularies become crowded with words, that reserve capacity to distinguish sounds—a payoff perhaps of parentese—will be essential for navigating in the phonetic jungle.
