News this Week

Science  27 Apr 2001:
Vol. 292, Issue 5517, pp. 614

    Universities, NIH Hear the Price Isn't Right on Essential Drugs

    Eliot Marshall

    When the University of Minnesota won a drug patent fight in 1999, its president, Mark Yudof, said it was “like winning the lottery.” Now that lottery prize, worth as much as $300 million over time, has put Minnesota into the middle of an international debate over whether the public should get back some of the profits generated by biomedical research it has funded. A major driving force has been the cost and availability of AIDS drugs in developing countries—an issue on which advocates of limiting drug profits claimed a victory last week in South Africa. And the U.S. National Institutes of Health is being drawn reluctantly into the fray by a congressional directive to identify big moneymaking drugs derived from NIH-funded research.

    Research rewards.

    Minnesota's Robert Vince and his very valuable antiviral molecule, carbovir (below).


    What's at stake is the process by which drug companies develop and bring new products to market. Much of the underlying research for new drugs stems from university-based work, typically funded by the government. Under a series of laws passed since 1980, publicly funded researchers and their institutions are encouraged to patent and commercialize discoveries. But billion-dollar sales and high profit margins on certain drugs created in part with public monies have led some to ask whether the government deserves a slice of the revenues. Universities are also nervous about being seen as greedy.

    The public is starting to “look askance” at academic biomedicine's interest in profits, says Stanford University's dean of medicine, Eugene Bauer. Speaking last week at the National Academy of Sciences, Bauer warned that “we are perhaps entering into an era of distrust” about faculty patents and spin-off corporations. He suggested that medical schools need to build public confidence with clear conflict-of-interest rules. At the same time, faculty patent holders are bewildered by the mixed signals they're getting. Says Robert Vince, a medicinal chemist and drug inventor at Minnesota's Twin Cities campus: “I like to think that what we did was useful. … All of a sudden, we're bad guys because we developed an AIDS drug.”

    Two years ago, Minnesota managed to convince Glaxo Wellcome, now GlaxoSmithKline, that Vince and his colleague Mei Hua, who had filed patents on nucleoside analogs, were partial inventors of Ziagen (abacavir), a drug approved for AIDS therapy in December 1998. But this month, a group of Minnesota students staged a teach-in to pressure the university to forgo some of these royalties and not enforce its Ziagen patent in poor countries. These demands echo a campaign 10,000 kilometers away in South Africa, where activists are cheering a decision last week by 39 companies to withdraw from a suit to protect their patent rights. And activists at Yale University won another concession: They persuaded the university to rewrite a license agreement with Bristol-Myers Squibb Co. for the AIDS drug d4T so that generic knock-offs could be sold with impunity in South Africa.

    The debate is most acute in Africa, but U.S. politicians are getting involved, too. Last December, the U.S. Senate asked NIH to keep tabs on “blockbuster” drugs that arise from government-sponsored research and to draw up a plan to recapture some of the money. The proposal was advanced by Senator Ron Wyden (D-OR), who has long been concerned about government research “walking out the door” without an adequate return to taxpayers, one observer says. A decade ago, as a member of the House, Wyden investigated the NIH-funded discovery of Taxol, the toxic compound derived from the Pacific yew that Bristol-Myers Squibb developed into an anticancer drug. Wyden failed last fall to attach his proposal to an NIH spending bill, but his colleagues agreed to include it in an accompanying report on the bill.

    With the goal of “securing an appropriate return on the NIH investment in basic research,” the report asks NIH to draw up a list of FDA-approved drugs with annual U.S. sales of $500 million that also received NIH backing and to prepare a plan to “ensure that taxpayers' interests are protected.” The report is due in July, but the high threshold could limit it to one or two drugs. NIH officials declined to comment, saying only that they are working on the report.

    Back in the Twin Cities, meanwhile, the University of Minnesota is holding firm. Christine Maziar, vice president for research, says the university “applauds” Glaxo's plans to reduce the cost of other drugs and “would welcome” a price reduction of Ziagen in sub-Saharan Africa, “despite a potential reduction in royalties.” But the university will not abandon its intellectual property: “As a public institution, we are not able to give away a public asset,” Maziar says about the patent on Ziagen. “If a farmer were to donate land, we wouldn't be able to give that away, either.”

    Amanda Swarr, a graduate student in women's studies at Minnesota and leader of the Ziagen protest, argues that “negligible” revenues are at stake in Africa. Besides, she says, “the university needs to put people's lives over patents.”

    Vince says he's trying to do exactly that by putting his share of the Ziagen money to work on three potential new AIDS drugs and a drug design center. Those dreams, however, rest on the expected royalties from university-owned patents.


    Disputed AIDS Theory Dies Its Final Death

    Jon Cohen

    At an unusual Royal Society meeting in London last September, a controversial theory that a contaminated polio vaccine triggered the AIDS epidemic was all but pronounced dead. Now, a paper in this issue (see p. 743) and three more in this week's issue of Nature collectively declare that—to paraphrase the Munchkin coroner in The Wizard of Oz—the theory is not only merely dead, it's really most sincerely dead.

    The Royal Society meeting (Science, 15 September 2000, p. 1850) and these new studies are a response to a hotly debated 1999 book, The River. In it, British writer Edward Hooper links the first known cases of AIDS to tests of an oral polio vaccine in 1 million Africans more than 40 years ago. Hooper contends that in the manufacturing process, scientists accidentally introduced a precursor of HIV, a chimpanzee virus known as SIVcpz, into the vaccine. Specifically, Hooper asserts that the scientists grew the poliovirus vaccine in cells taken from chimps infected with SIVcpz. The scientists, led by Hilary Koprowski, former director of the Wistar Institute in Philadelphia, denied the charge, asserting that they grew the vaccine virus in monkey, not chimp, cells. They further contended that no evidence supported the notion that SIVcpz or HIV had contaminated any batches of the vaccine.

    Preliminary data presented at the Royal Society meeting challenged each of Hooper's main claims, and these four new papers now formally dismiss them. Three of the four papers—including the one in this issue by Hendrik Poinar and colleagues at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany—examined old samples of Koprowski's vaccine and found that none contained DNA from chimpanzee cells. Each lab also found evidence of monkey DNA. Two of the labs further looked for genetic material from HIV or SIVcpz but found none.

    The fourth paper, by evolutionary biologist Edward Holmes and co-workers at Oxford University, analyzed an altogether different contention made by Hooper: that the odd shape of the evolutionary tree formed by different strains of HIV supports the contaminated polio vaccine theory. Hooper highlighted the fact that the various subtypes of HIV seemed to appear simultaneously, forming clusters called “starbursts”; these theoretically could have occurred if this massive human trial used an SIVcpz-contaminated vaccine. In Hooper's hypothetical scenario, the vaccine would have contained a range of viral subtypes, which either existed in one chimp or came together when scientists pooled cells from several chimps.

    By studying 197 HIV isolates obtained in 1997 in the Congo—where the bulk of these polio vaccine tests took place—Holmes and co-workers found that the HIV tree does not show the distinct subtype clusters seen in trees previously constructed from isolates collected around the world. “The starburst is no longer there,” says Holmes. Rather than all of the subtypes originating first in chimps, these data suggest that the subtypes evolved in humans. “A set of people want HIV and AIDS to be a unique thing—it's so unexplainable that they think that somebody must be responsible,” says Holmes. “But it's actually like any other virus. It differs in that what it does to us is so horrendous.” (Hooper did not respond to an interview request.)

    To Holmes, these studies have, in the absence of new evidence, thoroughly dismissed Hooper's theory. “Hooper's evidence was always flimsy, and now it's untenable,” says Holmes. “It's time to move on.”


    Stem Cells Are Coaxed to Produce Insulin

    Gretchen Vogel

    In a boost for scientists who hope to turn the potential of undifferentiated stem cells into medical miracles, researchers have found a way to produce insulin-producing cells from mouse embryonic stem (ES) cells.

    There is ready-made demand for anyone who can achieve such alchemy in human cells: millions of patients with diabetes. Doctors have reported promising results in transplanting pancreatic cells from cadavers into diabetic patients, enabling a handful of recipients to stop insulin injections indefinitely. But the demand for cells is far greater than the supply. An unlimited source of cells that can produce insulin in response to the body's cues would thus be a hot commodity.

    But although scientists have transformed ES cells into a range of cell types such as neurons and muscle, pancreatic beta cells—the cells that produce insulin—have been an elusive target. Scientists know relatively little about the genes that control development of the endoderm, the layer of cells in the early embryo that gives rise to many of the internal organs. Nor do they know why ES cells left to differentiate in culture spontaneously produce cells resembling muscle, neurons, and even intestine—but only rarely pancreatic cells.

    In a paper published online today by Science, Nadya Lumelsky, Ron McKay, and their colleagues at the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland, describe a five-step culturing technique that can turn mouse ES cells into cell clusters that resemble pancreatic islets. The cells produce small amounts of insulin and seem to behave similarly to normal pancreas cells. “The percentage of cells that become insulin positive is remarkable and way above what others have reported,” says developmental biologist Palle Serup, who studies pancreas development at the Hagedorn Research Institute in Gentofte, Denmark.

    McKay's team usually focuses on brain development but was drawn to this area by recent papers showing similarities between neural and pancreatic development. For example, Serup and his colleague Ole Madsen demonstrated last year that pancreas cells and neurons use some of the same genetic pathways during differentiation. And two other teams recently reported that some pancreas cells express nestin, a protein typical of developing neural cells.


    The members of McKay's team already knew how to encourage mouse ES cells to express nestin. They wondered if they could coax their nestin-positive cells to take on characteristics of pancreas cells. When they briefly exposed nestin-positive cells to a growth factor, the cells differentiated not only into neural cells but also into clusters that resemble the insulin-producing islets in the pancreas. The clusters' inner cells produced insulin, while outer cells produced glucagon and somatostatin, two proteins typical of pancreas cells. “It really looks as if you're getting bits of the animal—groups of cells that are assembled together,” McKay says. He says he and his team have grown nestin-positive cells from mouse bone marrow, but they have different properties. They have not yet tried this protocol with these adult cells.

    The ES-derived cells produce insulin in response to glucose—the fundamental role of beta cells—and they increase their insulin production when exposed to chemicals that prompt insulin secretion in normal pancreas cells.

    Important caveats remain, however. The clusters produce only about 2% as much insulin as normal islets do. And when the cells were implanted into diabetic mice, the animals' blood sugar did not return to normal, although transplanted mice survived longer than control animals. Moreover, the cells failed to produce insulin in response to a 5-millimolar concentration of glucose, a level that typically triggers a response in beta cells. “The cells are clearly not behaving as normal beta cells,” says Serup, who also notes that the gene PDX1, a hallmark of mature beta cells, is expressed only at low levels.

    The low insulin production does not discourage researchers such as molecular biologist Ken Zaret of the Fox Chase Cancer Center in Philadelphia. “The glass is 1/50th full,” says Zaret, who predicts that refinements in the culture technique or drug manipulation will boost insulin production. “The amount of insulin they produce is less than it should be if they're mature beta cells,” agrees developmental biologist Douglas Melton of Harvard University. But he is nevertheless eager to see whether the technique works with human cells. McKay has shared the protocol with him, he says, and he is trying it with human ES cells in his lab.


    Images and Model Catch Planets As They Form

    Mark Muro*
    *Mark Muro writes from Tucson, Arizona.

    The story of how planets are built is getting down and dirty. In recent years, the Hubble Space Telescope has produced striking images of the doughnut-shaped disks of gas and dust ringing distant stars—the raw material of planet creation. Meanwhile, detections of extrasolar planets prove that planets form readily under many conditions. What's been missing is evidence of the actual process by which protoplanetary ingredients grow into the finished product.

    True grit.

    Light from circumstellar disks (above) shows that dust can grow into planets even in the hostile Orion nebula.


    Now astronomers think they've spotted it. In a paper published online today by Science, researchers from the Southwest Research Institute (SwRI) in Boulder, Colorado, and the University of Colorado, Boulder, report on the first stage of planet formation: the growth of dust grains into significantly larger particles in the circumstellar dust and gas disks of the Orion nebula. They also describe a model showing how these larger particles can survive and grow in the often harsh environments where planets coalesce.

    “This work is important—really the first rigorous empirical look at the process by which nature converts dust in the disks into planets,” says Geoff Marcy, a leading planet hunter at the University of California, Berkeley. Henry Throop, a planetary scientist at SwRI and the first author of the paper, calls it “a missing link … the intermediate stage between the dust that is all around and the planets we see out there.”

    The study began when workers led by co-author John Bally of the University of Colorado observed that six protoplanetary disks in the Orion nebula were invisible in the 1.3-mm radio band. That seemed to imply that particles in the disk had an unexpectedly low total surface area—as they would if they were as large as a few centimeters across, Throop says. Intrigued, the researchers examined Hubble Space Telescope images to see how visible and near-infrared light in the region passed through the largest circumstellar disk in the Orion nebula, known as 114–426. By measuring how dust in the disk scattered light of different wavelengths, the scientists calculated that typical particles in the disk are at least 5 micrometers across, 25 to 50 times larger than common circumstellar dust. In the 100,000 years since 114–426 formed, the scientists concluded, dust particles have begun the agglomeration that ultimately generates planets.

    That raised a harder question: How? To cohere, particles in the Orion nebula must withstand fierce ultraviolet radiation from the nebula's hottest, most massive young stars. In dense clusters like the Orion nebula, Throop notes, high-energy light from type O and B stars tears apart floating dust and gas disks like a cosmic leaf blower, wreaking havoc with nascent planetesimals. Yet clearly some survive and thrive.

    To understand how, Throop and colleagues made a mathematical model that pitted circumstellar disks against the hostile nebular environment. By inputting typical disk masses, initial dust grain sizes, and ionizing sources, the researchers tracked the abundance and size distribution of ices, silicates, and gas over time. After 1 million years, the modelers found, photoevaporation had blown away virtually all of the planet-forming raw material lying more than 40 astronomical units—the distance from our sun to Pluto—from the disk's central star. In the inner rings of the disk, however, where gravity is strong and dust clouds dense, colliding grains formed meter-sized silicate chunks within 100,000 years. Those boulders were easily large enough to survive the star's photon bombardment, although after 1 million years no ice or gas could last in the inner regions either. The message, Throop says, is “if you want to make planets, you'd better do it fast. You've only got about a million years before the disks are destroyed.”
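    That race between grain growth and photoevaporation can be caricatured in a few lines of code. The sketch below is not the SwRI/Colorado model: the two e-folding timescales are illustrative assumptions, chosen only to reproduce the qualitative outcome reported above (meter-sized bodies within roughly 100,000 years, an outer disk almost entirely stripped within a million).

```python
# Toy sketch of grain growth racing photoevaporation (illustrative only;
# both e-folding timescales below are assumptions, not values from the paper).
import numpy as np

t_end, dt = 1.0e6, 1.0e3        # follow the disk for ~1 million years in 1000-year steps
growth_efold = 6.0e3            # assumed e-folding time (yr) for grain radius via sticking collisions
photoevap_efold = 2.0e5         # assumed e-folding time (yr) for UV stripping of the outer disk

a = 1.0e-5                      # grain radius in cm (0.1-micrometer interstellar dust)
outer_mass = 1.0                # outer-disk raw material, arbitrary units
t_boulder = None                # time when grains first reach ~1 meter

for t in np.arange(dt, t_end + dt, dt):
    a = min(a * np.exp(dt / growth_efold), 100.0)   # cap at ~1 m; the growth physics changes there
    outer_mass *= np.exp(-dt / photoevap_efold)     # UV from nearby O and B stars erodes the outer disk
    if t_boulder is None and a >= 100.0:
        t_boulder = t

print(f"meter-sized bodies by ~{t_boulder:.0f} yr; "
      f"outer-disk material left after 1 Myr: {outer_mass:.1%}")
```

    With these made-up rates, boulders appear at roughly 100,000 years while less than 1% of the outer-disk material survives the first million years, which is the same build-fast-or-not-at-all message Throop draws from the real model.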

    Some experts are withholding judgment about the group's instant-planets scenario. “The work is potentially significant, but it's so concisely presented that it's hard to assess,” says C. Robert O'Dell, a professor of physics and astronomy at Vanderbilt University in Nashville, Tennessee. Although more details are needed for the results to win acceptance, O'Dell says, he is willing to be persuaded that even in tough stellar neighborhoods, new worlds can emerge faster than anybody thought.


    Critics of 'Halo Matter' Outrace the Presses

    Mark Sincell*
    *Mark Sincell is a science writer in Houston.

    The deliberate, patient world of traditional astronomy has run headlong into its high-speed future. The crunch came when astronomers announced the discovery of a new population of dim old stars called white dwarfs. In a paper published online by Science on 23 March, the team concluded that the stars are an important source of galactic dark matter, the mysterious substance that provides 90% of the gravitational force that binds the Milky Way together. If confirmed, it would be the first direct sighting in the 30-year search for dark matter.

    But the behind-the-scenes struggle to verify the discovery has been as bruising as a hotly contested Cabinet nominee's Senate hearing. In the month between online publication of the paper (on Science Express) and its appearance in print (on p. 698 of this issue), its authors have weathered criticism that typographical errors and misinterpreted data inflated their estimate of the dark matter density. The critics are also taking some heat for posting their papers online before their peers had an opportunity to review them.

    One thing everyone agrees on is that the Internet, specifically the Los Alamos National Laboratory (LANL) preprint server, catalyzed the debate by doing what it was set up to do: giving preliminary results a speedy public forum. “If the LANL server did not exist, [Science] probably would not be writing this article,” says the lead author of the Science paper, astronomer Ben R. Oppenheimer of the University of California, Berkeley.

    The dispute started innocuously when a team including Oppenheimer claimed to have discovered 38 white dwarfs orbiting in the halo of material that surrounds the disklike Milky Way. But there is a possible catch: Their survey covered a nearby part of the galactic disk in which disk residents mingle with halo stars that just happen to be passing through. Indirect observations show that almost all of the galaxy's dark matter is native to the halo, making disk stars nearly useless for explaining it.

    In principle, disk and halo dwarfs are easy to tell apart. Disk dwarfs are whirling around the galactic center at an average speed of 220 km/s, the same as the sun; halo dwarfs are, on average, standing still. Applying that test to separate out halo dwarfs and then extrapolating their density to the entire galactic halo, Oppenheimer and his collaborators estimated that white dwarfs could account for at least 2% of the dark matter known to make up 90% of the halo's mass. Suspected halo dwarfs currently too faint to be seen could boost their estimate to as much as a third of the dark matter.
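    The kinematic cut at the heart of the dispute is easy to illustrate. The sketch below is not Oppenheimer's analysis; the velocity distributions, sample sizes, and dark-matter bookkeeping are all invented for illustration. It shows only why the strictness of the velocity threshold controls how many ordinary disk dwarfs leak into the putative halo sample.

```python
# Hypothetical illustration of a kinematic disk/halo cut (not the published analysis).
import numpy as np

rng = np.random.default_rng(0)

# Fake survey: disk dwarfs co-rotate near 220 km/s with modest scatter;
# halo dwarfs show little net rotation but a large velocity spread.
disk_v = rng.normal(220.0, 30.0, size=950)   # km/s
halo_v = rng.normal(0.0, 150.0, size=50)
v = np.concatenate([disk_v, halo_v])

for cut in (60.0, 100.0, 150.0):             # required lag (km/s) behind disk rotation to call a star "halo"
    candidates = v[v < 220.0 - cut]
    # Toy extrapolation: compare the candidate count with an arbitrary
    # halo dark-matter budget to get a "fraction explained" (both numbers invented).
    fraction = len(candidates) / 2500.0
    print(f"cut = {cut:5.0f} km/s -> {len(candidates):3d} halo candidates, "
          f"~{fraction:.1%} of the assumed dark-matter budget")
```

    In this toy version, loosening the cut inflates the candidate count by sweeping up slow-moving disk stars, which is precisely the kind of contamination Reid's group argues crept into the published sample.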

    Shortly after their paper appeared on Science Express, however, several scientists fired back a barrage of electronic critiques. On 5 April, a team led by astronomer Neill Reid of the Space Telescope Science Institute in Baltimore posted a paper destined for the Astrophysical Journal Letters on the LANL server. Reid and collaborators argued that Oppenheimer's criterion was not stringent enough to remove all of the disk white dwarfs from his sample. In Reid's view, the white dwarfs are not dark matter at all, but merely a part of a previously known population of stars in the galactic disk. “It is a gaping hole in their argument,” Reid says.

    A week later, astrophysicist David Graff of Ohio State University in Columbus made a similar argument in an LANL posting that was also submitted to Science as a Technical Comment. And on 16 April, in another paper posted on the LANL server and submitted to Science as a Technical Comment, astronomers Brad Gibson of Swinburne University in Australia and Chris Flynn of the Tuorla Observatory in Finland pointed out typographical errors in Oppenheimer's online paper. (The printed version of the paper in this issue includes minor corrections.)

    Oppenheimer says he welcomes the criticism but that public release of the preprints before they had been peer reviewed distorted the debate. “Our paper went through the standard channels of scrutiny, with two referee reports that were very favorable,” he says. “None of these comments or papers have been properly refereed.” And rapid-fire online publication, he says, left no time to make a considered response. As a result, Oppenheimer says, “the playing field was unfair.”

    On 20 April, editors at Science asked the authors of the two Technical Comments to withdraw their preprints from the LANL server until they appeared in the journal. Gibson and Flynn complied with the request, although they say that it disrupts the normal flow of scientific discussion. “I'm quite stunned that Science is more concerned with being first than they are with being correct,” Gibson says.

    But the proverbial cat is out of the bag. Every one of the several experts Science contacted was already intimately familiar with the preprints in question. And their consensus on the white dwarf controversy is that the new survey has turned up some dark matter, but maybe not as much as the team claims. “I'd say that the Oppenheimer team makes a few assumptions that tend to increase the number of their white dwarfs attributed to the halo,” says astrophysicist Dave Bennett of the University of Notre Dame in South Bend, Indiana. “The Reid team does the opposite.” Astrophysicist Brad Hansen of Princeton University agrees. “Bottom line, these white dwarfs are definitely interesting, and I'm not sure anyone has the right picture yet.”


    Two Pledges Boost SESAME Project

    Robert Koenig

    ANKARA—A long-planned synchrotron project for the Middle East took a major step forward last month after its Jordanian hosts pledged the money to house the instrument and its German donors agreed to ship it.

    SESAME (Synchrotron Radiation for Experimental Science and Applications in the Middle East) was founded in 1999 to implement Germany's donation of BESSY-I, a 0.8-gigaelectron-volt synchrotron that has been mothballed in Berlin. Last month in Cairo, Jordan promised to fund a building for the accelerator and its upgraded beamlines at a site at Al-Balqa' Applied University outside Amman. At the same time, German research officials said they would ship BESSY shortly after groundbreaking this summer. “When this was announced, the whole atmosphere became positive, since SESAME members now think that the project will fly,” notes Herwig Schopper, former CERN director-general and head of SESAME's interim governing council.

    Schopper says five more countries have expressed interest in the project, making the total 16 and leaving Saudi Arabia and Syria as the only major nations in the region that have not yet joined. The members will help pay for the estimated $8 million in upgrades needed. Construction on the new building, to cost $11 million, is expected to begin this fall and be completed by the end of 2002.

    The council also approved plans for a biomedical institute alongside SESAME. The new entity, to be called the Middle East Biological Sciences Institute for Research, will make use of the synchrotron's beamlines. “We hope it will foster regional cooperation in the life sciences,” says Said Assaf, director-general of the Arafat National Scientific Center for Applied Research in Ramallah and the Palestinian Authority's representative to SESAME. “Science, like medicine, is for all who could utilize it best—and appreciate it.” Work on the new institute will wait until after completion of SESAME.

    JAPAN

    Reforms Could Threaten Facility Spending Hike

    Dennis Normile

    TOKYO—The Ministry of Education, Science, Technology, Sports, and Culture last week promised to spend $13 billion over the next 5 years to renovate and expand cramped and outdated research facilities in Japan's universities. Now, the country's researchers and educators are waiting to see whether the promise survives the expected election this week of Junichiro Koizumi as prime minister and the resulting government reshuffle.

    A recent ministry survey found that about one-fourth of the total floor space at national universities was more than 25 years old, meaning not only that the buildings are aging but also that they probably don't meet current standards for resisting earthquakes. Universities have also not expanded their research facilities in step with increased funding for research and additional numbers of postdocs and technicians. “The condition of facilities is really choking [research activities],” says Reiko Kuroda, a professor of chemistry at the University of Tokyo and a member of the Council for Science and Technology Policy, the country's highest policy advisory body. The council has made improving research facilities one of its priorities in a new 5-year spending plan.

    The new infrastructure money is seen as a sign of the government's intent to follow the council's overall plan, which calls for spending $195 billion on research-related projects. The problem was supposed to have been addressed under the previous 5-year plan. But a lack of coordination between the Education Ministry and the Ministry of Construction, which builds and remodels public buildings, held down spending to $8 billion, far below the target. The council was given increased authority to carry out the program as part of a government restructuring earlier this year. “To facilitate this rebuilding, we will be trying to coordinate [efforts] among the different ministries,” says Hiroshi Tamada, deputy director of policy planning for the council. The chief beneficiaries are expected to be graduate school classrooms and labs, designated centers of excellence, and biomedical facilities.

    Although Japan prides itself on its ability to carry out such long-range plans, the fate of the initiative is uncertain. Koizumi, a self-proclaimed reformer within the ruling Liberal Democratic Party best known for advocating the privatization of the country's huge postal savings system, has pledged to examine public works spending, which has been used repeatedly over the last decade to stimulate a stagnant economy.

    Shinichi Yamamoto, director of the University of Tsukuba's Research Center for University Studies, says he believes there will be strong support for continuing the recent boost in science funding. “And I think there is widespread understanding that we cannot perform research just with money; we need infrastructure, too,” he says. Still, he and others realize that it may be a while before they find out if the new government agrees.


    New Gene May Be Key to Sweet Tooth

    R. John Davenport

    Can't resist sweets? Sensory scientists have discovered a gene that may be responsible for your sweet tooth. Variations in the gene seem to explain why some mice prefer sweet flavors more than others do, and the same may be true for humans as well.

    Researchers have known for many years that taste cells on the tongue recognize five distinct tastes—sweet, sour, bitter, salty, and umami (or monosodium glutamate). For sweet, bitter, and umami tastes, this is done with the aid of cell surface proteins called receptors that bind to a particular taste chemical and then send a message to the brain. (Sour and salty directly change the ion flux of taste cells.) Last year, scientists found genes for receptors that recognize bitter and umami tastes. But the sweet receptor has remained elusive, leaving a major gap in our understanding of how humans recognize the spectrum of subtle flavors in the gustatory universe.

    Taste block.

    Extra sugar molecules (red) on an altered sweet receptor may quell a mouse's sweet tooth.


    Now, four research groups have isolated a gene that may code for the sweet receptor. The work is published in the May issue of Nature Neuroscience by Robert Margolskee and co-workers at Mount Sinai School of Medicine in New York City; in the May issue of Nature Genetics by Linda Buck's group at Harvard Medical School in Boston; and in the May Journal of Neurochemistry by a team led by Susan Sullivan of the National Institute on Deafness and Other Communication Disorders. A fourth group led by Gary Beauchamp at the Monell Chemical Senses Center in Philadelphia announced its results on 27 April in Sarasota, Florida, at the annual meeting of the Association for Chemoreception Sciences.

    Taste physiologist Sue Kinnamon of Colorado State University in Fort Collins says that the discovery of the gene “is very exciting. It allows you to really start asking what is the whole pathway that mediates this response.” Understanding that pathway could, among other things, help the food industry develop better artificial sweeteners and help basic researchers identify potential links between taste and dietary health.

    The search for the various taste receptors has been hampered by the fact that taste cells are sparsely distributed on the tongue and are buried within nontaste tissue, making direct receptor isolation difficult. And because researchers haven't been able to culture taste cells or express working receptor proteins in cultured cells of other types, they've lacked a good way to test whether candidate receptor genes respond to particular taste molecules.

    The availability of the sequence of the human genome, however, has provided another way of tracking down stubborn genes. To narrow their hunt, all four teams turned to strains of mice having different sweet preferences. Some strains, called tasters, prefer liquids sweetened with sucrose or saccharin over nonsweetened solutions. Other strains, called nontasters, show no preference for sweets. Previous work by several groups had shown that genetic variations mapped to a region on the mouse genome dubbed Sac underlie this difference.

    To home in on the specific gene involved, the researchers combed the region of the human genome sequence that corresponds to the mouse Sac locus. All four groups pinpointed a single gene, called T1R3 by two of them, which seems to encode a protein with the right characteristics for a sweet receptor. The protein's sequence suggests that it is a G protein-coupled receptor—a member of a family of proteins that span the cell membrane and transmit signals into the cell via so-called G proteins. In addition, the mouse version of the gene, identified with the aid of the human gene sequence, is expressed exclusively in taste cells, and variations in the gene sequence distinguish taster mice from nontaster mice.

    For example, the protein produced by nontaster mice carries a site, not found in the protein of tasters, where sugar molecules could be attached, says Margolskee. He thinks that sugars at that position could prevent receptor interactions needed to activate the internal signaling pathway. That could explain why nontaster mice are indifferent to sweets, although more work is needed to pin down whether sugars do in fact attach at that site, and if they do, whether they affect sweet taste perception.

    There are also intriguing hints that T1R3 variations might underlie differences in human responses to sweet tastes. The Monell group looked at the gene sequence of 30 human volunteers and found that 10% of them carried variations in the sequence. Although the variations are different from those distinguishing taster and nontaster mice, their positions in the sequence suggest that they are in the part of the protein that protrudes from the outside of the cell. If so, they could disrupt binding of the receptor either to a taste compound or to other receptors. Danielle Reed of Monell says their team is now looking to see if the two versions of the gene are linked to variations in sweet sensitivities of human populations.

    Still, although the results strongly suggest that the researchers have finally identified a sweet receptor, they do not yet prove that beyond a shadow of a doubt. That will depend on showing that the gene from taster mice will confer sweet sensitivity, either to nontaste cells in culture or to nontaster animals.


    The First Urban Center in the Americas

    Heather Pringle*
    *Heather Pringle is the author of In Search of Ancient North America.

    Peru's coastal desert, one of the most parched places on Earth, does not look like a particularly inviting spot for early civilization. But to the puzzlement of archaeologists, the region has given birth to a succession of spectacular cultures. Now, new dates from a Peruvian-American archaeological team working at the sprawling inland site of Caral, some 200 kilometers north of Lima, indicate that inland settlements there were even more important early on than most archaeologists had realized. The dates, published on page 723, push back the emergence of urban life and monumental architecture in the Americas by nearly 800 years—to 2627 B.C.—and cast serious doubt on one commonly held view of the relationships between inland and coastal centers in early Peru. But the research suggests an answer to the puzzle of why the desert sites became so prominent early on: Some were easy to irrigate.

    Arid birthplace.

    New dates indicate that Caral in Peru was the first complex society in the New World.


    “It looks like Caral is really the first complex society in the New World historically,” says Jonathan Haas, a co-author of the paper and an archaeologist at the Field Museum in Chicago, who specializes in the rise of early states. “Caral gives us an opportunity to look at the development process.”

    Situated in the middle Supe River Valley, 23 kilometers from the Pacific Ocean, Caral was the architectural wonder of its day in the Americas. At its apogee, it boasted eight sectors of modest homes and grand stone-walled residences, two circular plazas, and six immense platform mounds built from quarried stone and river cobbles. Warrens of ceremonial rooms, which probably served as symbols of centralized religion, crowned the mounds, the largest of which towered four stories high and sprawled over an area equivalent to 4.5 football fields.

    To determine the age of these structures, lead author Ruth Shady Solis, an archaeologist at the National University of San Marcos in Lima, and her associates obtained carbon-14 dates on 18 excavated plant samples from the site. Some were taken from bags woven from short-lived reeds, which the builders used for hauling stones and left inside the mounds. “The team has got very nice dates that we can associate with a specific human event,” such as the building of the mounds, says Brian Billman, an archaeologist at the University of North Carolina, Chapel Hill.

    Exactly what fueled the early construction boom at Caral is still unclear, but the excavators point to a new development in the Americas: irrigation agriculture. After running out of floodplain land in the Supe Valley, farmers turned to Caral, several kilometers away. To grow squash, beans, guava, and cotton there, they would have needed to irrigate their lands—the first people to do so in the Americas. In all likelihood, observes Haas, the geography of the Supe Valley helped greatly in this development. Today, farmers irrigate the area by cutting a shallow channel 2 kilometers upstream to a simple headgate that controls the flow; in most other Peruvian valleys, they must build far longer and deeper channels and construct a series of sophisticated gates.

    As the earliest urban center in the Americas, Caral now casts doubt on a favorite idea of many Andeanists: the maritime hypothesis of the origins of Peru's civilizations. First proposed by archaeologist Michael Moseley of the University of Florida, Gainesville, in 1975, the hypothesis suggests that Peru's rich marine resources—huge schools of fish and shellfish beds—permitted early fishers and foragers to settle along the coast, build elaborate architecture, and develop complex societies, moving inland only later. But Caral is centuries older than any of the early large urban centers outside the Supe Valley. “Rather than coastal antecedents to monumental inland sites,” says archaeologist Shelia Pozorski of the University of Texas-Pan American in Edinburg, “what we have now are coastal satellite villages to monumental inland sites.”

    One of the great challenges now is to explain how this satellite system worked. In all likelihood, the ancient Peruvians moved inland to Caral to expand their diets, adding plant carbohydrates to seafood proteins, and created their elaborate civilization in their new home. Excavations at Caral have turned up abundant fish bones and mollusk shells, and the inhabitants' ancient desiccated feces, notes Haas, “all have anchovy bones in them.” But it is uncertain exactly how Caral's people, living so far from the ocean, acquired all this seafood. One possibility is that they walked half the distance, trading a commodity such as cotton for coastal fishers' surplus. “Cotton was critical for the marine exploitation,” says Haas. “That's what people used to make the nets with.”

    Haas and Winifred Creamer, an archaeologist at Northern Illinois University in DeKalb and co-author of the paper, who is married to Haas, now hope to begin piecing together Caral's economy, examining ancient cotton production along the Peruvian coast during this period and developing a trace-element analysis for cotton fiber grown in different regions. Whether Caral really was an early center of cotton agriculture remains to be seen, but the site's new dates will certainly provoke much debate about the origins of Peru's ancient civilizations.


    How the Brain Understands Music

    Constance Holden

    The essayist Thomas Carlyle called music “the speech of angels.” And indeed, music and language are being found to have quite a lot in common. A brain imaging study in the May issue of Nature Neuroscience confirms that people's brains are finely tuned to recognize “musical syntax,” just as they are verbal grammar. What's more, the researchers found that some of this musical processing goes on in Broca's area, which is chiefly associated with language.

    Physicist Burkhard Maess and colleagues at the Max Planck Institute of Cognitive Neuroscience in Leipzig, Germany, tested responses of six right-handed people with no musical training. Using magnetoencephalography (MEG), an imaging technique that uses supersensitive magnetic field detectors to register electrical activity in the brain, they measured responses to three five-chord sequences concocted by team member Stefan Koelsch, who is also a musician. The first sequence consisted of five chords in the key of C major that ended, following convention, on the tonic (C major) chord. The second and third sequences threw in a wild card: a “Neapolitan” chord that contains two notes not found in the key of C major. When inserted as the third chord in the five-chord sequence, it sounds a bit incongruous. When put in the fifth position, it sounds definitely inappropriate, as the first four chords create the expectation of a resolution on the home (tonic) chord.
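    The stimulus design is easier to picture with a concrete example. The chord spellings below are an assumption for illustration, not the study's actual voicings; the only structural facts taken from the text above are that each sequence is five chords long, that the baseline stays in C major and resolves on the tonic, and that the Neapolitan chord (D-flat major in the key of C) supplies two out-of-key notes at either the third or the fifth position.

```python
# Illustrative five-chord sequences (chord choices assumed; only the structure follows the article).
in_key         = ["C", "F", "G", "F", "C"]    # stays in C major, resolves on the tonic
neapolitan_3rd = ["C", "F", "Db", "G", "C"]   # Neapolitan chord at position 3: mildly incongruous
neapolitan_5th = ["C", "F", "G", "F", "Db"]   # Neapolitan where the tonic is expected: clearly wrong-sounding

for name, seq in [("in-key", in_key),
                  ("Neapolitan 3rd", neapolitan_3rd),
                  ("Neapolitan 5th", neapolitan_5th)]:
    # A D-flat major chord contains D-flat and A-flat, neither of which belongs to C major.
    out_of_key = sum(1 for chord in seq if chord == "Db")
    print(f"{name:>14}: {' -> '.join(seq)}  (out-of-key chords: {out_of_key})")
```

    A listener primed by the first four chords of the last sequence expects the tonic and hears D-flat major instead; that violated expectation is the event whose magnetic signature the MEG traces pick up.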

    Each sequence produced a different MEG pattern, with the largest difference seen between the in-key sequence and the one ending with the Neapolitan chord. The in-key chords were mainly registered in the primary auditory cortex, located in the temporal lobes. But the incongruous set lit up areas above and in front of the temporal lobes, in the speech area known as Broca's on the left and its corresponding region on the right. The data suggest that while, just as with speech, the auditory cortex receives incoming sounds, it is Broca's area and its right-hemisphere mate that are in charge of the trickier job of making sense of them. Adds Koelsch, “We found that musical syntax is not only processed in the same area [as speech] but also with the same time-course of neural activity.” (That is, brain responses to incongruities peaked at about 200 milliseconds after the stimulus, as they did in an earlier study using verbal incongruities.)

    Because the effects occurred in subjects with no musical training, the study supports existing evidence that the brain has an “implicit” ability to apply harmonic principles to music, the authors write. Overall, the effects of the music were more pronounced in the right hemisphere than the left, where more speech functions are headquartered. “Currently, we cannot prove that the processes underlying language and music processing … are the same,” says Maess. Nonetheless, there is “still more overlap than we thought.”

    “Studies such as this teach us to be cautious when talking about ‘language areas’ in the brain,” says Aniruddh Patel of The Neurosciences Institute in San Diego. He says the work goes against “a prevailing view that language is ‘modular’ [and draws] on special mental operations and brain regions that are not used by any other domain.” Because linguistic and musical syntax are different, he notes, a demonstration that brain regions known to be involved in one are also involved in the other “raises the question of what these brain areas are really doing.”

    Even more precise probes will be required to sort that out. Neuroscientist Robert Zatorre of McGill University in Montreal says the Maess team has successfully demonstrated the “physiological trace” of musical syntax processing and shown that it overlaps with language responses. But whether they are “part of the same [system] or independent is not yet certain.”

    The Mitochondrion: Is It Central to Apoptosis?

    Elizabeth Finkel*
    *Elizabeth Finkel writes from Melbourne, Australia.

    Researchers studying apoptosis are divided into two camps. At issue: whether the mitochondria or enzymes called caspases are primary in triggering programmed cell death

    Peer down a microscope to watch cells undergoing programmed cell death and you're in for an awe-inspiring sight. Programmed cell death—or apoptosis as it's also called—unfolds like a well-planned military operation. Within minutes, cells collapse their structural supports, digest and package their contents into membrane-bound parcels, and disappear without a trace into the bowels of scavenger cells. Because apoptosis is key to normal life—in the developing embryo it's needed to cull excess cells, for example, and later in life it eliminates damaged cells—researchers have been working feverishly to piece together the molecular circuitry that underlies this highly choreographed death program.

    Within the past few years, however, the apoptosis community has found itself split into two competing camps with divergent views of just what this circuitry looks like. The dispute concerns what role the cell's mitochondria play in apoptosis.

    These small, membrane-bound structures are best known as the source of the reactions producing most of the cell's energy. But a great deal of recent evidence has led many researchers to think that in most cases the mitochondria make the key decisions about whether a cell lives or dies. According to this view, when the stress signals produced by, say, lack of necessary growth factors or exposure to ultraviolet (UV) light reach a critical threshold, the mitochondria either rupture or leak, ensuring their own demise and releasing a cocktail of factors that trigger protein-splitting enzymes called caspases. The activated caspases then rapidly cleave proteins in the cell's internal skeleton, membranes, and nucleus to bring about the characteristic hallmarks of apoptosis.

    But despite a tide of data suggesting that mitochondria have their fingers on the cell's self-destruct button, a small minority of researchers argues that they are secondary agents of the cell's demise. The primary apoptosis signals, they believe, route directly to the caspases without first involving the mitochondria. Once those caspases are activated, they may target the mitochondria along with everything else in the cell, leading to further caspase activation. But these mitochondria-linked caspases merely “facilitate cellular dismemberment,” says one proponent of this view, Vishva Dixit of Genentech Inc. in South San Francisco. They do not trigger it.

    The debate is hard to resolve because the events of apoptosis unfold too rapidly to determine their temporal sequence, and the results from techniques such as knocking out the genes for various components of the apoptosis pathways have been inconclusive, providing fuel for both sides. “It's a funny situation,” admits apoptosis researcher Pierre Golstein of the Center of Immunology of Marseille-Luminy, France. “We don't know whether the mitochondrion is at the heart of the mechanism or a lateral process. Right now I'm just counting the votes on each side.”

    The question of who plays the leading role in apoptosis is more than academic. Defects in apoptosis contribute to many major diseases. Too much apoptosis has been linked to nerve cell loss in conditions such as stroke and Alzheimer's disease, and too little to cancer and autoimmune disease. And knowing exactly how to turn apoptosis on or off is key to developing drugs to treat diseases in which it goes awry. “This is a very important debate,” says Jerry Adams of the Walter and Eliza Hall Institute of Medical Research in Melbourne, Australia. “The central question in the apoptotic field is still unanswered. A lot hinges on that.”

    Center stage in apoptosis?

    In this view, numerous cell-death stimuli work through the mitochondrion. They cause pro-apoptotic members of the BCL-2 family, such as BAX and BAK, to either open new pores or modify existing channels in the mitochondrial membrane, releasing cytochrome c and other proteins that lead to caspase activation and cell death. BCL-2 itself, which is antiapoptotic, somehow blocks the pore or channel opening.


    The mitochondrial connection

    Until about 5 years ago, apoptosis researchers paid little attention to mitochondria. Early work, particularly in the roundworm Caenorhabditis elegans, did not point to their involvement, and researchers also found what appeared to be a direct pathway to caspase activation in mammalian cells. This is the Fas death receptor pathway, in which substances such as the so-called Fas ligand bind to a cell surface protein called Fas. Activation of Fas then triggers caspase-8 inside the cell.

    But in 1996, a startling discovery by Xiaodong Wang's group at the University of Texas Southwestern Medical Center (UT Southwestern) in Dallas began to shift the focus to the mitochondria. Wang and his colleagues were investigating how human cells keep tabs on one of the cell's 15 or so other caspases, caspase-3, dubbed an “executioner” caspase because of the major role it plays in bringing about cell death. They found that the enzyme's activity is unleashed by cytochrome c, a mitochondrial protein then thought to be dedicated solely to energy production. “It took us a long time to believe [that finding],” recalls Sharad Kumar of the Hanson Institute in Adelaide, Australia, who studies apoptosis in the fruit fly.

    Subsequent work by Wang's group showed just how cytochrome c plugs into the apoptosis circuitry. Like many other protein-degrading enzymes, caspase-3 is made as a “proenzyme”: its active segments are buried within the precursor and need to be clipped out by another enzyme. For procaspase-3, that activation step is performed by caspase-9, and researchers found that caspase-9 requires two other proteins to make it function: Apaf-1, for apoptosis-activating factor, and cytochrome c, which floods out of the mitochondria during apoptosis. The researchers further found that known apoptosis inhibitors like the protein BCL-2 could stop that release.

    Exactly how cytochrome c gets out of the mitochondrion is unclear, but 1996 work by Stephen Fesik of Abbott Laboratories in Chicago, Illinois, and his colleagues provided a clue. BCL-2 is part of a large protein family whose members divide up into opposing factions. Some, like BCL-2 or the closely related BCL-XL, block apoptosis; others, like BAX or BAK or the BH3-only proteins (so-called because they carry only the third domain of the four that characterize BCL-2 family members), are potent triggers of cell death.

    Fesik and colleagues crystallized BCL-XL and found that its structure resembles that of diphtheria toxin, a protein that kills cells by punching holes in their membranes. Some researchers now think that apoptosis-triggering BCL-2 family members release cytochrome c from the mitochondria by creating pores in the mitochondrial outer membrane, while antiapoptotic members somehow preserve mitochondrial integrity.

    Other researchers have proposed different exit routes for cytochrome c, ranging from existing mitochondrial pores that can be modified by pro-apoptosis BCL-2 family members to the complete rupture of the mitochondrial outer membrane. But although there is vigorous disagreement about exactly how cytochrome c gets out, the majority agrees that it is the sudden permeability of the mitochondrial outer membrane that triggers the decision to die.

    Within the past year or so, that view has been buttressed by findings that cytochrome c is not the only pro-apoptotic protein released by mitochondria. Guido Kroemer's group at France's national research agency CNRS in Villejuif found that they also release three caspases and a protein known as apoptosis-inducing factor (AIF) that may play a critical role in the programmed cell death that sculpts the early embryo. And in work reported in the 7 July 2000 issue of Cell, Anne Verhagen in David Vaux's lab at the Walter and Eliza Hall Institute and Chunying Du in Wang's lab independently showed that when mitochondria release cytochrome c, they release another protein, known either as Diablo or Smac. This protein neutralizes a set of caspase inhibitors known as IAPs (for inhibitors of apoptosis), thereby freeing the caspases to do their work.

    Other players make their way from their primary residence—the nucleus—to the mitochondrion in response to apoptosis signals. As reported last August by Hui Li in Xiao-kun Zhang's lab at the Burnham Institute in La Jolla, California, these include TR3, a protein previously known only as a “nuclear receptor” that regulates gene activity in response to steroid hormones. Li found that TR3 appears to quit the nucleus and relocate to mitochondria, where it causes the release of cytochrome c (Science, 18 August 2000, p. 1159). And Natalie Marchenko of the State University of New York, Stony Brook, found something similar for the p53 protein, the sentinel known to make the earliest decisions about cell life and death. It, too, was supposed to exert its effects by regulating genes, but Marchenko showed that p53 also localizes to mitochondria, where it can induce apoptosis more directly.

    Even the Fas death receptor pathway, which was once thought to require only the activity of caspase-8, has recently been shown to require the help of the mitochondrial pathway. Stanley Korsmeyer's team at Washington University School of Medicine in St. Louis found that in liver cells caspase-8 cleaves a protein called BID. One of the products thus released then moves to the mitochondria, where it works through BCL family members BAK and BAX to release cytochrome c (see p. 727).

    Or a side show?

    In the contrary view, the mitochondria come into play only secondarily, once the initiator caspases have been activated. Here Bcl-2 blocks the activity of CED-4, or its mammalian equivalent, which has not yet been found.


    Another view

    Despite the wealth of evidence putting the mitochondria at the center stage of cell death, a small group of researchers remains unconvinced. These include Genentech's Dixit, and Vaux and Andreas Strasser of the Walter and Eliza Hall Institute. They point out that the evidence for a primary role for the mitochondrion comes from studies of broken cells or cells grown in culture dishes. In these highly unphysiological conditions, says Vaux, “virtually everything in the Sigma [chemical company] catalog has been shown to cause apoptosis and cytochrome c release.” He and the other dissenters think that more meaningful results can only come through studies in whole animals, such as C. elegans.

    Researchers, particularly Robert Horvitz and his colleagues at the Massachusetts Institute of Technology (MIT) in Cambridge, have worked out the molecular events leading to apoptosis in C. elegans—and they are not triggered by permeabilizing the mitochondria. The worm's apoptosis circuitry involves just four main proteins. CED-3 (for cell death 3) is the worm's executioner caspase. It is activated by a second protein, CED-4, which is normally kept in check by the protein called CED-9. Apoptosis triggers, like the protein EGL-1, relieve that inhibition by trapping CED-9, freeing CED-4 to activate CED-3.
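    The worm circuit is a chain of inhibitions that can be hard to keep straight in prose. The snippet below is nothing more than a boolean restatement of the relationships just described (EGL-1 traps CED-9, CED-9 restrains CED-4, free CED-4 activates CED-3); it is a reading aid, not a model of the biochemistry.

```python
# Boolean restatement of the C. elegans apoptosis circuit (a reading aid, not a biochemical model).
def ced3_activated(egl1_present: bool) -> bool:
    ced9_active = not egl1_present   # EGL-1, when present, traps CED-9
    ced4_free = not ced9_active      # active CED-9 holds CED-4 in check
    return ced4_free                 # free CED-4 activates the executioner caspase CED-3

print(ced3_activated(egl1_present=False))  # healthy cell: CED-9 restrains CED-4 -> False
print(ced3_activated(egl1_present=True))   # death trigger: the double negative lifts -> True
```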

    The worm proteins, moreover, have turned out to be similar to some identified in mammalian apoptosis. For example, CED-9 is structurally similar to the human apoptosis inhibitor BCL-2, and in 1993 Vaux, who was then at Stanford University, and his Stanford colleague Stuart Kim showed that the human Bcl-2 gene, when introduced into the worm, prevents the apoptosis that normally kills cells during the worm's embryonic development. That discovery led researchers to believe that the apoptosis circuitry would be similar in the two organisms, and although many dropped that idea in the face of the mitochondrial results, Vaux and a few others have remained steadfast.

    They got some support about a year ago from Toshiyuki Nakagawa and colleagues at Harvard Medical School in Boston. They showed that when the cell's protein production and transport network, known as the endoplasmic reticulum, is under stress, it activates caspase-12 and triggers apoptosis with no involvement of mitochondria.

    Further support comes from studies of caspase inhibitors. If activation of the enzymes is the primary event in apoptosis, inhibiting them should prevent both cell death and the mitochondrial events. And that's exactly what researchers have found with inhibitors in the fruit fly.

    The final result.

    Mitochondria may—or may not—play a central role in triggering the apoptosis of these mouse myeloid cells (above).


    John Abrams's group at UT Southwestern showed that during the normal apoptosis of developing fly oocytes, changes occur to cytochrome c within the mitochondria that might allow it to access and activate the fly's version of the caspase activator Apaf-1. But caspase inhibitors eliminated these changes, suggesting that caspase activation is the primary event in fly apoptosis.

    Furthermore, Herman Steller at MIT showed that the protein p35, a caspase inhibitor carried by certain insect viruses, prevents cell death in the retinas of flies with a condition similar to retinitis pigmentosa, a hereditary form of human blindness in which the retinal cells of the eye die abnormally by apoptosis. The flies even regained their vision. “If pressed for an answer, I would say that with respect to the main triggers of developmental apoptosis in flies, caspase action does precede events at the mitochondria,” Abrams says.

    Caspase inhibitors have produced much more equivocal results in mammals, however. In cultured cells exposed to various death triggers such as UV light or the chemical staurosporine, the inhibitors do stop cells from executing the typical apoptosis program. Even so, the mitochondria still release cytochrome c and the cells slowly die—a result that supports the pro-mitochondrial contingent.

    The caspase camp counters this finding in two ways: The inhibitors used may not effectively block all of the cell's caspases, and cultured cells bombarded by chemicals and radiation may not behave the way cells in the living organism do. A more telling result, they argue, is the fact that caspase inhibitors like the synthetic peptide ZVAD-fmk reduce the area of tissue damage by as much as 50% in animals subjected to experimental strokes or heart attacks. “If we can keep [the neurons or heart cells] alive during the ischemic storm, they appear to be fine coming out the other end,” says Don Nicholson of Merck Frosst Canada Inc. in Pointe Claire-Dorval, Quebec.

    Members of the pro-mitochondrial camp maintain, however, that the problems these results pose for their point of view aren't insurmountable. Even though mitochondria may be the apoptosis initiators, they could still need caspases to finish the job. “All you need to invoke is that the caspases are providing a positive amplification loop,” says Michael Hengartner, an apoptosis researcher at Cold Spring Harbor Laboratory on New York's Long Island. If so, then the mitochondria, despite having lost their cytochrome c, may be able to recover if no caspases are around. Indeed, Jean-Claude Martinou's group at the Serono Pharmaceutical Research Institute in Geneva, Switzerland, has evidence for this idea in cultured sympathetic neurons.

    Attempts to clarify the issue by inactivating, or “knocking out,” various genes in mice to see how that affects apoptosis have also provided fuel for both camps. Researchers have separately knocked out the genes for Apaf-1 and other proteins in the caspase 3 activation pathway. All of the resulting animals turned out to show some degree of developmentally programmed apoptosis, although it wasn't completely normal. And in work reported just 4 weeks ago in the 29 March issue of Nature, a team led by Nicholas Joza and Josef Penninger of the Amgen Research Institute in Toronto, Canada, found that whereas AIF, a mitochondrial protein, is needed for the programmed cell death that sculpts early mouse embryos, neither caspase-9 nor Apaf-1 is required. For the mitochondrial camp, this is evidence that caspases are dispensable for programmed cell death.

    But the caspase proponents maintain that the residual apoptosis might simply be due to the presence of caspases other than caspase-3 or caspase-9, including some that have not yet been identified. Strasser of the Walter and Eliza Hall Institute, in unpublished work, has evidence for such caspases. He found that in otherwise normal mice, white blood cells lacking Apaf-1 undergo normal culling during embryonic development. The same was true for white cells lacking the genes for caspase-3 or caspase-9. “It's definitely classical apoptosis; we see all the features of caspase activation,” says Strasser.

    If that classical apoptosis is being brought about by alternative caspases, then they would presumably need an activator other than Apaf-1. Strasser and others propose that this could be something like the worm's CED-4. Washington University's Korsmeyer describes that as a “reasonable hypothesis,” but he adds, “a number of us have done a lot of screening, looking for such a molecule, and have not found it.”

    Still, many gaps remain to be filled in the apoptosis circuit diagram, leaving both sides room for hope. “Having the human genome sequence in hand should speed this search [for new caspases or a CED-4 equivalent]. Time will be the best arbitrator; when the dust settles I'm confident our hypothesis will prevail,” says Dixit. But so are the members of the pro-mitochondrial camp.

    Indeed, given the complexity of apoptosis, which can be triggered by many different stresses acting through many different internal cell signaling pathways, it may well turn out that cells can employ different mechanisms in different circumstances. “It's not going to be black or white,” predicts Hengartner. “And a good thing too; this way everyone will be right.”


    Studying Humans--and Their Cousins and Parasites

    1. Ann Gibbons

    KANSAS CITY, MISSOURI—More than 1000 researchers presented 500 papers at the 70th Annual Meeting of the American Association of Physical Anthropologists here, from 28 to 31 March. The talks covered diverse aspects of human and ape evolution, including a genetic study of primates, new dates on an important site for early human fossils, and the evolution of a malaria resistance gene.

    The Least Diverse Ape

    On our planet of the apes, humans are by far the most successful primate, easily outnumbering all other great apes. But despite our stunning reproductive success, we have very little genetic variation compared with other primates, according to a study presented in Kansas City. When researchers compared DNA sequences at the same six regions in the genomes of humans, chimpanzees, bonobos, and gorillas, they found that humans had far less nucleotide variation at almost every region. In only one case—on the Y chromosome of gorillas—were the apes genetically less diverse than humans.

    The new study and similar work by geneticists in Germany are the first to look at a wide range of DNA in all the great apes, and they help resolve earlier, contradictory reports (Science, 5 November 1999, p. 1159). The findings confirm what some previous studies had suggested: We are all descended from a small founding population whose offspring multiplied rapidly in the past 200,000 years. The lack of diversity in humans is now so striking that it strongly supports the theory that our ancestors survived a “bottleneck” that quickly winnowed a larger, genetically diverse population into a smaller, homogeneous one. “This is startling; it shows that the bottleneck we went through must have been profound,” says Linda Vigilant, a molecular anthropologist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany.

    Studies have shown for a decade that chimpanzees have three to four times as much genetic diversity in their maternally inherited mitochondrial DNA (mtDNA) as humans do. But are chimpanzees exceptionally diverse, or humans exceptionally alike? No one had sequenced enough DNA from other apes to find out, until a study reported in February in Nature Genetics. Geneticists Svante Pääbo and Henrik Kaessmann of the Max Planck Institute for Evolutionary Anthropology compared a 10,000-base-pair stretch of noncoding DNA on the X chromosome in humans, chimpanzees, gorillas, and orangutans. They found that humans had not only much less genetic variation than all other great apes, but also had relatively few of the mutations that accumulate in noncoding regions of the genome at a relatively steady rate. That's a signal that humans underwent a major expansion starting 190,000 to 160,000 years ago, says Pääbo.
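
    For readers who want a feel for the arithmetic behind such statements, the standard molecular-clock calculation is sketched below. The mutation rate, sequence length, and number of differences are illustrative placeholders, not values from the Kaessmann and Pääbo study.

```latex
% Illustrative molecular-clock arithmetic (all numbers are placeholders).
% If two copies of a neutrally evolving, noncoding region of length L differ
% at k sites, and the per-site, per-generation mutation rate is mu, the time
% back to their common ancestor is roughly
\[
  t \approx \frac{k}{2\,\mu\,L}\ \text{generations.}
\]
% Example: k = 10 differences over L = 10^4 bp with mu = 10^-8 and 20-year
% generations gives t ~ 5 x 10^4 generations, or about one million years.
% Unusually few differences across many loci, as reported for humans, imply
% coalescence times far shallower than this, the signature of a recent
% expansion from a small founding population.
```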

    The lucky one.

    This gorilla, Titus, fathered almost all the offspring in his group, giving them all the same Y chromosome.


    Meanwhile, two other teams of molecular anthropologists had joined forces to study six regions in both nuclear DNA and mtDNA, in humans, chimpanzees, bonobos, and western and eastern gorillas. In geneticist Ken Kidd's lab at Yale University, graduate student Michael Jensen-Seaman, now at the Medical College of Wisconsin in Milwaukee, and geneticist Amos Deinard focused on four noncoding loci in nuclear DNA inherited from both parents. They found that chimpanzees had the most diversity, whereas humans had the least.

    At the same time, graduate student Tasha Altheide, in evolutionary geneticist Michael Hammer's lab at the University of Arizona in Tucson, focused on regions of the paternally inherited Y chromosome and mtDNA. Again, she found that humans had the least variation in mtDNA and little in their Y. But, surprisingly, gorillas had the most variation in mtDNA—and none in the Y.

    That exception doesn't shake the pattern of overall low human diversity; rather, it's a clear reflection of gorilla mating habits, gorilla experts say. Gorillas have a haremlike social structure, explains Yale primatologist David Watts: “Relatively few gorilla males in each generation contribute genes to the next one.” Indeed, another paper at the meeting by Brenda Bradley, a graduate student in Vigilant's lab, documented this pattern in mountain gorillas at the Karisoke Research Center in Rwanda. The group had more than one adult male, as is common, but the DNA showed that one male, Titus, had fathered all but one of the 10 offspring. “This is a nice reflection of the effect of the mating system on genetic diversity,” says geneticist Lynn Jorde of the University of Utah in Salt Lake City.

    When it comes to humans, however, mating behavior can't explain the lack of variation at virtually every genetic region studied. Thus, the new data support the idea that a relatively small number of breeding humans survived a bottleneck (Science, 6 January 1995, p. 35), perhaps caused by a speciation event, disease, or changes in climate and habitat. Surviving populations then expanded rapidly but carried no more genetic variation than their ancient founders. The end result: “We really look like one subspecies of chimpanzee that has spread and taken over the world,” says Pääbo.

    New Old Dates for Humans in Java

    When scientists announced in 1994 that an early human, Homo erectus, had lived in Java, Indonesia, as early as 1.6 million to 1.8 million years ago—a million years earlier than expected—anthropologists were stunned (Science, 25 February 1994, p. 1118). Because H. erectus arose in Africa only 2 million years ago, the dates implied that these early humans had spread 10,000 kilometers long before anyone thought they had even ventured out of Africa. Not surprisingly, other researchers were cautious about rewriting human colonization history and raised serious questions about the dates.

    Now, another team of researchers has redated the same fossil beds at one of the Java sites and also finds that H. erectus was there 1.5 million years ago. In Kansas City, the team concluded that nearly all the H. erectus fossils found in the famed Bapang Formation of Sangiran are at least 1 million years old, with some dating back 1.5 million to 1.6 million years. H. erectus was there “for at least a half-million years, beginning 1.5 million years ago,” say paleoanthropologist Russell Ciochon and geoarchaeologist Roy Larick of the University of Iowa in Iowa City, co-authors of a report in the 24 April Proceedings of the National Academy of Sciences.

    But although many researchers are impressed with the team's 3-year geologic study, no one is yet rewriting textbooks. Geochronologist Carl Swisher of Rutgers University in New Brunswick, New Jersey, is pleased his radiometric dates have been confirmed but admits: “There are issues that still need to be addressed to put this thing to rest.”

    Ever since Dutch anatomist Eugene Dubois found the first specimen of H. erectus on the banks of Java's Solo River in 1891, the island has been a treasure trove of early human fossils. Local villagers have found more than 80 H. erectus fossils in the Sangiran Dome—an exposed bluff between two volcanoes in central Java—alone. But these fossils have been notoriously difficult to date, because it's hard to pinpoint where local people found them, and because they may have been moved by water or volcanic activity.

    The new team worked with Indonesian geologists to put the fossils in geological context, then mapped and dated the sedimentary layers—“comprehensive” work that is well-regarded by other geologists, says Marco Langbroek, an archaeologist at Leiden University in the Netherlands. The team then sampled volcanic pumice at five sites plus the one Swisher dated, just above the layer where fossils were found. They sent the samples to geochronologist Matthew Heizler of the New Mexico Bureau of Mines and Mineral Resources in Socorro, who dated hornblende crystals in the pumice by measuring their argon-40/argon-39 ratios. His dates matched both Swisher's 1994 dates and new, unpublished argon/argon dates.
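
    As a rough guide to how such numbers are obtained, the standard argon-40/argon-39 age equation is given below. The decay constant is a textbook value, and no measured ratios from the Sangiran study are implied.

```latex
% Standard 40Ar/39Ar age equation (textbook form; not the Sangiran data).
\[
  t = \frac{1}{\lambda}\,
      \ln\!\left(1 + J\,\frac{^{40}\mathrm{Ar}^{*}}{^{39}\mathrm{Ar}_{K}}\right)
\]
% lambda ~ 5.54 x 10^-10 per year is the total decay constant of 40K,
% 40Ar* is radiogenic argon, 39Ar_K is argon produced from 39K during
% neutron irradiation of the sample, and J is fixed by co-irradiating a
% mineral standard of known age alongside the unknown.
```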

    Homo erectus's haunt.

    Sangiran is full of fossils but hard to date.


    Although no one doubts that the two groups got the right age for the volcanic rock, the new team faces the problems that have plagued others in Java. Namely, how much time passed between the formation of the dated volcanic layer and the deposition of the fossils, and was the old pumice transported into younger, fossil-bearing layers? Langbroek also notes that the results are at odds with the dates of two tektites—bits of rock melted to glass by a huge meteor impact 800,000 years ago—from the fossil beds.

    Larick and Heizler respond that their dates are in proper stratigraphic order, with old dates below younger ones, making it “improbable that the pumice is reworked.” Furthermore, the tektites were found and analyzed more than 20 years ago, and some researchers question whether they really came from fossil-bearing layers—and indeed whether they are tektites at all. “The ever-increasing number of early radiometric [dates] calls the published context of these two artifacts into question,” says Larick.

    All the same, the authors will need “tektite-proof helmets” to prove that H. erectus set foot in Java 1.5 million years ago, says Langbroek. He concludes: “We need more than a new set of radiometric dates to clear the problems at Sangiran.”

    The Evolution of Malaria Resistance

    Malaria kills between 1 million and 3 million humans, mostly children, every year, making it the leading cause of death from an infectious disease (Science, 20 October 2000, p. 428). It has plagued humans for at least 3000 years: an Egyptian mummy of that age was entombed with malaria antigens in its blood. But scientists still don't know when the deadliest form of malaria—Plasmodium falciparum—arose and first spread in humans. “Studies of the malaria parasite seem to show that it's been around a long time, since the split from chimpanzees,” says biological anthropologist Jonathan Friedlaender of Temple University in Philadelphia. “But when did it become this ubiquitous killer in humans?”

    Now, in a paper at the Human Biology Council's meeting in Kansas City, an international team of geneticists has examined humans' evolutionary response to malaria to help answer that question. By tracing two genetic mutations that give people anemia but also confer resistance to malaria, the team concludes that the disease first began to have a severe effect on humans in the past 11,000 years, says lead author Sarah Tishkoff of the University of Maryland, College Park. The paper offers “the first estimate for actual dates for the occurrence of human genetic mutations associated with malaria protection,” says Friedlaender.

    The work confirms, in part, a long-standing theory that the disease first hit humans severely when agriculture was introduced in the past 10,000 years, although mild forms of malaria have existed since our ancestors split from apes. The new work also pushes back the date for the initial spread of P. falciparum in western Africa by several thousand years, from 2500 to 4000 years ago to a time 12,000 to 7000 years ago, when climate changes in sub-Saharan Africa triggered changes in human lifestyle.

    Several groups have looked for clues to the origins of malaria by tracing the genetic evolution of the malaria parasites—P. falciparum and P. vivax—or the Anopheles mosquitoes that transmit them, but Tishkoff's team examined the human genome. They zeroed in on a common genetic disease—glucose 6-phosphate dehydrogenase (G6PD) deficiency, which affects 400 million people. Those with the deficiency have anemia, but they also run roughly half the risk of severe malaria that unaffected people do.

    The team focused on two of the many variants of G6PD deficiency—the A- variant, found only in Africa, and the Med variant, found in the Mediterranean, the Middle East, and India. Both variants are the result of a single nucleotide substitution in the G6PD gene, and both are found in strong association with certain microsatellite markers in the noncoding region of the gene. This association and low variation indicated to Tishkoff that the variants were both independent and young, as otherwise the microsatellite association would have broken down over time.

    The age of the variants was calculated by population geneticist Andrew Clark of Pennsylvania State University, University Park. He counted mutations at the linked microsatellites and converted them into ages, taking into account recombination, selection pressure, and the microsatellites' mutation rate. After also determining the frequency of the mutations in different populations, the team concluded that the A- variant arose in Africa within the past 3840 to 11,760 years, and the Med allele arose between 1600 and 6640 years ago in the Middle East or Mediterranean.
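
    Clark's full analysis is a likelihood calculation, but a simplified moment estimate conveys the logic. The proportion, rate, and generation time used below are illustrative assumptions, not the study's values.

```latex
% Back-of-the-envelope allele-age estimate from haplotype decay
% (illustrative numbers only).
% If a fraction P of variant-bearing chromosomes still carries the ancestral
% allele at a linked microsatellite, and r is the per-generation probability
% that recombination or microsatellite mutation breaks that association, then
\[
  P \approx (1 - r)^{t}
  \quad\Longrightarrow\quad
  t \approx \frac{\ln P}{\ln(1 - r)}\ \text{generations.}
\]
% Example: P = 0.9 and r = 5 x 10^-4 give t ~ 210 generations, or roughly
% 5000 years at 25 years per generation.
```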

    Both mutations then spread rapidly through regions where malaria is endemic, despite the fact that they cause anemia. That shows that the mutations were strongly favored by natural selection, suggesting that malaria suddenly had a severe effect on humans at these times, says Tishkoff.

    The dates for the A- variant fit loosely with a well-known hypothesis by geneticist Frank Livingstone of the University of Michigan, Ann Arbor, who proposed in 1958 that sickle cell anemia and malaria took hold when farmers first used iron tools to clear forest and live in dense settlements near mosquito breeding grounds. But Livingstone's dates were just 2500 to 4000 years ago, more recent than Tishkoff's dates. Tishkoff consulted Alison Brooks, an expert on African prehistory at George Washington University in Washington, D.C., who noted that archaeological evidence has been accumulating for a dramatic lifestyle change in Africa starting 12,000 years ago, when a climate shift transformed the Sahara from an arid wasteland into a green savanna with many lakes and ponds. Africans fished and herded animals and for the first time moved into denser communities on the lakeshore—next to the breeding grounds of Anopheles mosquitoes. Conditions were perfect for the parasite. Presumably in response to the disease's devastating effect in this environment, the A- mutation arose and spread rapidly—a vivid example of natural selection in action, says Tishkoff.

    The Med mutation came later in the Middle East or Mediterranean, perhaps in response to the more recent spread north of the deadliest malaria, P. falciparum. Interestingly, the dates for the rapid spread of the Med mutation coincide with the exploration of Greeks and Macedonians, such as Alexander the Great. “It fits beautifully,” says Tishkoff. If the dates hold, she says—and at the moment they have large margins of error—“this is one of the few cases where you can tie in genetics, history, and archaeology.”


    Bulmahn Is Bullish on Science Reforms

    1. Robert Koenig,
    2. Gretchen Vogel

    Edelgard Bulmahn wants to change how the country manages its research institutions, but she's facing vocal opposition from many academics

    BERLIN—Ambitious and controversial, Edelgard Bulmahn has been a major force in German science and higher education since becoming research minister in 1998. A member of the “Red-Green” coalition government, Bulmahn has proposed an overhaul of Germany's university rules—seeking merit pay and “junior professorships” that would free young scientists to pursue independent research—that has polarized the academic community (Science, 6 April, p. 30). She has also pushed efforts to reverse Germany's brain drain by beefing up fellowship programs and other incentives to attract expatriates and woo international scientists.

    But Bulmahn, a political scientist and former member of Parliament, has not restricted herself to academic reforms. She has waded into the debate over embryonic stem cells, emphasizing the use of adult stem cells for now. Bulmahn also is the key negotiator in trying to convince Bavarian state officials to convert a new research reactor to a uranium fuel that could not be diverted by terrorists for making nuclear weapons.

    In a 9 April interview with Science in her Berlin office, Bulmahn discussed these and other topics in laying out her vision for German research. What follows has been edited for clarity and length.

    Correcting the record.

    Bulmahn says opponents “have distorted” the government's proposed reforms.


    Science: On becoming minister, you set some ambitious goals to reform the university system and the science system as a whole. In what areas have you made the most progress?

    Bulmahn: We have started to modernize our research system to handle today's challenges. … We need to give our young scientists and scholars more and better opportunities to do their research independently within the framework of their universities or research institutions. … We haven't finished everything we started, but that is always the case if you want deep reforms. … Science and research have been priorities of this government. We have increased the budget by 12.5% in the last 2 years, even while the overall federal budget has been decreasing.

    Science: More than 3700 professors recently signed a petition complaining about your proposed university reforms, which would allow a performance-oriented salary system, phase out the post-Ph.D. Habilitation requirement to become a professor, and establish “junior professors.” How do you respond to their concerns?

    Bulmahn: People have become used to this German public-service system after more than 100 years, so of course some people are afraid of these changes. You will always face opposition when you try to change traditional systems.

    In their public arguments, opponents have distorted the reform proposals. … They have used the argument that, in the future, young scientists would get a so-called monthly “starting salary” of DM 8500 [US$4200]. That is not true. The starting salary of a young scientist or young professor would be negotiated by them and the university. … As a young scientist, you could get much more money, depending on your performance and on the success of your research.

    The opponents also said that the Habilitation (a process for becoming an independent researcher) would be forbidden. … That is not true. The Habilitation won't be forbidden, but it will no longer be the major factor needed to become a professor. For example, if you have published papers in Science or Nature, that is a much more reliable measurement of scientific excellence than the Habilitation.

    The opponents don't like the junior professorships. They argue that young scientists—people between the ages of about 32 and 35—still need guidance from an older professor. I think they need the older professors as partners, not as their boss who is telling them what to do. That is one of the major discussion points at the moment. A lot of young scientists support these reforms because they think the traditional system works against young scientists.

    Science: When do you think the Parliament will take up your proposal?

    Bulmahn: This year, and I am very hopeful it will be approved.

    Science: Stem cell research is a hot issue now. What role should the government have in overseeing such research?

    Bulmahn: Germany has an Embryo Protection Law, one of the first worldwide. [The law prohibits interference with an embryo's development unless it benefits the embryo.] We should keep this law, at least for the moment.

    Science: What does the law say about importing embryonic stem cells, which some scientists would like to do?

    Bulmahn: It is not expressly forbidden, so it is allowed. The question is whether it should be funded by public money or not. … This question is being decided by the DFG [basic research granting agency].

    I personally think we should emphasize research with adult stem cells, as we already do now, because this is not ethically problematic. It is too early to decide which way may be the most successful. … My opinion is that we should continue doing experiments with embryonic stem cells from animals and look very carefully at the results.

    Science: Will Germany establish a bioethics council?

    Bulmahn: The Chancellor [Gerhard Schröder] has proposed creating one. … We plan to establish this council in May. It will have an advisory role, giving recommendations, but it also will play an important role in the public discussion. Members will include scientists, theologians, philosophers, lawyers, and representatives from important science organizations. The council will make its recommendations to the government.

    Science: You are the main federal negotiator with Bavaria about the question of converting the fuel used by the FRM-II research reactor in Garching from highly enriched to medium-enriched uranium, to meet nonproliferation concerns. Do you expect a compromise that will allow it to go on line this fall?

    Bulmahn: It depends on the Bavarian government. We are willing to come to such a compromise. And if the Bavarian government accepts the change to medium-enriched uranium, then we can reach a compromise. But we have to set a precise date for the conversion, and that can't be in 10 years.

    Science: Twelve years after the fall of the Berlin Wall, there is still a two-tiered system of payment of scientists, with former East Germans being paid less. When will this disparity be phased out?

    Bulmahn: It is a problem even in this ministry, where former East Germans working here get paid less. This very much depends on [the German states]. As the federal research minister, I can do nothing about it. What I did … is to change the salary system and personnel structure for all scientists in the federal government.

    I can't change it directly by saying they should get the same salaries as the West Germans. But I will try to change it indirectly.

    Science: Are you satisfied with the progress of science in the former East Germany?

    Bulmahn: We have made a lot of progress in research, but we need more progress in the application of that research. That is why we started the InnoRegio program to promote the applications of science. … It is difficult for the researchers to find partners in their regions, so they have to find partners in the western region of Germany.

    Science: What has been done to improve the basic research system outside of the universities?

    Bulmahn: We had a systematic evaluation of research organizations and institutions. One suggestion was to improve the cooperation between universities and our research organizations. The Genome Research Network [which will channel $175 million over 3 years into universities, Max Planck institutes, and national research centers (Science, 6 April, p. 29)] will increase this cooperation. I think that it is absolutely necessary to link those who do the basic research in functional genomics with those who are in the medical field. … In awarding increases, we concentrated especially in a couple of fields: biotechnology (in which we increased genome research, for example, by more than 400% over the last 2 years) and information and communication technology. Within Europe, Germany now has the highest number of biotech companies. With regard to public funding in the field of genome research, we are now second worldwide, behind only the United States.


    Science: The most recent evaluation of the national research centers recommended changing the funding and organizational framework and also extending cooperation with German universities.

    Bulmahn: I have been negotiating with the national research centers—transforming the financial and legal framework—for nearly 18 months. We have almost finished the process, and we have agreed to change it to a “program-oriented” funding system, which was recommended by the Science Council. The funding will be by research programs rather than by institutions. The goal is to increase competition among centers in similar research fields and, also, to increase cooperation. In the future, for example, two or three centers can apply together for a common program. … We will also make it easier for these institutions to spin off new research companies.

    Science: Are there any major problems in the relationship between Germany and the United States with regard to science and higher education?

    Bulmahn: We have good relations in science and research. But there is one major issue: Germany continues to contribute to the scientific strength of the United States by sending it so many talented researchers. I would like to see more of a two-way street, with more American students coming to study here.


    New Trips Through the Back Alleys of Agriculture

    1. Kathryn Brown

    When did hunter-gatherers begin trading the wild life for farming and herding? Creative techniques offer new answers to agriculture's oldest question

    The stuff of early agriculture is garbage. Charred plant seeds, buried animal bones, starch grains stuck to stone tools: These throwaways are treasures in disguise, clues to our earliest farming days. Increasingly, scientists are sifting through the litter to uncover history in new ways.

    Their questions, if not work habits, are simple: When and where did hunter-gatherers begin to settle down, content to grow plants and tend animals at home? Over the past decade, with advances in molecular biology, accelerator mass spectrometry (AMS) dating, and other techniques, the picture of early agriculture has grown richer, like an outlined portrait being colored in (Science, 20 November 1998, p. 1446). Between 10,000 and 5000 years* ago, current theory goes, people in at least seven places, including Mexico, the Near East, and South America, independently domesticated crops and creatures. But big gaps in the historical picture remain.

    To fill in those gaps, scientists are expanding their search in creative ways. Some are taking a closer look at museum collections, previously studied and shelved. Others are exploring tiny plant remains trapped beneath ancient food residue or stuck to excavated tools and teeth. “A lot of the answers are right under our noses, hiding in plain sight,” remarks archaeologist Melinda Zeder of the Smithsonian Institution in Washington, D.C., who points to new data on the domestication of cattle, goats, and maize.

    Boning up

    This week in Nature, for instance, scientists retrace the early days of European cattle—a tale told, in part, by ancient ox bones borrowed from small British museums. Researchers have suspected that cattle were first domesticated in the Near East some 8000 years ago. Early herders then made their way across Europe, bringing cattle along. But did these imported eastern cattle alone give birth to today's European varieties? Or did Europeans catch on to the domestication idea and begin breeding wild oxen about the same time?

    To find out, a team of scientists led by geneticist Daniel Bradley of Trinity College in Dublin first collected mitochondrial DNA (mtDNA) from 392 modern cattle in Europe, Africa, and the Near East. Comparing mtDNA sequences, the researchers found striking genetic similarity between today's predominant Bos taurus cattle in Europe and those in the Near East, suggesting that today's European cows descended from the Near East variety.

    To shore up the evidence, the researchers went knocking on the doors of British museums, looking for fossils of extinct British wild oxen. If Europeans did not domesticate this ox, the team reasoned, its ancient genetic profile would be unique—isolated and starkly different from modern cattle. “It turns out that some small, regional museums in England just happened to house the key material to look at these questions,” remarks co-author Andrew Chamberlain, a biological anthropologist at the University of Sheffield, U.K.

    Chamberlain helped select four wild ox fossils (3700 to 7500 years before present) from the museums. And sure enough, their DNA was distinct. Comparing a 201-base-pair region of wild ox mtDNA, the researchers found eight characteristic mutations that are absent or rare in modern cattle. “This particular wild ox species in Britain has different DNA from all the modern breeds,” says Chamberlain. The team concludes that migrating Eastern herders get full credit for domesticating cattle in Europe, although they caution that paternal cattle DNA is needed for proof.
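
    The comparison itself is straightforward bookkeeping over a sequence alignment; a minimal sketch is below. The sequences are invented placeholders, not the published 201-base-pair control-region data.

```python
# Minimal sketch: find aligned positions where ancient wild-ox (aurochs)
# sequences carry a base never observed in the modern cattle sample.
# The toy sequences below are placeholders, not the real mtDNA data.

def diagnostic_sites(ancient_seqs, modern_seqs):
    """Return positions where every ancient base differs from every modern base."""
    length = len(modern_seqs[0])
    sites = []
    for pos in range(length):
        modern_bases = {seq[pos] for seq in modern_seqs}
        ancient_bases = {seq[pos] for seq in ancient_seqs}
        if not (ancient_bases & modern_bases):  # no base shared at this position
            sites.append(pos)
    return sites

if __name__ == "__main__":
    modern = ["ACGTACGTAC", "ACGTACGTAC", "ACGTATGTAC"]  # hypothetical modern breeds
    ancient = ["ATGTACGAAC", "ATGTACGAAC"]               # hypothetical aurochs bones
    print(diagnostic_sites(ancient, modern))             # prints [1, 7] for these toys
```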

    Getting your goat

    Ancient ox fossils aren't the only collections getting a closer look. The Smithsonian's Zeder recently rediscovered a trove of modern bones that hold the history of another animal: the goat.

    Like cattle, goats were first domesticated in the Near East—specifically, in the eastern Fertile Crescent, a broad arc of land that stretches from present-day Turkey to Iran. The Fertile Crescent hosted the world's earliest agricultural society, but questions dot the calendar. When, for instance, did precocious farmers first bring wild goats home to keep?

    Researchers have hoped to pinpoint changes in goat populations between the late Pleistocene era—when goats were known to be wild in the Fertile Crescent—and the early Holocene, when domesticated goats thrived. The transition from hunting to herding, the logic goes, should be marked by a clear shift in the age and sex of goats killed. Before the animals were domesticated, hungry hunters would primarily kill adult males; later, herders would instead target younger males, keeping females and a few adult males alive much longer for breeding.

    Until now, however, researchers have lacked the fossil collections and analytical techniques to detect this demographic shift. Part of the problem, Zeder says, is that for decades, zooarchaeological collections were like baseball cards: traded between museums, forgotten in dark drawers as history collected dust.

    Working on a project at the Field Museum in Chicago 5 years ago, Zeder realized that she was looking at a pivotal piece of goat chronology: bones from 37 modern wild goats collected in Iran and Iraq during the 1960s and 1970s. This collection provided a benchmark population: a baseline set of measurements that enabled Zeder to distinguish male from female goats, based on bone size. With these measures, Zeder could finally build demographic profiles of goats killed thousands of years ago in the same eastern mountain valleys.

    Working back through collections of ancient bones, she calculated sex-specific age profiles of goats in the Fertile Crescent through time. And in fact, the results did show the striking demographics of herd management in the Zagros Mountains about 10,000 years ago, as Zeder and co-author Brian Hesse, an anthropologist at the University of Alabama, Birmingham, reported last spring in Science (24 March 2000, p. 2254). “There is a distinct kill-off pattern,” Zeder says, “with young males and old females disappearing from the record.”
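
    The demographic logic can be made concrete with a small sketch. The age cutoff and toy data below are illustrative assumptions, not Zeder and Hesse's actual criteria or measurements.

```python
# Toy kill-off profile: under herd management, males tend to be slaughtered
# young while females are kept alive to older ages for breeding.
# The 2-year cutoff and the example data are invented for illustration.

YOUNG = 2.0  # assumed age (years) separating "young" from "adult" kills

def kill_off_profile(remains):
    """remains: list of (sex, age_at_death_in_years) pairs inferred from bones."""
    male_ages = [age for sex, age in remains if sex == "M"]
    female_ages = [age for sex, age in remains if sex == "F"]
    young_male_share = sum(a < YOUNG for a in male_ages) / len(male_ages)
    adult_female_share = sum(a >= YOUNG for a in female_ages) / len(female_ages)
    return young_male_share, adult_female_share

if __name__ == "__main__":
    site = [("M", 0.8), ("M", 1.5), ("M", 1.2), ("M", 4.0),
            ("F", 6.0), ("F", 7.5), ("F", 5.0), ("F", 1.0)]
    ym, af = kill_off_profile(site)
    # A high share of young males together with many females surviving to
    # adulthood suggests herding; hunters tend to target adult males instead.
    print(f"young males among male kills: {ym:.0%}; adult females among female kills: {af:.0%}")
```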

    Today, major museums curate zooarchaeological collections more carefully—and the cattle and goat studies, Zeder says, show that the effort is worthwhile. “Now, with a sharper perspective for early agriculture, we can glean a lot more information from collections we've had for some time,” she adds.

    Seven wonders.

    In at least seven places, scientists say, early farmers and herders independently domesticated plants and animals.


    Sticky situations

    Fresh techniques are also taking root in the field. For years, Dolores Piperno, an archaeobotanist with the Smithsonian Tropical Research Institute in Balboa, Panama, has argued that lowland tropical forests were home to some of the earliest farmers. But evidence has been elusive, because the standard macrofossils—say, squash rinds or maize kernels—quickly rot in the sultry climate. Piperno has turned instead to microfossils, tiny remains of organisms that must be studied under a microscope. As described in a study published last fall in Nature, she and her colleagues dug up stone tools from a Panamanian cave site known as the Aguadulce Shelter, which dates back about 12,700 years.

    In the lab, the team recovered starch grains stuck to the tools' grinding edges. Analyzing the shapes and features of the grains, the researchers concluded that they were remnants of arrowroot, yam, maize, and manioc plants; radiocarbon dating of surrounding soil suggests these remnants are 7800 to 5500 years old. “These starch analyses and other microfossil strategies add empirical data, indicating that lowland tropical forests were indeed crucial to the origin and early spread of domesticated crops,” Piperno says.

    But new techniques bring new caveats. Unlike bigger plant remains, such as whole seeds or bits of corn cobs, individual microfossils cannot be directly dated, leaving their age open to some debate. It's also hard to distinguish wild from domesticated varieties by simply comparing the characteristics of individual starch grains, which naturally vary, says archaeobotanist Lee Newsom of Southern Illinois University in Carbondale. “I think these microbotanical techniques have a lot of potential,” says Newsom. “There are still some complexities, however, that need to be worked out.”

    Creative cooking may offer one solution. Thousands of years ago, villagers often cooked meals in the same pot repeatedly, without scrubbing out their cookware. Yesterday's charred and crusted leftovers built up inside the pot, trapping microfossils from cooked plants beneath the baked-on food. Robert Thompson, an archaeologist at the University of Minnesota, Minneapolis, calls these food residues, which are large enough to be directly dated using AMS, “the ideal untapped environment for studying microfossils.”

    Thompson has used food residues to probe a classic crop mystery: the early days of maize. Most researchers agree that Mexico was the birthplace of maize, a crop first cultivated from the wild weed teosinte. The question is when. In the 13 February Proceedings of the National Academy of Sciences, Piperno and archaeologist Kent Flannery of the University of Michigan, Ann Arbor, reported that AMS-dated maize cobs found in an Oaxaca, Mexico, cave are roughly 6250 years old—the oldest cobs on record in the Americas. This finding calls into question some of Piperno's earlier work suggesting that maize microfossils found in Ecuador could be much older, dating 7000 to 8000 years back.

    To help clear up the clockwork, Thompson teamed up with University of Illinois, Chicago, anthropologist John Staller. In the mid-1990s, Staller excavated a coastal Ecuadorian settlement, La Emerenciana, that included an ancient ceremonial center used for funerals and ritual offerings. At the site, Staller's team discovered sherds, or fragments, of cookware and four partial human skeletons. Thompson and Staller have now reexamined 10 pottery sherds and the teeth of three skeletons in search of maize phytoliths—tiny chunks of leftover plant silica—trapped in food residue.

    Working with the pottery, the researchers scraped away and chemically treated food residues. Three of the pottery sherds did, indeed, reveal phytolith assemblages, or groupings, similar to those found in maize chaff. The researchers also found maizelike phytolith assemblages in tartar on the teeth of two skeletons, evidence that the maize was eaten. After using AMS to date the food residues and doing related experiments, Thompson and Staller concluded that maize was present in coastal Ecuador about 4200 years ago. That timing supports the idea that maize was first domesticated in Mexico, as Piperno and Flannery reported in February, and subsequently spread, fairly recently. The study will be published this fall in the Journal of Archaeological Science.

    Bruce Smith, director of archaeobiology at the Smithsonian Institution and author of the book The Emergence of Agriculture, calls the new study a “significant breakthrough” in tracing the movement of corn from southwestern Mexico into the Neotropics. “Not only can AMS dates on the residues be tightly linked to the maize cob phytoliths, but the phytoliths in turn are from the plant part actually eaten and are recovered from cooking residues, making a very compelling case,” says Smith, who helped provide funds to date the residues.

    Archaeologists agree that our earliest farming days—and decisions—still hide in history's shadows. “Why did [one group] do this, when the people in the next valley didn't?,” asks Flannery. “It almost requires climbing into a time machine and traveling backward to find out.” Until that time machine is invented, scientists will keep digging, as creatively as they can.

    *All dates are calendar years.

    Myriad Ways to Reconstruct Past Climate

    1. Erik Stokstad

    How fast can climate change? How drastic are the swings? What parts of the world will be hit with typhoons or drought? To answer questions like these, climate scientists look at records of past climate. Direct measurements, such as thermometer records, extend back about 2 centuries. Humans have also noted aspects of climate change for about 1000 years in historical records of cherry blossoms in Japan and grape harvests in Europe, and Egyptian hieroglyphs tell of 4000-year-old droughts. For older evidence of past climate—such as the Last Glacial Maximum of roughly 20,000 years ago—a wide variety of records span different times and areas. The list below presents a sample of them and their uses.

    Tree rings

    Information: Temperature and rainfall—even seasonal changes—from ring width and density; records contain patterns of cycles such as El Niño and the Pacific decadal oscillation; ring scars can be used to reconstruct frequency and area of wildfires.

    Resolution: Annual.

    Dating: Counting of rings; radiometric carbon; correlation between trees, as shown in 700-year-old Douglas firs (left) from El Malpais National Monument in New Mexico.

    Comments: Only terrestrial record with widespread and continuous annual resolution. Limited use in tropical and subtropical regions, where trees don't form well-defined rings. Interpretation complicated because tree growth is influenced by many local factors.

    Time range: Typically 500 to 700 years ago to present. In a few cases, 11,000 years ago to present. One 1200-year record extends back to 50,000 years ago.

    Pollen

    Information: Shifts in vegetation patterns can reveal temperature and moisture.

    Resolution: Typically 50 years, depending on deposition rate, down to subannual in some places.

    Dating: Radiocarbon in lake sediment or wetlands; volcanic ash layers or oxygen isotope correlation in marine sediments.

    Comments: Pollen grains such as this Tilia (left) are very good for revealing temperature. Useful only as far back as analogies to modern vegetation hold. Can work on a variety of scales from microclimate of a small lake to average conditions over an entire continent.

    Time range: Present to several million years ago.


    Landforms

    Information: Many aspects of past climates can be traced in the shape of the landscape. The extent of glaciers and ice sheets, for example, is revealed by erratic boulders, rounded valleys, or long piles of gravel and rocks, called moraines, like these (left) from the Last Glacial Maximum of New Zealand. Sea level can be reconstructed from ancient wave-cut terraces on coastal hillsides and from the shape of the sea bottom.

    Resolution: Varies.

    Dating: All sorts, including radiocarbon dating of wood in moraines or coral stranded on hillsides after sea level dropped.

    Time range: From glaciation 2.9 billion years ago to the Little Ice Age of the 19th century.

    Areas studied: Worldwide.

    Ice cores

    Information: Volume of continental ice from oxygen isotopic composition of the oceans; levels of CO2 and methane in the atmosphere from trapped gas bubbles; wind strength and source from dust, sea salt, pollen; surface temperature from isotopic ratios in ice, borehole temperatures, gas fractionation, melt layers; snow accumulation rates from thickness of annual layers; sunspot cycles from isotopes formed by solar cosmic rays.

    Resolution: Subseasonal to decadal; highly accurate to 40,000 years.

    Dating: Counting of annual layers, such as these (left) from Greenland; correlation to other cores; ice-flow models.

    Comments: These cores provide a direct sample of the atmosphere. Cores also contain information about places ranging from the local environment to distant deserts, which helps scientists figure out which aspects of climate change at the same time.

    Time range: 440,000 years ago to present.


    Corals

    Information: Sea surface temperature and salinity from oxygen isotopes and elemental ratios. River discharge and precipitation cycles on land from isotopes. Records reveal El Niño frequency, impacts, and relation to background climate; sea level from dating of coral. Oxygen isotopes in coral from Kenya (left) show a connection to El Niño in the Pacific.

    Resolution: Typically months; weekly in exceptional cases.

    Dating: Annual banding from coral density, stable-isotope ratios, or elemental ratios.

    Comments: One of the few tropical records that show seasonal changes in ocean systems. Accurate multivariate data sets. Disadvantage: hard to find records that are 400 or more years long.

    Time range: Continuous records to about 400 years. Large fossil corals give short time intervals about 130,000 years ago.


    Marine sediments

    Information: Isotopes in microfossils reveal temperature, salinity, ice volume, atmospheric CO2, and ocean circulation. Sands can reveal ocean currents, dust storms, and iceberg calving.

    Resolution: Typically thousands of years to centuries; in rare settings, seasonal.

    Dating: Radiometric for fossils as old as about 40,000 years. Correlated by stable isotopes to regular changes in Earth's orbit going back millions of years.

    Comments: Marine sediments cover much of Earth and provide continuous records that are often protected from erosion, although some layers have been mixed by burrowing animals or are subject to dissolution or diagenesis. Cores of sediment contain proxies for both local and global changes, synchronizing different events.

    Time range: As far back as 180 million years in some places. Higher resolution records are increasingly available from shorter, more recent intervals, such as the last 20,000 years.


    The Tropics Return to the Climate System

    1. Richard A. Kerr

    Long neglected by paleoclimatologists, the tropical oceans, and especially the tropical Pacific, are being given their due as participants in and possibly dominant drivers of long-term climate change, from the global warming of the past century to the ice ages of 30,000 years ago

    The tropical ocean spent years relegated to the backwaters of paleoclimate research, but it's experiencing a resurgence. Back in the 1970s, a major study of global climate found that the tropical oceans had apparently cooled little in the last ice age while the rest of the world slipped into a deep freeze. Thinking that such climatological inertia implied a minor role in climate change, researchers focused instead on the high latitudes, especially the North Atlantic, as the place where all sorts of long-term climate changes get their start.

    Tropical updrafts can change climate far to the north and south.


    But a variety of new paleoclimate indicators is now putting the tropical oceans square in the midst of the climate action. The tropics not only cooled along with the rest of the planet in the last ice age, these new indicators show, but were right in step whenever the world briefly warmed or cooled during glacial times and when the ice age ended. They even seem to have been deeply involved in the warming of the past 50 years. No one has yet fingered the tropics as a prime mover in climate change, but, says glaciologist Richard Alley of Pennsylvania State University, University Park, “when we solve this it's got to involve the tropics.”

    Until the new data, paleoclimate researchers' cool view of the role of the tropics was based on the Climate Long-Range Investigation, Mapping, and Prediction (CLIMAP) study, published in 1976 in Science. In that study, paleoceanographers extracted ocean temperatures from the mix of single-celled organisms called foraminifera that lived in surface waters at the peak of the ice age 21,000 years ago. From the proportions of foram skeletons of cold-tolerant and warmth-loving species left in bottom sediments, CLIMAP workers deduced that tropical waters had cooled only about 1°C while the far North Atlantic had chilled 10°C or more.

    That reinforced many paleoceanographers' belief that the North Atlantic was much better placed than the tropics to play a pivotal role in climate change. For years, paleoclimate data from around the North Atlantic have pointed to the region as the site of the most dramatic climate shifts, notes Alley. It's near the high-latitude continents where the great ice sheets are born, grow, and decay, and its far northern reaches include the unique and climatically sensitive turning point of the ocean's “conveyor belt.” The conveyor consists of surface currents that carry heat from the south to the northern North Atlantic before sinking into the deep sea and turning back southward. With so much going on there and an obvious potential for switching the heat-laden conveyor on and off, the North Atlantic seemed ideal as a prime mover of climate change. As paleoclimatologist Gerald Haug of the Swiss Federal Institute of Technology, Zurich, puts it, the paradigm among paleoceanographers was: “Everything comes from the North Atlantic.”

    The tropical oceans, it turned out, weren't going to take the North Atlantic hegemony lying down. In the past 5 years, paleoclimate researchers have begun applying new ways of determining past temperatures, from the noble gases dissolved in groundwaters to the altitude of past mountain snowlines. They have also developed a better understanding of forams' temperature preferences. When paleoceanographer Alan Mix of Oregon State University in Corvallis and his colleagues reanalyzed the CLIMAP data in light of this new information, they found a 3°C to 4°C glacial cooling of the equatorial Pacific, not 1°. And when David Lea of the University of California, Santa Barbara, and his colleagues analyzed the chemical composition of skeletons of forams, which substitute magnesium for calcium when the temperature rises, they also found a tropical Pacific cooling of about 3°C. “Clearly, the case is strengthening for cooler glacial tropics,” says Alley. “Everyone is drifting in that direction.”
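
    The magnesium thermometer used by Lea's group rests on an exponential calibration; the constants below are typical published values for planktonic forams, quoted only to illustrate the arithmetic, not necessarily the ones used in that study.

```latex
% Mg/Ca paleothermometry (typical calibration constants; illustrative only).
\[
  \mathrm{Mg/Ca} = B\,e^{A\,T}
  \quad\Longrightarrow\quad
  T = \frac{1}{A}\,\ln\!\left(\frac{\mathrm{Mg/Ca}}{B}\right),
\]
% with A ~ 0.09 per degree C and B ~ 0.3 mmol/mol for common planktonic
% species. A drop in shell Mg/Ca from 4.0 to 3.0 mmol/mol then corresponds
% to a cooling of ln(4/3)/0.09, or roughly 3 degrees C.
```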

    For the meteorologically inclined, that's comforting, if expected, news. “Cooling the rest of the world and not the tropics doesn't make sense,” says El Niño modeler Mark Cane of Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York. Meteorologists have long regarded the tropics as the “firebox” of the planetary weather system: the place where the bulk of the solar heat enters to drive day-to-day weather changes in both hemispheres as the heat makes its way toward the poles. It seems only natural that the tropics would be at the heart of changing climate from decade to decade or millennium to millennium as well, says Cane. The North Atlantic, on the other hand, would have a hard time triggering climate change in the other hemisphere, as happened during the last ice age, he says. It faces a barrier in the atmosphere that lets the tropics reach into high latitudes to change atmospheric circulation but blocks a northern influence to the south. In the ocean, a warming in the North Atlantic would simply come at the expense of the Southern Hemisphere.

    As the North Atlantic's shortcomings as a driver of global climate change have become obvious, so has the tropical Pacific's potential. “We know when you change things in the tropics, you get global changes,” says Cane. “El Niño is one example.” And the tropical Pacific could be just as sensitive a climate switch as the North Atlantic, climate modeler Raymond Pierrehumbert of the University of Chicago has pointed out. Whereas the heat-laden conveyor belt is “tippy” where it sinks into the deep sea—its sinking can be turned on or off by a small change in the temperature or salt content of far North Atlantic waters—the tropical Pacific's tippiness lies in its overlying atmospheric convection. Slight changes of surface ocean temperature, as happen when the tropical Pacific swings between the warmth of El Niño and relative cold of La Niña, can determine whether moisture-laden air rises in towering rain clouds. They, in turn, can redirect jet streams in either hemisphere and thus change climate from the Antarctic to far northern Alaska.

    That El Niño-like swings in the tropical Pacific could change climate around the world for more than the usual year or two was demonstrated earlier this month in work by meteorologist Martin Hoerling of the National Oceanic and Atmospheric Administration in Boulder, Colorado, and colleagues. In the 6 April issue of Science (p. 90), they reported that in their climate model, the warming of the tropical oceans seen in recent decades, especially the tropical Indian and Pacific oceans, can drive the equally gradual shift seen in the climate regime known as the North Atlantic Oscillation (Science, 7 February 1997, p. 754). That drift has, in turn, driven much of the warming over the continents that is taken to be the first signs of greenhouse warming. It seems most likely, say Hoerling and colleagues, that increasing greenhouse gases are driving much of global warming by way of the tropics.

    Farther back in time, in paleoclimate records, modelers have not made a cause-and-effect connection between the tropics and global climate change, but in recent months, groups have reported many examples of tropical ocean climate shifting in time with distant climate. According to analyses of the oxygen isotopes in forams, the pool of warm water in the far western equatorial Pacific that fuels El Niño warmed and cooled in step with millennial-scale warm snaps during glacial periods, as recorded in ice cores in Greenland.

    A match.

    Rains washing iron off South America (top) surged in time with warmings recorded in a Greenland ice core (bottom).


    Oxygen isotopes in a cave stalagmite from eastern China reveal that the east Asian monsoon—which draws on the moisture of the western Pacific's warm pool—waxed and waned in time with millennial-scale Greenland climate change during the most recent ice age. And drilling into a former lake bed in the tropical Andes of South America has shown that precipitation there varied on millennial time scales in step with cold glacial episodes in the northern North Atlantic. During the biggest climate shift of all, deglaciation 15,000 years ago, the tropics and other regions actually began warming before the ice in the north started to melt; at Lea's east and west sites in the tropical Pacific, warming started 3000 years ahead. “We don't have evidence for the tropics as the driver” of long-term climate change, says Lea, but “at the least, the tropics had to be participating in large-scale climate change.”

    Can-do tropics.

    Changing only a model's tropical ocean temperatures can change the high-latitude atmosphere (bottom, change in temperature) much as the real atmosphere changed (top).


    To pin down any one component of the climate system—such as the tropics—as the dominant driver, researchers would have to catch it leading all other parts into a climate transition and show how it could trigger major change. That's a tall order in paleoclimatology, but researchers are getting a start. Paleoceanographer Larry C. Peterson of the University of Miami and his colleagues recently reported that the amount of rain draining down Venezuelan rivers into the offshore Cariaco Basin increased during warm episodes recorded during the ice age in Greenland ice cores, as judged by the varying amount of iron and titanium washed into the basin. Peterson and his colleagues suggest that the shift to raininess would have pumped more moisture east to west out of the Atlantic, across the low-lying Isthmus of Panama, and into the Pacific. That would have left Atlantic waters saltier, more likely to sink at the end of the conveyor, and therefore able to carry more heat into the north to reinforce the warming.

    The Cariaco record may offer a mechanism for a tropical driver of high-latitude climate change, but getting the timing right could be even tougher. El Niño changes climate halfway around the world from one year to the next, while the resolution of most paleoclimate records is still decades at best. “It seems things are coupled at time scales so short, I'm not convinced we'll ever be able to say ‘A’ caused ‘B,’” says paleoceanographer Konrad Hughen of Woods Hole Oceanographic Institution in Massachusetts. “We have a hard time doing that with modern El Niños. The tropics are clearly involved. Are they the sole driver or is something driving them?”
