News this Week

Science  20 Nov 1998:
Vol. 282, Issue 5393, pp. 1390
  1. BIOTECHNOLOGY

    Claim of Human-Cow Embryo Greeted With Skepticism

    1. Eliot Marshall*
    1. With reporting by Elizabeth Pennisi.

    A small, privately held company in Worcester, Massachusetts—Advanced Cell Technology Inc.—startled the scientific world last week by announcing that it had fused human DNA with a cow's egg to create a new type of human cell. Company leaders say that a colony of these fused cells—created in 1996, kept alive for 2 weeks, and discarded—looked like a cluster of human embryo cells. On this basis, the company declared that it had “successfully developed a method for producing primitive human embryonic stem cells.”

    The claim, announced in a front-page news story in The New York Times on 12 November, came just 6 days after two groups of researchers reported in Science and the Proceedings of the National Academy of Sciences that they had used traditional techniques to culture human embryonic stem cells—“undifferentiated” cells that have the potential to grow into any cell type (Science, 6 November, pp. 1014 and 1145). It added to the concerns already raised among ethicists and government officials. On 14 November, President Clinton sent a letter to Harold Shapiro, chair of the National Bioethics Advisory Commission (NBAC), saying he is “deeply troubled” by news of the “mingling of human and nonhuman species.” The president asked NBAC to give him “as soon as possible … a thorough review” of the medical and ethical considerations of attempts to develop human stem cells. And a Senate committee may review the company's claim at a hearing on stem cell technology planned for 1 December.

    Scientists, however, were startled for another reason: They were amazed that Advanced Cell Technology (ACT) broadcast its claim so widely with so little evidence to support it. Some were puzzled that the company had tried to fuse human DNA and cow eggs without first publishing data on the fusion of DNA and eggs of experimental animals. Many doubted that ACT's scientists had created viable human embryonic stem cells. And most were left wondering why the company chose to go public now with this old experiment.

    The company had inserted DNA from adult human cells into cow's eggs using a nuclear transfer technique similar to the one used to clone Dolly, the first mammal cloned from an adult cell. ACT's top researcher and co-founder—developmental biologist James Robl of the University of Massachusetts, Amherst—says an early version of the experiment was performed in his UMass lab “around 1990.” A student carrying out nuclear DNA transfer in rabbits had run out of donor cells, Robl recalls, and, almost as a lark, took cheek cells from a technician and transferred their DNA into rabbit oocytes. “I didn't even know about it,” Robl says. To everyone's surprise, the cells began to divide and look like embryos. “I got very nervous” on learning about it, Robl says, and shut down the experiment.

    Robl and his former postdoc Jose Cibelli, now a staffer at ACT, returned to this line of experimentation in 1995–96, when they were working with cow embryos on other projects. They remembered that the human DNA-animal oocyte combination had worked before, and “we thought, ‘Maybe we can get a cell line’” this way. Cibelli transferred nuclear DNA from 34 of his own cheek cells and 18 lymphocyte cells into cow oocytes from which the nuclei had been removed. Six colonies grew through four divisions, according to Cibelli, but only one cheek cell colony grew beyond that stage—reaching 16 to 400 cells. Robl says they didn't follow up on the work because “we had about 15 other things we were doing,” and developing human stem cells was not at the top of the list. But the university did file for a patent on the technique, granting an exclusive license to ACT.

    Robl concedes that the experiment did not yield publishable data. He says he classified the cells as human stem cells based on his experience of “look[ing] at hundreds and hundreds” of cell colonies. But Robl offered no other data to support this conclusion.

    Other researchers agree that the cells may have had human qualities, because they continued to divide after the cow's nuclear DNA had been replaced with human DNA. But Robl and Cibelli didn't do any of the tests normally done to show that these cells were human or that they were stem cells, such as looking for expression of human proteins or growth of specialized tissues. James Thomson of the University of Wisconsin, Madison, lead author of the Science paper, says that ACT's cells “meet none of the criteria” for embryonic stem cells. And Gary Anderson of the University of California, Davis, who has isolated a line of embryonic pig cells, comments: “Just because someone says they're embryonic stem cells doesn't mean they are.”

    A few researchers—including Robert Wall, a geneticist at the U.S. Department of Agriculture in Beltsville, Maryland—were willing to suspend their disbelief, however, if only because they respect Robl. He is “a top-notch, very solid scientist,” says Wall, who adds that anyone who has examined a large number of embryonic cells can distinguish real ones from impostors.

    But others are less charitable. “This may be another Dr. Seed episode,” says Brigid Hogan, an embryologist at Vanderbilt University in Nashville, Tennessee, referring to Chicago physicist Richard Seed, who caused a furor early this year when he announced that he planned to clone humans. Although Seed didn't have the means to carry out his project, Congress quickly drafted a criminal ban on many types of cloning research. Congress set that debate aside last spring but indicated it might take it up again later (Science, 16 January, p. 315 and 20 February, p. 1123). Hogan, a member of a 1994 National Institutes of Health (NIH) panel that proposed guidelines for human embryo research, agrees that “it's theoretically possible” to do what ACT claims to have done. But the company's announcement reminds her of the Seed case because “it smells to me of sensationalism” and seems “likely to inflame an uninformed debate.”

    Why did ACT publicize this experiment now? Some observers think the company wanted to ride the PR bandwagon created by the 6 November announcements by the labs that had isolated human embryonic stem cells using more traditional culture techniques. One group, led by developmental geneticist John Gearhart at The Johns Hopkins University, extracted primordial germ line cells from fetal tissue and kept them growing through 20 passages (transfers from one plate to another) for more than 9 months. The other group, led by Thomson at the University of Wisconsin, established a culture of stem cells derived from early human embryos. Thomson, whose cell line has survived 32 passages over 8 months, published molecular data suggesting that the cells may continue dividing “indefinitely.”

    Michael West, president and chief executive officer of ACT since October, says it is “pure coincidence” that ACT's news came out within a week of these announcements. West—noting that ACT won't benefit immediately, for it doesn't sell public stock—says that after becoming ACT's CEO last month, “I learned about the work that had been done in 1996 … and I wanted to develop this technology.” But he says he “didn't feel comfortable” moving ahead with nuclear DNA transfer experiments without getting a reading on how future U.S. laws and regulations might affect the field. “So I decided, ‘Let's talk about the preliminary results,’” says West. “Let's get NBAC to help clear the air.”

    West notes that some information on ACT's mixing of human and cow cells was already public. In February, the World Intellectual Property Organization in Geneva had published Robl's application for a patent on “Embryonic or Stem-like Cell Lines Produced by Cross Species Nuclear Transplantation” (WO 98/07841). It describes the Robl-Cibelli experiment of 1996 and stakes broad claims to stem cell technology based on transferring human or animal DNA into an animal oocyte. After being approached by the staff of CBS's news show 48 Hours, West says, he arranged to discuss the research in exclusive but simultaneous releases to The New York Times and CBS. The CBS report aired on 12 November.

    Robl confirms it was West, and not the scientific staff at ACT, who initiated the announcements. “I wouldn't have had the guts to do it,” Robl says, although he agrees it is important to debate ethical concerns that might impede the technology.

    These ethical concerns may get an airing next month. Senator Arlen Specter (R-PA), chair of the appropriations subcommittee that approves the budget for NIH, is planning a hearing on 1 December. There, NIH director Harold Varmus and developers of new human cell technologies are expected to testify about federal restrictions on the use of embryonic and fetal tissue and their impact on biomedical research. That discussion may now be expanded to include questions about ACT's single experiment.

  2. RUSSIAN SPACE SCIENCE

    Station Launch Hides Lingering Woes

    1. Richard Stone

    Moscow—Valery Bogomolov welcomes the scheduled launch today of the first piece of the international space station as a sign of the world's commitment to space exploration. But the launch is also a bitter reminder to Bogomolov, deputy director of Russia's premier space biology facility, the Institute for Biomedical Problems (IBMP), of his country's recent decision to sell NASA thousands of hours of station time earmarked for research by Russian cosmonauts in exchange for the $60 million needed to complete a key station component (Science, 9 October, p. 206). “It was very sad for us, and for Russian science,” says Bogomolov, whose institute is scrambling to plan experiments on the ground that were meant to be done in space. “We had no warning.”

    As the rest of the space community readies its payloads for the $50 billion international space station, Bogomolov and his Russian colleagues must resign themselves to a limited role until at least 2003, when they will vie for a share of research time aboard the completed station. And the lost opportunity is only one of several continuing crises for Russian space science. The launch of the Russian-backed Spectrum-X-Gamma spacecraft, a $500 million international effort to study x-rays, is running almost a decade behind schedule. Even a last-ditch effort to postpone the dismantlement of the Mir space station, allowing some biology to continue, may not survive in Russia's harsh fiscal environment. Russia is propping up “a Potemkin space program,” asserts Houston-based space consultant James Oberg. “It's a hollow shell.”

    In an effort to keep some life in that shell, the Russian Space Agency (RKA) last week backed off plans to bring the 12-year-old Mir space station down to Earth next summer. Russian government officials and legislators are now hammering out a proposal for the 1999 budget, due out next month, that would seek to fund both Mir and international space station operations. “What if there is a problem with the international space station?” asks Bogomolov. “We're very interested in keeping [Mir] as an option for research.” Adds Sergei Shaevich, space station manager at the Khrunichev Research Center in Moscow, “There's $100 million worth of new equipment on Mir now.”

    Western experts see some merit in that argument, pointing to solid NASA-funded Russian research aboard Mir. The 3-year, $20 million program, which wrapped up last year, served as a test-bed for research on the international station, funding peer-reviewed work involving 60 institutes. The program also “sustained scientists through a difficult period,” says Dick Kline, director of the ANSER Center for International Aerospace Cooperation, a nonprofit think tank in Arlington, Virginia.

    But Kline and others don't see how Russia can afford the estimated $100 million to $200 million needed next year to operate and supply Mir along with the roughly $130 million that Russia is supposed to contribute to the international space station. “It's a good idea, if not for the fiscal realities,” says Kline. NASA officials are hoping to persuade the cash-strapped RKA not to divert funds to Mir. “We want them to devote their resources only to the international space station,” says NASA spokesperson Dwayne Brown.

    Some of the projects begun aboard Mir, including monitoring the physiological stresses on cosmonauts performing heavy labor in space, were slated to continue aboard the service module, a Russian-built station component to be launched next summer. But now that Russia has given up its research time, “we won't be able to perform these experiments,” says IBMP chief scientist Lyudmila Buravkova. In the meantime, she says, IBMP staff members are designing ground-based surrogates. But even these may have trouble finding funds in next year's budget.

    A financial miracle also may be needed to save Spectrum-X-Gamma. Slow delivery of key components has delayed the astrophysics observatory, originally planned for launch in 1992. Now the question is whether Russia can afford the Proton rocket needed to put it into space. If Spectrum-X's debut were to slip much beyond 2001, asserts Roald Sagdeev, a Russian space expert at the University of Maryland, College Park, it would be so eclipsed by three other observatories scheduled to be launched over the next 3 years—the United States' AXAF, Europe's XMM, and Japan's Astro-E—that “it would make no sense at all” to put it up.

    Project officials disagree. “We believe Spectrum-X still has a role to play,” says Alan Wells, director of the Space Research Center at the University of Leicester in England, pointing to its unique polarimeter for studying binary x-ray pulsars and supernovas, and to EUVITA, two telescopes that will explore the largely uncharted far-ultraviolet region. “Our concern is just to get it up there.”

    There's a glimmer of hope for space biologists, says Sagdeev: NASA could invite Russians to collaborate on U.S.-funded station projects. But one agency official complains that RKA's refusal to join a multilateral space life sciences working group has impeded joint studies. RKA officials declined to comment.

    NASA, meanwhile, hasn't yet divvied up the spoils from its deal, which doubles the 5000 hours available for research during the 5 years of station construction. “It's awful to take advantage of someone else's disadvantage, but this is a unique opportunity for us to improve our science,” says NASA's Neal Pellis, a station biology manager keen to study how microgravity influences gene expression.

    The careers of many Russian scientists will hang in the balance as Russia decides the fate of Mir and RKA and NASA debate the terms of joint research. A lengthy delay will also threaten the Russian program's decades of expertise. As Kline puts it, “You can't suddenly say, ‘Let's have world-class research again.’”

  3. JAPAN BUDGET

    Science Gets Share of Stimulus Package

    1. Dennis Normile

    Tokyo—A new housing complex for exchange students, renovated research labs and equipment, and a faster track for some big new science projects are expected to be elements in Japan's latest, and largest, attempt to spend itself out of a prolonged recession.

    The $195 billion package, the outlines of which were approved by the cabinet on 16 November, includes $145 billion in stimulus spending and another $50 billion in tax breaks. It eclipses the $138 billion stimulus package enacted just last April (Science, 1 May, p. 669). In reality, however, both packages are likely to fall short of those totals because they depend in part on loans to consumers and small businesses and contributions from financially strapped local governments. And many of the details of the latest package, including amounts for science-related projects, are yet to be worked out. The Science and Technology Agency (STA) has requested $2.3 billion.

    One new project high on the list is a $2.9 billion International University Village, a joint effort of STA and the ministries of Education (Monbusho) and International Trade and Industry to encourage more international exchange students and scholars to spend time in Japan. The collection of midrise buildings on a Tokyo site will include housing, a library, and other amenities for international exchange students and visiting researchers. It will also feature laboratories for venture businesses and for such research schemes as STA's ERATO program, under which research teams are assembled for 5 years. The three agencies are hoping the stimulus package will cover as much as half the total construction cost of the project, which is to be completed in early 2001.

    Both Monbusho and STA have also requested significant amounts from the stimulus package to upgrade lab equipment and refurbish laboratories, as well as to accelerate big science projects already under way. The Institute of Physical and Chemical Research (RIKEN), an STA affiliate just outside Tokyo, could get as much as $52.5 million for its Radioactive Isotope Beam Factory, a $200 million facility with a superconducting synchrotron that would produce the world's most intense beams of unstable nuclei.

    Yasushige Yano, the RIKEN physicist heading the project, says that the extra funding would restore the project's completion date of 2003 after cuts in this year's regular budget pushed that timetable back by 2 years. “It means we can meet our original completion plans,” Yano says. Ocean research is another big winner. STA's wish list includes a proposed $113 million for the deployment of instrumented buoys and the addition of various instruments to Japan's fleet of research vessels to facilitate studies of global climate change and to monitor sea-floor seismic activity.

    But the largess doesn't stretch to big projects still in the planning stages. For example, the Japan Hadron Project at the High-Energy Accelerator Research Organization (KEK), the former National Laboratory for High-Energy Physics, is not in line for any of the stimulus spending because it requires further development and testing before it can move into the construction phase. “It's extremely disappointing,” says KEK Director-General Hirotaka Sugawara. But KEK officials aren't standing still: They are looking for help in the regular 1999 budget, which will be finalized in the next 6 weeks.

  4. AIDS VACCINES

    India Prepares to Join U.S., World Teams

    1. Pallava Bagla*
    1. Pallava Bagla is a correspondent in New Delhi.

    New Delhi—India is drawing up plans to participate in global efforts to develop and test vaccines against AIDS. The decision, made at the end of a meeting here earlier this month of AIDS scientists and government officials from India and the United States, represents a major step for a country traditionally very sensitive about its status in international medical research projects. But Indian officials say it will likely take a few years to decide how to marshal the country's R&D resources and link them with ongoing activities around the world.

    “A good collaboration will really cut [development] time,” says Seth Berkley, president of the New York-based International AIDS Vaccine Initiative. India, he says, is one of only a handful of countries in the developing world that has both the scientific base and the technological capability to produce vaccines commercially. In addition, Berkley says, India is facing “a real emergency” based on a rising number of reported cases of HIV and AIDS.

    A low-cost vaccine is seen as the only realistic way to combat AIDS in countries that cannot afford the expensive multidrug treatments now available in the industrial world. “Vaccines are absolutely essential to interrupt this epidemic in developing countries,” says Anthony Fauci, the head of the U.S. delegation and director of the National Institute of Allergy and Infectious Diseases. “India should definitely take a leadership role in this area,” he adds, estimating that it might be 3 to 5 years until a vaccine suitable for India is ready to be tested.

    Toward that goal, Fauci and other National Institutes of Health (NIH) officials invited Indian scientists to participate in two upcoming grants competitions for vaccine clinical trials, as well as to take advantage of existing U.S.-Indian agreements for collaborative research. Indian officials pledged their “deep commitment” to such joint efforts, adding that they hope NIH will provide much of the funding once they draw up a detailed plan. “We can take advantage by learning from the failures of others,” says J. V. R. Prasad Rao, project director for the National AIDS Control Organization (NACO) of India.

    The most advanced trials of a candidate vaccine, performed by VaxGen of San Francisco, began at 15 U.S. sites this summer. Two other candidate vaccines, also produced in the developed world, are being tested for safety in Thailand and Uganda. Indian officials say their participation in future vaccine development is predicated upon getting in on the ground floor. “Unless India is made a full and equal partner in the development of a vaccine, and unless the candidate vaccine has been developed collaboratively, India will never allow the testing of a vaccine,” says Manju Sharma, secretary of India's department of biotechnology. Officials also want to ensure that the vaccine protects against strains of the virus common in India rather than in Europe or North America.

    U.S. and Indian scientists are already collaborating on a $750,000 project involving India's National AIDS Research Institute in Pune and Johns Hopkins University in Baltimore. Researchers are collecting baseline data that could be used as part of a larger vaccine trial at Pune and other sites in India. “We are willing and enthusiastic about accepting Indian collaborations in vaccine development [in the hope that] it might lead to a quicker solution,” says Fauci. The absence of such collaboration, he adds, “will surely slow down” the global effort to control AIDS.

  5. AIDS THERAPY

    Can IL-2 Smoke Out HIV Reservoirs?

    1. Nigel Williams

    New Delhi—Potent cocktails of anti-HIV drugs have been enormously successful in keeping AIDS at bay in HIV-infected people. But although these combination therapies can knock the virus back to undetectable levels in patients' blood, HIV continues to lurk in “reservoirs”—cells that harbor the virus where antivirals cannot get at it. Now, new studies by a team at the National Institute of Allergy and Infectious Diseases (NIAID) in Bethesda, Maryland, indicate that a natural immune system regulatory molecule called interleukin-2 (IL-2), if given to patients along with combination therapy, can flush HIV from at least one reservoir out into the open. The finding raises hope that it may one day be possible to rid people of HIV entirely. “It's a courageous approach and the results are very intriguing,” says immunologist Robert Siliciano at the Johns Hopkins Medical Center in Baltimore.

    One known HIV reservoir is in T cells—immune cells that are HIV's primary target. When infected T cells are active, any HIV they harbor is also active and begins to replicate, making it open to attack by combination therapy. But T cells also have a quiescent state, during which their latent cargo of HIV is dormant and invisible to antiviral drugs for years at a time. Because IL-2 has a potent ability to activate a number of immune cells, including T cells, NIAID director Anthony Fauci and his colleagues decided to give patients IL-2 to see if it would wake up their resting T cells and the HIV they contain and make it vulnerable to attack.

    Fauci reported at the International Congress of Immunology here earlier this month that the NIAID team studied a group of 26 HIV-infected patients: 12 received a combination of at least three antiretroviral drugs for 1 to 3 years and 14 received similar combination therapy plus IL-2, given repeatedly but with a minimum of 8 weeks between treatments. After treatment, all 26 had undetectable levels of HIV in their blood. Also, Fauci's team could not detect any HIV capable of replicating in resting T cells cultured from the peripheral blood of six of the 14 subjects who had received IL-2. Even when they cultured a much larger sample of resting T cells—up to 330 million cells—from each of those six, they still could find no live virus in three of them. In contrast, the team found live HIV in the T cells from all of the 12 patients receiving combination therapy alone.

    Fauci's team went on to perform a lymph node biopsy on one of the three patients who showed no sign of virus in their T cells. Again, they could find no HIV capable of replication in the lymph node tissue, Fauci says. Although the new results raise hopes that eradication of HIV may be a possibility, “we cannot yet conclude we've got eradication of the virus,” Fauci says.

    Joep Lange, a clinical researcher at the University of Amsterdam who is also carrying out experiments to purge HIV-infected patients of virus using a cocktail of five anti-HIV drugs plus IL-2 and an antibody against T cells, says Fauci's results are “interesting but not yet definitive.” HIV may still be lurking in other known reservoirs, such as the brain, testes, gut, and within other immune cells such as macrophages. “The final proof of the feasibility of effectively controlling HIV in latently infected cells will be the discontinuation of combination drug therapy and long-term follow-up,” Fauci says, adding that such trials are planned to begin early next year.

  6. CELL BIOLOGY

    A Possible New Partner for Telomerase

    1. Elizabeth Pennisi

    Cell biologists have discovered what may be a key switch in the control of cellular aging. In most tissues, the telomeres, repetitive DNA sequences that cap the ends of chromosomes, shorten each time the cell divides, until the chromosomes are so frayed that the cell becomes senescent. But in a few normal cells, including those that make eggs and sperm, and in cancer cells, an enzyme called telomerase rebuilds the telomeres after each division, keeping the cell immortal. Now researchers have found a second enzyme that may enable telomerase to do its work.

    On page 1484, Susan Smith, Titia de Lange, and their colleagues at The Rockefeller University in New York City describe the discovery in human cells of a protein they call tankyrase. The Rockefeller team's evidence suggests that tankyrase controls whether telomerase can do its job by removing another protein that otherwise blocks telomerase's access to the chromosome ends.

    If the new enzyme does play this role, the way might be open to developing compounds that would exploit tankyrase to control cell life-span. Compounds that activate it could turn on telomerase activity in cells used for gene- or cell-based therapies, extending their lives. Conversely, new anticancer agents might work by inhibiting tankyrase, thereby blocking telomerase activity and making cancer cells mortal again. “Who knows, 5 years from now, tankyrase inhibitors may be as important as telomerase inhibitors,” notes Tomas Lindahl, a biochemist with the Imperial Cancer Research Fund in London. “[This discovery] could open up a whole new field.”

    The discovery of tankyrase by de Lange and her colleagues is an outgrowth of work in which these researchers have been looking for proteins that bind specifically to the telomeres and might therefore be important to telomere maintenance and function. They came upon the first telomere-specific DNA binding protein (TRF1) in the early 1990s. Since then, de Lange and her colleagues have shown that TRF1 somehow plays a role in regulating the overall length of the telomere, presumably by interfering with telomerase activity. To find out more about how TRF1 might contribute to the regulation of telomere length, Smith decided to look for other human proteins that link with TRF1. That screen has now turned up tankyrase.

    The protein's structure provides some clues to how it may work. It has 24 so-called ankyrin repeats, which in other proteins are involved in protein-to-protein interactions. And another section of tankyrase looks like the catalytically active region of an unusual enzyme called PARP, for poly(adenosine diphosphate-ribose) polymerase. PARP plays a role in DNA repair, apparently by modifying itself and other proteins in the molecular complex that generates new DNA.

    PARP acts by removing ADP-ribose from a small molecule called NAD+ and then adding it to the target proteins. To see whether tankyrase displays a similar catalytic behavior, Smith and de Lange did test tube studies in which they mixed the enzyme with NAD+ and TRF1. They found that tankyrase adds ADP-ribose both to itself and to TRF1. Typically, TRF1 exists bound to DNA, but when tankyrase is present, “TRF1 comes off the DNA,” says Smith.

    De Lange cautions that they still need to demonstrate that what they see in their test tube studies occurs in living cells. But she and her colleagues suspect that TRF1 normally sits on the telomere, thereby inhibiting telomerase activity. During, or perhaps after, DNA replication, tankyrase modifies TRF1 such that it leaves the telomere, enabling telomerase to replace DNA lost during replication in blood cells, germ cells, or tumors. “It looks like [tankyrase] could be an important component in a regulatory or signaling pathway,” says Tom Cech, a biochemist at the University of Colorado, Boulder.

    Tankyrase is the second protein linked to DNA repair that has now been found to be associated with the telomeres; the first was a protein called Ku that binds to broken DNA. That suggests that DNA repair and telomere synthesis may have some common components. But until researchers pin down tankyrase's exact function, it's “too early to tell whether [the enzyme] is a good target for drug discovery,” says telomere expert Calvin Harley of Geron Corp. in Menlo Park, California. Nevertheless, he says his company is thinking about procuring rights to pursue this possibility—Rockefeller University has filed a patent application on tankyrase—in hope of finding ways to extend the life-span of cells.

  7. NEUROSCIENCE

    fMRI Provides New View of Monkey Brains

    1. Marcia Barinaga

    Los Angeles—Neuroscientists who want to map brain activity in monkeys have many options that aren't available with human subjects. But a favorite method for studying human brains, functional magnetic resonance imaging (fMRI), which delivers pictures of brain activity by measuring increases in local blood flow, has not been well suited to monkey studies until now. Aided by a specially designed magnet, Nikos Logothetis of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, has made high-resolution fMR images of the brains of both anesthetized and awake monkeys.

    Those who saw his presentation here last week at the 28th annual meeting of the Society for Neuroscience say the work is a marked advance over the only other published fMR images of monkey brains, reported last summer by two teams (Science, 10 July, p. 149). Some researchers worried that the several-millimeter resolution of those images was not good enough to see some of the structures in monkey brains, which are much smaller than those of humans. And because those experiments, like all other fMRI studies done on monkeys until now, used a horizontal magnet designed for supine human patients, the animals had to crouch in an awkward position that promised to make it hard to perform the choice-based tasks on which many brain activation experiments are based. Also, the monkeys were awake, because previous work had failed to obtain fMR images from the anesthetized monkeys used for many other types of neurophysiological studies.

    Logothetis, however, produced images from anesthetized animals with a resolution of less than a millimeter, close to the theoretical limits of the technique, which detects magnetic signals from oxygenated blood, says fMRI expert Robert Turner of University College London. Indeed, he calls the work “a technical tour de force.” In the specialized setup, Logothetis also got crisp images from awake monkeys sitting in the position to which they were accustomed and performing choice tasks. The results will convince many researchers of the usefulness of monkey fMRI, says neuroscientist Leslie Ungerleider of the National Institute of Mental Health. “I thought it would be 10 years before imaging in the monkey reached this level of sophistication,” she adds.

    Such images have been eagerly awaited because neuroscientists want to compare fMR images of monkey brain activity to data gathered by sticking electrodes directly into monkeys' brains and use that information to infer more about the neuronal activity that underlies human fMR brain images. They also would like to use fMRI to identify new areas of activity in monkeys' brains that they can then study with electrodes. And being able to study anesthetized animals is a plus, because sensory stimulation of unconscious animals is a widely used method for mapping areas of brain activity.

    To develop his system, Logothetis turned to Bruker Medical Instruments in Ettlingen, Germany, to design a vertical, monkey-sized magnet with a field three times stronger than those of the magnets commonly used on humans, a change that improves spatial resolution. He also refused to accept reports that fMRI wouldn't work on anesthetized animals. Researchers had blamed earlier failures on effects the anesthesia might be having on blood flow response to brain activity, but Logothetis felt that shouldn't be the case if the monkey were monitored as carefully as a human patient.

    Consequently, he hired a professional anesthesiologist who anesthetized the animals while keeping parameters such as pulse, blood gases, blood pressure, and blood volume “absolutely within the normal range.” He then tested his method by recording fMR images of the anesthetized monkeys viewing visual images, such as moving patterns, and got high-resolution images of activation in the monkeys' visual cortex.

    Logothetis notes that his team's results remain just a demonstration that the technique works and have not yet answered any research questions about brain function. In that respect, monkey fMRI has some catching up to do. Japanese researchers have for several years been studying brain activation in monkeys with positron emission tomography (PET), which uses radioactive tracers to detect blood flow changes. For example, using a dedicated monkey PET scanner at Hamamatsu Photonics in Hamakita, Japan, Hirotaka Onoe's team at the Tokyo Metropolitan Institute for Neuroscience last year discovered a new site of color processing in the monkey visual system. PET in monkeys has better resolution than it does in humans, says Ichiro Fujita of Osaka University, who has been doing monkey PET studies at Hamamatsu. That's because monkeys can be scanned repeatedly, while the risks of the radioactive exposure limit human subjects to a few scans. This forces researchers to average the data from several different subjects, reducing the image quality.

    But even monkey PET cannot match the temporal and spatial resolution of fMRI, says Akichika Mikami of Kyoto University, who also does PET studies on monkeys. He predicts PET will be eclipsed by fMRI, except in specialized niches such as using radioactive neurochemicals to visualize their receptors in the brain, which is not possible with fMRI. Turner agrees and expects a rapid research boom in monkey fMRI: “I've been taking bets on how many presentations there will be at next year's meeting,” he says. “I'm going to put money on there being more than fifty.”

  8. EXOBIOLOGY

    Requiem for Life on Mars? Support for Microbes Fades

    1. Richard A. Kerr

    Signs of ancient life in a martian meteorite startled the world, but after 2 years of research the evidence has dwindled, and few scientists now believe the claim

    Houston—Just over 2 years ago, NASA Administrator Dan Goldin rushed to the White House to brief the president and vice president on a discovery that was about to rock the world: signs of ancient life on Mars. “We are not talking about ‘little green men,’” said Goldin, but even so, little gray worms from a meteorite that was once a chunk of Mars made the front page day after day. The microscopic features helped jump-start planetary exploration as well as the field of exobiology and left everyone from priests to pundits wrestling with the implications of learning that life on Earth is not unique.

    Since then, ALH84001, a martian meteorite scooped from the Antarctic ice cap, has become the most intensively studied 2 kilograms of rock in history. With $2.3 million in funds from NASA and the National Science Foundation, scientists have sectioned it, imaged it, identified its minerals, measured its isotopes, and analyzed its organic matter. All this effort was aimed at testing and, if possible, extending each of the original four lines of evidence for life: mineral shapes that look like fossilized bacteria, traces of organic matter, rosettes of minerals perhaps formed through bacterial action, and grains of a magnetic mineral resembling those produced by bacteria. But at a NASA workshop here early this month,* scientists concluded that all the effort has not strengthened the claims. Indeed, key parts of the original case have been scaled back. Most researchers agree that the case for life is shakier than ever.

    “The case has weakened dramatically,” says meteoriticist Horton Newsom of the University of New Mexico, Albuquerque. “It was plausible and a good piece of work” when published in Science in August 1996 (16 August 1996, p. 924), but after intensive study, “a number of lines of evidence have gone away.” Paleontologist Andrew Knoll of Harvard University agrees that the hypothesis “has not fared well. You would have a hard time finding even a small number of people who are enthused by the idea of life being recorded in this meteorite.” If there ever was life on Mars, ALH84001 offers no persuasive evidence of it, these researchers say.

    But the originators of the life-on-Mars hypothesis are not ready to call it quits. “This hypothesis is still alive and kicking,” three of them wrote in a position paper prepared for the workshop. The three—geologist David McKay of the Johnson Space Center in Houston, who was the lead author on the Science paper, geochemist Everett Gibson of JSC, and microscopist Kathie L. Thomas-Keprta of Lockheed Martin Space Mission Systems and Services in Houston—say they “are more confident than ever that these meteorites likely contain traces of ancient life on Mars.” They point to new data highlighting the similarity between some magnetite grains in the meteorite and those produced by earthly bacteria. Still, “we haven't solved the question,” admits McKay. “These rocks are a lot more complicated than anyone could have imagined.” Firmly identifying reliable biomarkers—anything that can be taken as a sign of past life—is “our work for the next 5 to 10 years,” he says.

    McKay's team has already withdrawn much of its most dramatic evidence. In the Science paper, they had suggested that spheroidal and tubular objects found in fractures within the meteorite could be fossilized extraterrestrial microbes. At the NASA press conference on the finds, they presented more possible martian bacteria, including a striking example dubbed “The Worm” as well as swarms of smaller structures that looked like armies of wormlike creatures marching in formation.

    Just tens of nanometers across, most of these objects are much smaller than the smallest earthly bacteria. Indeed, they're well below the size cutoff embraced at a National Academy of Sciences (NAS) workshop last month, where participants concluded that even the most basic molecular machinery of life would take up a volume equal to a 200-nanometer sphere (see sidebar). And at the Houston workshop, McKay went part of the way toward accepting that limit. Anything smaller in volume than a 100-nanometer sphere “we simply don't believe is indicative of bacteria,” he said. That criterion eliminates the objects in the Science paper as well as “The Worm,” which is 250 nanometers long but too slender to make the cut. McKay also ruled out the phalanxes of worms, agreeing that, as another group argued last year, they are merely the jutting edges of mineral crystals reshaped by a coating used to prepare the sample for the scanning electron microscope (Science, 5 December 1997, p. 1706).

    But McKay wouldn't write off the bacteria-like forms completely, saying that they “may very well be parts of bacteria from Mars.” And as for intact bacteria, “we think there are large objects that are still candidates,” he said, although he declined to offer any examples. “Until we get our data straightened out, that's all I want to say.”

    Even if McKay's team does come up with larger examples, few researchers are likely to be persuaded by simple bacteria-like shapes. Inorganic deposition can take such lifelike forms that shape alone proves little, say paleontologists and mineralogists. “Unfortunately, nature has a perverse sense of humor,” explained microscopist John Bradley of MVA Inc. in Norcross, Georgia, as he showed the NAS workshop examples of lifelike micrometer-scale minerals grown inorganically and a picture of a wormlike structure found in comet dust. “It's easy to be fooled by shapes,” agrees mineralogist Allan Treiman of the Lunar and Planetary Institute in Houston. That's especially true at the nanometer scale. “When you get to this size range, there are a lot of things we don't understand,” says Treiman.

    Attendees at the Houston workshop also gave short shrift to a second line of evidence—the presence of a distinctive type of organic matter. McKay et al.'s Science paper had argued that polycyclic aromatic hydrocarbons, or PAHs, found at parts-per-million concentrations in the meteorite's fractures, could be the decay products of ancient martian life. Then in January 1998, other researchers reported finding considerable amounts of the short-lived isotope carbon-14 in ALH84001 organic matter. Because carbon-14 would have long since vanished from the original sample, that indicated heavy contamination with terrestrial organics during the thousands of years the meteorite lay on the Antarctic ice. But in a July paper in Faraday Discussions, chemist Simon Clemett of MVA, one of the Science authors, reported that other Antarctic meteorites exposed for much longer than ALH84001 did not contain the same kind of PAHs. The finding implied that although much of the organic material in the fractures was terrestrial, the traces of PAHs were probably from Mars.

    Even so, no one considers the existence of PAHs to be credible evidence for life. As meteoriticist John Kerridge of the University of California, Los Angeles, pointed out, the PAHs could just as easily be from Mars's own “primordial soup” that never achieved life. “That is why I don't think for a moment PAHs will ever figure in our list of biomarkers,” he said. No one at the workshop, including Clemett, disagreed.

    A third line of evidence—50-micrometer “rosettes” of carbonate in the meteorite's fractures—sparked sharp debate over the past 2 years, but like PAHs they now seem unlikely to yield persuasive evidence of life. The Science authors suggested that the rosettes may have formed under the chemical influence of bacteria. But rosettes are not a persuasive sign of life and might be inorganically produced, noted Henry Chafetz of the University of Houston.

    Still, the carbonates of ALH84001 have provoked other lively arguments, focused on whether they were deposited from mineral-laden fluids at high or low temperatures. High temperatures, clearly above the known 113°C limit of life on Earth, would suggest that no martian life was around when they formed (Science, 4 April 1997, p. 30). The first temperature estimate, based on the suite of minerals present, gave temperatures on the order of 700°C. Next, another team suggested a low temperature—perhaps less than 100°C—because the mineral-forming process had not disturbed the rock's ancient magnetization. Then, different isotopic analyses pointed to both low and moderately high temperatures, depending on the minerals analyzed.

    After bouncing around in this fashion for 2 years, estimates at the workshop seemed to be staying under 300°C. But the possible range still runs from 0° to 300°C or so, with little prospect of an imminent resolution. And even a low-temperature carbonate formation wouldn't prove that martian life existed; it would only mean that life cannot be ruled out.

    Magnetite mysteries remain

    The one line of evidence that still holds some promise of support for martian life is the tiny grains of the iron oxide mineral called magnetite. These grains are found throughout fractures in the meteorite, especially in the fine-grained rims of the carbonate rosettes. In the Science paper, McKay and his colleagues noted that in size—typically 50 nanometers in length—and shape, the magnetite resembles that produced by terrestrial bacteria. Bacteria use internally produced magnetite as magnetic compasses and externally induced, more irregular grains as a dump for their excess iron. But critics soon noted that not all the magnetite in ALH84001 looked lifelike.

    Thomas-Keprta and her colleagues now offer a new take on the magnetite. They concede that 75% of it could have been produced inorganically, but they have found that about 25% of the grains in the rims have an elongated shape that is hexagonal in cross section, which is just the shape produced internally by certain terrestrial strains of bacteria. “We don't know of any inorganic source that will produce these,” said Thomas-Keprta. “Their presence strongly suggests previous biogenic activity. We believe these may indeed be martian biomarkers.”

    Many other researchers have reservations. “We haven't been able to grow [this kind of crystal] inorganically,” concedes rock magnetist Bruce Moskowitz of the University of Minnesota, Twin Cities. “But we haven't looked in detail at lava flows” and other geologic settings where such magnetite might grow. “I was not persuaded,” says microscopist Peter Buseck of Arizona State University in Tempe, who has studied both biogenic and inorganic magnetite. It's the same problem as with the carbonate rosettes, he says: “At this point, we don't know how to tell whether a given [magnetite] crystal has been formed organically or inorganically.”

    Although most of the evidence appears to be fading away after 2 years and millions of dollars of research, McKay and colleagues still say that it adds up to a good case. As they concluded in Science: “None of these observations is in itself conclusive for the existence of past life. Although there are alternative explanations for each of these phenomena taken individually, when considered collectively … we conclude that they are evidence for primitive life on early Mars.”

    Even 2 years ago, many researchers were unimpressed with that holistic argument. “I never bought the reasoning that the compounding of inconclusive arguments is conclusive,” says petrologist Edward Stolper of the California Institute of Technology in Pasadena. And it was clear at the workshop that now, as pieces of the argument weaken, it is losing its grip over the rest of the community.

    Indeed, workshop participants doubted whether the martian life issue will ever be resolved by studying ALH84001. The McKay hypothesis is “very hard to disprove,” says Stolper. If living Martians pop out of the first sample return from Mars, scheduled for 2008, the issue will be settled. Otherwise the life claims for ALH84001, not proven but never conclusively refuted, may just fade away.

  9. EXOBIOLOGY

    Finding Life's Limits

    1. Gretchen Vogel

    Washington, D.C.—Everything in life is getting smaller, it seems—computers, telephones, camcorders. Even cells seem to be shrinking, with reports of tiny, ancient microbes on Mars (see main text) and claims of so-called “nannobacteria” on Earth—putative cells occupying only 0.01% of the volume of a typical Escherichia coli bacterium. But unlike computer chips that shrink as they are reinvented with new materials, all life on Earth seems to use the same standard parts. Those components—DNA, RNA, and the ribosomes that help translate the genetic code into proteins—have a fixed size.

    That puts a limit on how small a self-replicating cell can be, according to a group of experts from physics, biochemistry, ecology, and microbiology who gathered at the National Academy of Sciences last month* to discuss the limits of life at the tiniest level. Assuming that a cell needs DNA and ribosomes to make its proteins, a spherical cell much smaller than about 200 nanometers (nm) in diameter—about one-tenth the diameter of an E. coli—“is not compatible with life as we know it,” says cell biologist Christian de Duve of the Christian de Duve Institute of Cellular Pathology in Brussels and The Rockefeller University in New York City. Only radical new biology could relax these size limits, he and his colleagues concluded.

    One of the claims prompting this exploration of the limits of small came from geologist Robert Folk of the University of Texas, Austin. He reported finding tiny bacteria-like objects as small as 30 nm across—which he calls “nannobacteria”—in everything from tapwater to tooth enamel (Science, 20 June 1997, p. 1777). Then there were the putative martian microbes, as small as 20 nm by 100 nm, and a Finnish report of bacteria smaller than 100 nm—their term is “nanobacteria”—cultured from human and cattle blood.

    Such dimensions do not leave enough room for the basic set of genes needed for life, said workshop participants. By identifying the genes shared by the simplest organisms, researchers have recently concluded that at least 250 or so are required for survival as a self-replicating cell. That's about half the number present in the smallest known bacterial genome. (Viruses, which can't replicate on their own, can be smaller.) The DNA needed for 250 genes would just about fill a sphere 100 nm in diameter; add enough room for ribosomes (each of which is 20 nm in diameter), for the DNA to unwind for replication, and for chemical reactions of the cell, and 200 nm is needed. At 50 nm in diameter, biochemist Michael Adams of the University of Georgia, Athens, calculated, a spherical cell would have room for two ribosomes, 260 proteins, and only eight genes' worth of DNA.
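
    This size limit is, at bottom, sphere-volume arithmetic. As a rough illustration, the short Python sketch below plugs the diameters quoted in this article into the volume formula V = (4/3)πr³; the function and variable names are ours, added for illustration, and the percentages are simple ratios of the quoted figures, not numbers reported from the workshop.

    ```python
    from math import pi

    def sphere_volume_nm3(diameter_nm: float) -> float:
        """Volume in cubic nanometers of a sphere with the given diameter (nm)."""
        radius = diameter_nm / 2
        return (4.0 / 3.0) * pi * radius ** 3

    # Diameters quoted in the article, in nanometers.
    minimal_cell = sphere_volume_nm3(200)    # the workshop's rough lower limit for a viable cell
    dna_250_genes = sphere_volume_nm3(100)   # DNA for ~250 genes "just about fills" this sphere
    ribosome = sphere_volume_nm3(20)         # one ribosome
    tiny_cell = sphere_volume_nm3(50)        # Adams's hypothetical 50-nm cell

    print(f"200-nm cell volume:     {minimal_cell:.2e} nm^3")
    print(f"DNA for ~250 genes:     {dna_250_genes:.2e} nm^3 "
          f"(~{dna_250_genes / minimal_cell:.0%} of the 200-nm cell)")
    print(f"50-nm cell volume:      {tiny_cell:.2e} nm^3 "
          f"(1/{minimal_cell / tiny_cell:.0f} of the 200-nm cell)")
    print(f"Two ribosomes alone use ~{2 * ribosome / tiny_cell:.0%} of the 50-nm cell")
    ```

    Run as written, the sketch shows that the DNA for a minimal gene set already fills roughly an eighth of the 200-nanometer sphere, while a 50-nanometer cell has only about one sixty-fourth of that volume, which is why Adams's hypothetical cell has room for little more than two ribosomes and a few genes' worth of DNA.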

    Even with 250 genes, cells would have to be parasites, relying on ready-made nutrients from their hosts. “By the time you get to an organism of this size,” said biochemist Peter Moore of Yale University, “you are either an obligate parasite or you have a standing order at [the biological supply house] SIGMA.” If the putative fossils from Mars were free-living, they “would have to reflect a biochemistry unlike any we know,” he said.

    But biochemist and physician Olavi Kajander of the University of Kuopio in Finland says he has organisms that prove the theoretical limits wrong. He says that although the nanobacteria he has cultured from blood and urine are mostly between 200 and 500 nm, he can get viable cultures after passing them through a 100-nm filter. He also claims to see spheres as small as 50 nm in diameter under the electron microscope.

    Kajander argues that his smallest particles, if not viable alone, might join together to make a reproducing organism. He adds that perhaps they can get along with less, growing so slowly that they need few ribosomes, for example. As for Folk, he is still finding 40-nm, bean-shaped “cells” in electron micrographs that he believes are biological, although they may be “some sort of new life-form.”

    Other scientists at the meeting were doubtful. “It's really easy to get fooled,” says microbiologist Don Button of the University of Alaska, Fairbanks. Cells larger than 100 nm might have squeezed through Kajander's filter, he says, and the preparation process for electron microscopy sometimes shrinks cells. Paleontologist Andrew Knoll of Harvard University agrees. “I think everyone pretty much agreed that … nothing much smaller than 200 nm is likely to be viable.”

    Still, researchers admit that unknown kinds of life-forms might not face the same limits. Before DNA, ribosomes, and proteins, there must have been simpler life-forms. With a single molecule capable of both replicating itself and catalyzing reactions—such as RNA, for example—a cell would need far less space, several scientists told the meeting. A sphere 50 nm across could comfortably contain the 50 or so catalytic “genes” necessary for self-replication and basic metabolism, with plenty of room left over for chemical reactions, chemist Steve Benner of the University of Florida, Gainesville, said.

    If such primitive life-forms once existed here, they were outcompeted by the more complex forms now populating Earth, but they might exist elsewhere. “We really have no assurance that our biology exhausts the possibilities for life in the universe,” says Knoll. But until a persuasive sample of such life is discovered, earthly and extraterrestrial nanocandidates will face tough scientific scrutiny.

    * Workshop on Size Limits of Very Small Microorganisms, 22–23 October.

  10. SWEDEN

    Buffeted Community Braces for More Change

    1. Annika Nilsson
    1. Joanna Rose*
    1. Nilsson and Rose are science writers in Stockholm, Sweden.

    In the midst of a fierce debate over who controls Swedish science, a new review advocating increased support for basic research gets a mixed report card

    Stockholm—Swedish science is in a state of upheaval. The country's research is renowned for its quality and receives a larger slice of the national income than in any other European country—3.7% of gross domestic product, twice the European Union average. But recent funding changes have brought unrest among researchers to the boil. Sweden's faltering economy during the 1990s forced the government to cut back sharply on state funding of basic research and environmental science, after having created independent foundations focusing on applied research. The result: a marked shift in funding from basic to applied research, and steady growth in the foundations' influence over science policy. All this has prompted a heated debate over who controls research.

    To untangle this mess, the government last year set up a committee, mostly made up of parliamentarians, to carry out a thorough appraisal of the whole structure of Swedish research funding and its role in furthering the competitiveness of industry. Its conclusions, announced on 6 November, came as a big surprise to many: It recommended a complete change of emphasis, back toward support for basic science. “State-financed research should be steered by the priorities set by the scientific community,” says physicist Stig Hagström, committee chair and head of the National Agency for Higher Education. The committee said basic research is fundamental to a knowledge-based society, and that it has the best long-term potential to support economic development.

    The shift in priorities would be accompanied by a radical overhaul of the structure of science agencies. The committee suggests that the existing basic research councils, which fund academic researchers, should be scrapped along with a large number of government agencies that fund mission-oriented research. Instead, both basic and applied science should be supported by four new research councils under the Ministry of Education, covering humanities and social sciences, medicine, natural sciences, and technology. The committee also suggests creating an agency to promote interdisciplinary science and cooperation among the new councils. The independent foundations would remain in place, but Hagström says they should be brought under stronger political control. “If the money cannot come to politicians, the politicians will come to the money,” he says.

    The government is now canvassing opinions on the report before deciding whether to implement its recommendations. Reaction so far has been mixed. Many researchers have welcomed the recommendations, viewing them as promising a return to the halcyon days of unfettered support for basic research. “The report demonstrates that the politicians are now taking a step back and leaving the decisions to the scientific community,” says Gunnar Öquist, secretary-general of the Swedish Natural Science Research Council, the largest of the existing councils. But the status quo also has strong support. “The mission-oriented government agencies have played a very important role in shaping new fields of research that have later become strategic for Swedish industry, such as information technology and materials science. The report names no valid reasons to incorporate them into the research councils,” says Ulf Sandström, head of the Research Policy Group at Linköping University.

    The problems for Swedish science began in the early 1990s. The economy was in recession and, in 1991, the Social Democratic government was ousted by a coalition of center-right and liberal parties. For ideological reasons the new administration dissolved the “wage-earner” funds—pools of money derived from a tax on industrial profits that were intended to balance industrial relations by giving trade unions more economic power—and in 1994 it used the money to create a number of independent science foundations. The largest of these, the Foundation for Strategic Research (SSF), started life with $800 million in its coffers, a sum that has since grown with the stock market boom.

    The SSF began distributing grants in 1995 in areas it defined as strategic for Swedish industry, such as bioscience, information technology, materials science, forest science, and energy research. The statutes of the foundations were written to minimize political control over their funding. With new money flowing from the foundations, the government cut the research councils' 1997 budgets by 14%. The foundations intend to spend $200 million a year over the next 10 years, while the research councils had $220 million at their disposal in 1998. This shift in funding power has given the foundations considerable influence over research priorities at the universities.

    Basic researchers, particularly those who could not meet the industrial relevance criteria set by the foundations, have welcomed the Hagström committee's call for a return to basic research. But that support has not been unqualified. Jan S. Nilsson, president of the Royal Swedish Academy of Sciences, thinks the proposed funding structure should have been more adventurous: “The new councils do not have a natural connection with problems that arise in society, which are often interdisciplinary in nature,” he says. Arne Jernelöv, secretary-general of the Swedish Council for Planning and Coordination of Research, agrees that the conventional research council structure has difficulty accommodating some modern fields such as environmental research. And, he says, the proposed structure may also make it more difficult to transfer new scientific developments to industry: “The mission-oriented agencies have created a natural platform for dialogue between the researchers and those needing the new knowledge.”

    Harsher criticisms of the proposed council structure have come from Linköping's Sandström. He argues that putting scientists in charge of research policy would favor those who are already in the system. “‘Old boys’ get financed much easier,” he says, “while other groups such as women, younger scientists, and researchers in emerging fields face much tougher hurdles to winning funds.”

    Social scientists are also unhappy because the report suggests cutting back their funding to support research in technology. Anders Jeffner, secretary-general of the Swedish Council for Research in the Humanities and Social Sciences, says that he personally does not agree with the logic of pitting social sciences and technology against each other. “If you increase technological knowledge, there also has to be an increase in knowledge about how to handle technology,” he says. Boel Flodgren, rector of Lund University, says such a choice is regrettable, but increasing efforts in natural sciences and technology will be vital for Sweden, with its reliance on heavy industry. “We have realized that we are lagging behind in using and generating new knowledge on our own. We can't live off giving out Nobel Prizes,” she says.

    As for the independent foundations, the Hagström committee's options were limited: The government cannot close them down because they are protected by statute. But the committee suggests that their political independence be sharply reduced. The report advocates replacing the current boards—which are made up of a mix of academics, industrialists, and politicians—with boards that consist entirely of parliamentarians. That idea has drawn mixed reviews from the scientific community. A number of researchers who spoke to Science were guardedly supportive, as long as projects are also peer reviewed. Balancing resources among different fields should be the responsibility of politicians, says zoologist Dan-E. Nilsson of Lund University, the driving force behind an informal council of professors dedicated to preserving Swedish basic science.

    The Hagström report is now being sent to interested parties for several months of consultation. If the initial reactions are anything to go by, the newly installed Social Democratic minister of education, Thomas Östros, will have plenty of opinions to work with when he draws up his plans this coming spring. The government hopes to put any changes in the structure of research funding into effect by January 2001.

  11. SCIENCE FUNDING

    Brazil's Budget Crunch Crushes Science

    1. Cássio Leite Vieira*
    1. Cássio Leite Vieira is a science writer in Rio de Janeiro.

    Two cuts in the 1998 science budget, followed last week by an announced cut in the 1999 budget, have brought many science projects to a halt

    Rio de Janeiro—It's more than 90°F outside. But no one turns on the air conditioner in the stuffy room in the Brazilian Center for Physics Research (CBPF) here, where representatives of Brazil's science establishment are meeting to discuss how to save the nation's scientific institutions from collapse in the midst of Brazil's economic downturn. With a budget deficit expected to reach 900,000 reais ($750,000) in January, the research center can't afford to cool its offices or conference rooms. “Our fear is that we will have to pay our bills out of next year's budget allocation,” says João dos Anjos, CBPF's assistant director.

    The physics center is not alone. Cutbacks in electrical use are common in cash-starved Brazilian universities and research centers these days. To meet the demands of the International Monetary Fund and other foreign lenders, who last week approved a $41.5 billion loan package, the free-spending Brazilian government cut its 1998 budget this fall, and it has agreed to slash its 1999 budget by $7.3 billion. As a result, on 10 November, the government announced that next year (the fiscal year begins in January), the science ministry will receive $619.4 million—18.7% less than it had requested.

    The spiral began in earnest on 8 September, when the treasury department cut $160 million from the science ministry's already tight 1998 budget of $747 million. A second decree, issued on 30 October, trimmed another 5%. These cuts have pushed many universities and laboratories to the brink of insolvency, and with next year's budget now set well below the original 1998 level, little relief is in sight. The Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), the country's principal science funding agency, has distributed no new money for research in 1998. The only research funds available have come from the science ministry and from now-depleted state agencies. Only wealthy São Paulo still funds research, and the federal science ministry's direct support is limited to $50 million for multiyear, multisite projects.

    Several ministers have rallied congressional support to minimize cuts in their 1999 budgets, but the science minister has not been among the lucky ones. “It's at times like this that science loses out, because we have no lobby,” says Otávio Velho, anthropology professor at the National Museum of the Federal University of Rio de Janeiro.

    Particularly hard hit is the CNPq. The science ministry slashed CNPq's 1998 budget from $479 million to $361 million—a 25% drop. It has been 2 months since the CNPq paid its bills for electricity, water, cleaning and security services, and rent on its headquarters in Brasília, Brazil's capital.

    The agency oversees 10 scientific institutes, and no program has been spared the knife. The National Observatory expects to end the year with a debt of $210,000, including unpaid utility bills. Brazil's observation time on the La Silla telescope in Chile is scheduled for December, and astronomers are planning to pay travel and lodging out of their own pockets. Failure to show up could break Brazil's agreement with the European Southern Observatory, which administers the telescope, and cost Brazil the right to take part in the project. The observatory lacks the money even to pay for the gasoline needed to travel by car to a local telescope.


    The situation is not much better in many of Brazil's universities. The Federal University of Rio de Janeiro, the second largest public university in the country, cannot pay its telephone and electricity bills. And other public universities report similar straits.

    Especially galling to many scientists was a directive, issued on 16 October by CNPq's president, José Galizia Tundisi, freezing funds for most new research and postgraduate fellowships and requiring the return of airline tickets that had already been issued. The agency also canceled funding for about 30 scientific meetings planned for the coming months. These measures drew sharp protests from the scientific community, prompting the science ministry to issue a statement on 5 November to try to calm things down. “We guarantee the same number of fellowships in 1999 as we had this year, and grant payments will continue to be made on time,” promised Lindolpho de Carvalho Dias, interim minister for science and technology. (Science minister Israel Vargas was out of the country.)

    While researchers throughout most of Brazil are tightening their belts, they are casting envious glances at their colleagues in the state of São Paulo. The richest state in Brazil, São Paulo gives 1% of its state tax receipts to the Foundation for Support of Research of the State of São Paulo (known as Fapesp). As a result, from 1998 to 1999, Fapesp's budget will increase by $16.8 million to about $295 million—the equivalent of 45% of the federal science ministry's entire 1999 budget. This has led some researchers to argue for a sharp reduction in São Paulo's share of federal postgraduate and research fellowship funds. But Carvalho Dias promised last week that São Paulo's fellowship funds will not be raided.

    Carvalho Dias also offered some solace to scientists working in un-air-conditioned offices. Unpaid utility bills at the nation's premier research institutes and their libraries will be covered by the end of the year, he said. If so, the group that meets at CBPF each Tuesday to discuss the financial crisis afflicting the country's science may at least get some relief from Brazil's summer heat.

  12. MICROBIOLOGY

    Training a Molecular Gun on Killer E. coli

    1. Richard A. Lovett*
    1. Richard A. Lovett is a writer in Portland, Oregon.

    Scientists are hoping to add a vaccine to a thin arsenal against O157:H7, a bacterium that kills scores of people every year in the United States

    Researchers at the National Institutes of Health (NIH) are closing in on the development of the first vaccine against Escherichia coli O157:H7, a pathogenic version of the common gut bacterium. Tests of an experimental vaccine showed promise in adults earlier this year, and the researchers are about to apply for approval to test it in young children. If the trial gets the go-ahead and the preparation passes further tests, experts say, a vaccine for people and one for livestock could be available early next century.

    First identified in 1982, O157:H7 made headlines 5 years ago when contaminated hamburger meat sickened more than 500 people, triggering symptoms such as bloody diarrhea and kidney failure. Since then, the bacterium has turned up sporadically in everything from raw milk and apple juice to daikon radishes and drinking water. Some 20,000 cases occur each year in the United States, resulting in about 250 deaths; young children are the main victims. Moreover, O157:H7 shrugs off antibiotics with ease.

    To tackle this daunting public health threat, a team led by NIH immunologist Shousun Szu is combining cutting-edge molecular biology with a method that dates back to Louis Pasteur. They homed in on O-specific polysaccharide, a molecule that studs the bacterium's cell membrane “like hair on the scalp,” Szu says. Its structure is unique to O157:H7, she says, and thus serves as a good vaccine target.

    The polysaccharide alone would make a poor vaccine, partly because it is too small for the body's immune system to notice. To solve this problem, Szu's group followed an approach developed 20 years ago by NIH pediatricians John Robbins and Rachel Schneerson. Szu bonded the polysaccharide to a carrier protein, which flags it for the immune system. In the future, Szu says, it may be possible to use a carrier similar to the O157:H7 toxin that triggers hemolytic uremia syndrome; this would create a powerful one-two punch against the organism.

    The team tested the conjugate vaccine in adults. Within 4 weeks, all 87 volunteers had substantial blood levels of antibodies to the O157:H7 polysaccharide, with no observed side effects. More importantly, the subjects' blood serum contained enough antibodies to kill O157:H7 bacteria, even after being diluted at least 1000-fold. For the next step, Szu's group is preparing to submit to an NIH safety panel a protocol for a similar study in 60 children aged 2 to 4.

    While clinical trials press ahead, Szu's team is hoping to design and test an O157:H7 vaccine in cattle, up to 2% of which carry the bacterium in the United States. In cattle, however, O157:H7 doesn't attach to the gut lining as it does in people, where it is easily reached by antibodies. It's unclear whether cow antibodies can reach the free-swimming bacteria in the intestines and stomach, says Mike Doyle, director of the University of Georgia's Center for Food Safety and Quality Enhancement in Athens. He's pursuing an alternative approach that involves feeding animals several harmless E. coli varieties believed to inhibit the growth of O157:H7. On another tack, Cornell researchers have found that in animals fed hay rather than grain for a few days before slaughter, gut conditions favor nonlethal E. coli (Science, 11 September, pp. 1578 and 1666). “The more control points we can develop, the better,” Doyle says.

    If researchers manage to create a working livestock vaccine, industry officials say they are keen to give it a try. “Assuming that the vaccination program would be no more expensive than some of the vaccinations they give cattle today, I believe people would use it,” says David Theno, a vice president at the fast-food chain Jack in the Box. Restaurant groups, he adds, may insist that their meat come from vaccinated animals.

    Scientists are trying not to raise public expectations too high, however. “No single intervention is going to get rid of this problem,” says Phillip Tarr, a pediatrician at Seattle's Children's Hospital and Regional Medical Center who has treated hundreds of O157:H7 infections. “None are magic bullets. It's not going to be easy to eradicate: We don't have a sterile food supply and never will.”

  13. PHYSICS

    A First Step Toward Wiring Up a Quantum Computer

    1. James Glanz

    By delivering precise laser pulses to two trapped beryllium ions, researchers have soldered a connection for quantum information

    The technology of quantum computers has not quite gotten routine enough to obey Moore's Law—the much-cited principle that the number of transistors packed onto an ordinary or “classical” computer chip doubles every 18 months. But after 3 years, a team at the National Institute of Standards and Technology (NIST) has achieved an initial doubling, going from a single cold, trapped ion functioning as a “qubit”—the active element of a quantum computer—to two ions.

    That's a bigger leap than it sounds. The math works differently in the quantum world: Just 40 qubits would make a computer that is more powerful, in some respects, than the very largest classical machines. Moreover, the step from one ion qubit to two is a critical one because it shows that qubits can be wired together through the ephemeral quantum-mechanical connection known as entanglement. The one-ion work, says Raymond Laflamme of Los Alamos National Laboratory in New Mexico, showed “that quantum computation is not crazy.” But the new paper by NIST's Quentin Turchette and others in the 26 October issue of Physical Review Letters “is definitely a major step in the direction of building a quantum computer with an ion trap.”

    Figure: Altered states. By stimulating an accordionlike motion, a laser pulse “entangles” the possible spin states (arrows) of two trapped ions (top) into two independent relationships (blue and red). (Source: Quentin Turchette)

    Although the quantum microworld promises undreamed-of power for computing, it also seems destined to tie language in knots. In the classical world, an object can be in only one position or orientation at a time, but an atom's position or quantum-mechanical “spin” can have several different values at once. Quantum mechanics allows all of those potential states to persist until the system is measured or disturbed, when it collapses into just one of the possibilities.

    If an “up” spin represents 0 and a “down” spin 1, a single qubit can have two values at the same time. Two qubits, in turn, can store and compute with four different combinations of values, and the power of a quantum computer “rises exponentially in the number of qubits that it processes,” says Gerard Milburn of the University of Queensland in Australia.
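
    To make that scaling concrete, here is the standard textbook notation for a quantum register (this formulation is not taken from the NIST paper): an n-qubit state carries one complex amplitude for every possible string of n bits, so writing it down classically takes 2^n numbers.

    \[
      |\psi\rangle \;=\; \sum_{x \in \{0,1\}^n} c_x \, |x\rangle ,
      \qquad \sum_{x} |c_x|^2 = 1
    \]

    For n = 2 the sum runs over |00⟩, |01⟩, |10⟩, and |11⟩, the four combinations of values mentioned above; for the 40 qubits cited earlier it runs over roughly 10^12 terms.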

    To perform a computation, the states of separate qubits have to be linked, or entangled. Entangled quantum states have values that depend on each other; an operation on one affects the other. In the past, entangling atoms or photons has been a hit-or-miss operation, although one quantum computing scheme—based on manipulating the spins of atomic nuclei in a liquid with pulses of radio-frequency (RF) energy—has already entangled the equivalent of several qubits to perform calculations (Science, 17 January 1997, p. 307).

    Many researchers think thermal confusion will probably stop the liquid computer from ever going beyond seven or eight qubits, however. The best bet to reach 30 or 40 qubits first, most researchers agree, is a scheme whipped up in 1995 by Ignacio Cirac and Peter Zoller of the Institute for Theoretical Physics at the University of Innsbruck in Austria. The scheme relies on a row of chilled ions in a trap made of a combination of RF and static electric fields. The spins of the ions provide the qubits, and the row of ions can be nudged with lasers so that they rock back and forth like the sound waves in a flute, at discrete, quantized frequencies. Cirac and Zoller proposed that the motion could be used as a kind of wire to pass quantum information between ions, entangling them.

    A laser pulse of just the right frequency, Cirac and Zoller suggested, would set up resonances between the internal states and the rocking—in essence, encoding the rocking with the ions' internal quantum states to create links between the internal states of the ions. “So if you want some particular entangled state at some time, you can apply the appropriate laser pulses,” says Turchette.

    But transmitting quantum information along this “wire” requires a dexterity akin to walking a tightrope while balancing a teacup on one's nose: Slight perturbations from the outside world, called “decoherence,” can stop the show cold. Three years ago, for example, Chris Monroe and David Wineland of NIST swapped quantum information between the internal states of a single ion and one of its rocking or “motional” states. This state was too susceptible to outside disturbances to serve as a quantum wire, however.

    In the new work, Turchette teamed up with Monroe, Wineland, and five others to exploit a slightly different motional state, which arises when two or more ions share the trap. They enlisted the motion—a kind of stretching and shrinking—to entangle the spins of two beryllium ions into two distinct relationships that existed simultaneously until the spins were measured. In one, the spin of ion 1 was up and that of ion 2 was down. In the other, ion 1 was down and ion 2 was up.
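
    In the same notation, the two coexisting relationships described here correspond to a superposition of the two spin arrangements; the article does not specify the relative phase, so the sign below is left open.

    \[
      |\psi\rangle \;=\; \tfrac{1}{\sqrt{2}} \bigl( |{\uparrow}\rangle_1 |{\downarrow}\rangle_2 \;\pm\; |{\downarrow}\rangle_1 |{\uparrow}\rangle_2 \bigr)
    \]

    Measuring either ion collapses the pair into one arrangement or the other, so the result for ion 1 fixes the result for ion 2; that is the sense in which entangled values "depend on each other."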

    Getting the ions entangled still required some clever tricks, as the team has not found a convenient way to focus separate laser beams on ions just micrometers apart. Instead, the team relied on spatial variations in the strength of the trap's RF waves to tune the interaction of the individual ions with the laser. Wherever the RF was more powerful, the ion jiggled more violently, reducing its interaction with the light. A separate electric field slid the ions slightly off-center in the trap to vary their interactions with the RF field so that the laser could put them into different spin states.

    Finally, the team made a series of measurements to verify that the two relationships of spin states existed simultaneously. The NIST demonstration is “the first time anyone's been able to make ‘entanglement on demand,’” says Richard Hughes of Los Alamos.

    Next, the NIST team will try to address ions individually with lasers, rather than relying on the RF trick, which should let them entangle larger groups of ions. “There are a lot of technical things to deal with here,” says Turchette. But theorists have been decidedly cheered by the first step in what could become a quantum Moore's Law. “The step from one [ion] to two has been really hard,” says Zoller. “But once they have it, going from two or three up to five or 10 should be a much easier task.”

  14. AFTER THE GENOME IV MEETING

    New Ways to Probe the Molecules of Life

    1. Evelyn Strauss*
    1. Evelyn Strauss is a writer in Berkeley, California.

    Jackson Hole, Wyoming—Almost 200 years after Lewis and Clark first glimpsed the Grand Tetons, a posse of about 80 scientists gathered here from 10 to 14 October for an exploration of their own. At the annual “After the Genome” meeting, they discussed how to get from genomic information to an understanding of biology. Highlights included powerful computer programs for modeling human diseases, new techniques for protein analysis, and stabilizing coats for individual protein molecules.

    Making Coats for Molecules

    For humans, Halloween is over, and the witches, Monicas, and Bill Clintons have taken home their prizes for best costume and packed their gear away until next year. But a team at the biotech start-up company Alnis, in San Leandro, California, has devised ingenious costumes for proteins and other molecules that they could wear all year long.

    Alnis's scientific founder, David Soane, and his colleagues have found a way of trapping individual molecules inside polymer coatings a single molecule thick. Although the method is in its infancy, researchers can envision a wealth of applications for it. “This is a clever idea and the method has real scope,” says Alexis Bell, a chemical engineer at the University of California, Berkeley, and an adviser to the company. The coatings could stabilize nature's biological catalysts, the enzymes, enabling them to tolerate organic solvents and high temperatures, and protect therapeutic proteins, such as insulin, from digestion so that they could be taken by mouth.

    Other investigators are also devising stabilizing coats for proteins, but the Alnis method boasts the advantage that it can be adapted to a wide variety of biological molecules and solvent systems. What's more, because the polymer coats retain the impression of the target molecule's shape even after it is removed, they could be used for molecular detection both in the body and in biological samples, such as blood or biopsy materials. “It's potentially a very exciting technology, particularly for detecting small molecules and perhaps even proteins,” says Frances Arnold, a chemical engineer at the California Institute of Technology in Pasadena.

    Soane's method is a twist on molecular imprinting, a technique that has been around for several decades. In molecular imprinting, the target molecules are embedded in a material that polymerizes around them to produce a three-dimensional block bearing the targets' impressions. The block can be used for a variety of applications. By breaking it into chunks, for example, researchers can generate a chromatographic material that grabs onto the target molecules, allowing their isolation from complex mixtures.

    Instead of forming a polymer block, Soane generates a molecular glove that perfectly fits a protein or other molecule. He accomplishes this by exposing the molecule to custom-designed, polymerizable building blocks with distinctive heads and tails. The heads, for example, may carry positive or negative charges that allow them to bind to oppositely charged amino acid residues in the protein, while the tails, which are hydrophobic and tend to congregate with each other, are designed to link together.

    Once the heads of these building blocks have bound to the target molecule, Soane uses treatments such as ultraviolet light to link them into a shell, dubbed a synthetic polymer complement (SPC), around the protein. It's also possible to construct the SPC coat in such a way that an enzyme protein retains its catalytic activity. One way of doing this is to protect the enzyme's catalytic site with a molecule, such as one of the enzyme's own substrates, that binds to the site and can be removed once polymerization is accomplished. As a “proof of principle” test, Soane has shown that an SPC coat made the enzyme chymotrypsin far more durable at high temperatures in an organic solvent while still allowing it to be active.

    Soane says that the SPC covering can also be released from its target molecule, although he won't say how because the technique is proprietary. If the empty shells then encounter the molecule again, they can bind it. He's shown, for example, that empty SPCs can recognize a small molecule called esculin that contains a sugar. Eventually, the chemical molds might be used for molecular detection—in effect serving as artificial antibodies that are more stable, cheaper, and quicker to make than the real thing. For example, SPCs linked to tracers that can be detected by ultrasound might help with early, noninvasive diagnosis of cancer.

    “We have the beachhead successes for the recognition and binding aspects,” Soane says. Still, he adds, “it will be a long time between now and when a diagnostic or therapeutic discovery is made.” But costumes as good as these seem likely to win a prize or two eventually, for utility if not for beauty.

    Chips for Protein Analysis

    For the past several years, the fluorescent glow of DNA chips has signaled a revolution in researchers' ability to detect nucleic acids and monitor gene activity in living cells. But developing ways to keep track of the many different proteins in a cell has been much more difficult. Although techniques like the polymerase chain reaction can amplify scarce DNA into detectable amounts, the tiny concentrations of proteins in cell extracts, blood, and other biological samples can't be boosted so easily. But a new tool might help with protein analysis: the ProteinChip technology developed by scientists at Ciphergen Biosystems Inc. of Palo Alto, California.

    Because the Ciphergen method combines a tiny chip bearing a “sticky” surface with the sensitive analytic capabilities of mass spectrometry, it doesn't require an amplification step. Consequently, it is not only very fast but can be used with small samples—microliters instead of the milliliters of conventional methods. “They're tackling one of the core problems of analyzing proteins: looking at proteins that are present in very low abundance,” says Jeff Wiseman of SmithKline Beecham Inc. in King of Prussia, Pennsylvania. The method should allow scientists to discover new proteins, purify them, measure their quantities, and discern their chemical and biological properties, all with one chip.

    The technology is the brainchild of William Hutchens of Ciphergen and his colleagues. The chip, which is about a millimeter across, holds some kind of molecular bait—antibodies, carbohydrates, receptors, or any of a wide variety of smaller synthetic chemicals—that can trap many different proteins at once. To perform an analysis, a researcher applies a sample to a chip, lets the proteins adhere to it, and then washes away anything that doesn't stick.

    In the next step, a laser zaps the chip surface with just enough energy to break noncovalent bonds and release the proteins. An electric field shoots these proteins to the detector of a mass spectrometer, which reads out their molecular weights. (The company calls the process Surface-Enhanced Laser Desorption/Ionization or SELDI.)

    Knowledge of the chemical nature of the molecular bait combined with the molecular weights of the proteins permits one particularly useful analysis: producing fingerprints of the protein composition of samples containing hundreds or thousands of proteins. By comparing closely related samples—blood serum from a healthy person and from someone with a disease, for example, or extracts of dividing and nondividing cells—scientists can detect changes in the amounts and types of proteins.
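
    As a rough sketch of the kind of comparison described above (not Ciphergen's actual software; every function name, tolerance, and cutoff below is invented for illustration), one can pair up peaks from two spectra by molecular weight and flag those whose intensities differ sharply:

    # Hypothetical sketch: compare two protein "fingerprints" (lists of
    # (molecular_weight, intensity) peaks) and report proteins whose abundance
    # changes between samples. The peak-matching tolerance and fold-change
    # cutoff are arbitrary illustration values, not Ciphergen parameters.

    def match_peaks(spectrum_a, spectrum_b, tolerance_da=2.0):
        """Pair peaks from the two spectra whose masses agree within tolerance_da."""
        pairs = []
        for mass_a, intensity_a in spectrum_a:
            # find the closest peak in spectrum_b by molecular weight
            closest = min(spectrum_b, key=lambda p: abs(p[0] - mass_a))
            if abs(closest[0] - mass_a) <= tolerance_da:
                pairs.append((mass_a, intensity_a, closest[1]))
        return pairs

    def differential_peaks(spectrum_a, spectrum_b, min_fold_change=2.0):
        """Return matched peaks whose intensity ratio exceeds min_fold_change."""
        flagged = []
        for mass, int_a, int_b in match_peaks(spectrum_a, spectrum_b):
            ratio = max(int_a, int_b) / max(min(int_a, int_b), 1e-9)
            if ratio >= min_fold_change:
                flagged.append((mass, int_a, int_b, ratio))
        return flagged

    # Toy "fingerprints" (mass in daltons, arbitrary intensity units)
    dividing = [(14300.0, 120.0), (22050.0, 40.0), (36800.0, 75.0)]
    quiescent = [(14301.5, 115.0), (22049.0, 190.0), (36799.0, 8.0)]

    for mass, a, b, ratio in differential_peaks(dividing, quiescent):
        print(f"~{mass:.0f} Da: {a:.0f} vs {b:.0f} (fold change {ratio:.1f})")

    In practice the instrument's own software handles peak detection, calibration, and normalization; the point of the sketch is only the final differential comparison between two samples.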

    According to John Quackenbush, a molecular biologist at The Institute for Genomic Research in Rockville, Maryland, such changes can provide valuable clues to which chores the proteins are performing. In one set of experiments, for example, he compared the protein content of dividing and nondividing cells of the bacterium Haemophilus influenzae. The analysis picked up 600 of the 1740 proteins thought to be encoded in the H. influenzae genome, and for about 30 to 60 of them, the amount varied under the two conditions. “It's pretty extraordinary to be able to sit down and, in the course of a few hours, get information about 600 distinct proteins,” Quackenbush says.

    Once an interesting protein is identified—say, one that is made in large amounts in dividing cells but not in quiescent cells—the chips can be used to characterize, isolate, and even sequence it. By systematically testing different combinations of wash conditions and chip materials, researchers can use SELDI to explore the properties of a protein, such as how strongly it adheres to a surface with positive charge and whether it binds a particular metal ion. Furthermore, Ciphergen's computer programs can identify the combination of surface and wash conditions under which the protein of interest has the fewest neighbors, opening the way to purifying the protein.

    Scientists can sequence the protein on the chip by exposing it to enzymes that release its peptides one by one, so that they can be analyzed by mass spectrometry. Conventionally, Hutchens says, different tools had to be used for each step from discovering a protein to isolating and characterizing it. But with SELDI, he adds, “you use exactly the same chip [for all three steps].”

    Computer Models for Drug Testing

    Companies developing new drugs usually face a great leap into the unknown when they move from test tube and animal studies into clinical trials. Given that it takes at least $20 million just to get a drug into human efficacy tests, failures can be expensive. One critical choice comes early: which of the many disease-related molecules should be targeted. “If you don't make the right choice of drug target at the beginning, you can really have a big mistake at the end,” says Robert Dinerstein of Hoechst Marion Roussel Inc. in Bridgewater, New Jersey. Now scientists at Entelos Inc. in Menlo Park, California, are trying to reduce the guesswork by simulating diseases—and the molecular interactions that underlie them—in a computer.

    At the meeting, Tom Paterson of Entelos reported that the company had so far built models for three common diseases: asthma, obesity, and HIV/AIDS. Each one seeks to combine what's known about the molecular and cellular changes leading to the disease with the symptoms it causes. The Entelos system “links the basic processes to their consequences in the entire system,” says Dinerstein, who has used the asthma program in his work on respiratory diseases. “That hasn't been done before.”

    Using these programs, researchers can conduct virtual experiments to pretest drugs, modeling what happens when a drug alters the activities of a specific molecule. So far the models have helped pharmaceutical companies develop new hypotheses about mechanisms of disease and evaluate existing and novel therapeutic approaches.

    To construct the models, the Entelos team formulates mathematically based hypotheses about how all the components in the disease system interact. With asthma, for example, they incorporate what is known about the role of inflammatory cells and the factors they make and respond to in constricting the respiratory airways. The researchers then tune the math and the relationships between the different parts of the model until it accurately reflects the way the disease behaves. The simulation can then show what happens to any one component of the system in response to a change in another part of it—caused, say, by administering a drug or exposing the airways to allergens. “There's nothing quite this comprehensive,” says one of Entelos's scientific advisers, bioengineer Douglas Lauffenburger of the Massachusetts Institute of Technology.
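
    As a deliberately crude illustration of this general approach (not the Entelos software, whose equations are proprietary; every variable and constant below is invented), a mechanistic model can be written as a small set of coupled rate equations, stepped forward in time, and then perturbed to run a "virtual experiment":

    # Toy illustration of a mechanistic disease model: two coupled variables,
    # "inflammation" and "airway resistance", advanced with simple Euler steps.
    # All equations and constants are invented for illustration only.

    def simulate(drug_block=0.0, allergen=1.0, steps=200, dt=0.1):
        inflammation = 0.1   # arbitrary baseline level of inflammatory mediators
        resistance = 1.0     # 1.0 = normal airway resistance
        for _ in range(steps):
            # allergen exposure drives inflammation; the hypothetical drug blocks part of that input
            d_inflammation = allergen * (1.0 - drug_block) - 0.5 * inflammation
            # inflammation raises airway resistance, which otherwise relaxes back toward normal
            d_resistance = 0.8 * inflammation - 0.3 * (resistance - 1.0)
            inflammation += dt * d_inflammation
            resistance += dt * d_resistance
        return resistance

    # A "virtual experiment": compare simulated airway resistance with and without the drug.
    print("untreated        :", round(simulate(drug_block=0.0), 2))
    print("drug (70% block) :", round(simulate(drug_block=0.7), 2))

    Blocking the inflammatory input lowers the simulated airway resistance, the same kind of system-level readout described for the real models, though a production model couples many more cell types and mediators and is tuned against clinical data.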

    Dinerstein tested the asthma simulation by seeing how it responds to existing drugs. He found in the model exactly what companies had learned from clinical trials: Effective drugs decrease airway resistance, while ineffective drugs, including some that companies had pursued quite aggressively, don't.

    Using the asthma program, Dinerstein's group also carried out a virtual experiment in which they blocked the activity of a certain inflammatory factor to see if it might be a good target for an inhaled form of therapy. The next asthma attack was worse because another part of the body was compensating for the decreased inflammatory response. “We hadn't really thought about the rebound effects,” says Dinerstein.

    In addition, the software provides information management tools with quick connections to the literature references and the mathematics on which a given part of the model rests. Researchers can also incorporate their results into the program and chronicle the evolution of their thinking.

    Different parts of the model vary in reliability, depending on the information available. But as Dinerstein notes, even the gaps can help because they point out where researchers should direct their studies. Now that scientists are investing a large effort toward finding the sequence and function of all the human genes, such models are badly needed, says Lauffenburger. “The promise of this whole new field of molecular medicine requires that we get an idea of the consequences of molecular alterations. Until you can put together models like this, it's all pretty much guesswork.”

  15. THE FIRST CITIES

    Why Settle Down? The Mystery of Communities

    1. Michael Balter

    Archaeologists had long believed that farming prompted our nomadic ancestors into the first settlements. But how could the rudimentary agriculture of 9000 years ago have drawn 10,000 people to settle in Çatalhöyük?

    Çatalhöyük, Turkey—Archaeologist Shahina Farid can barely contain her excitement. While excavating an ancient rubbish deposit, her team of diggers found the skeleton of an adult male. Of course, many dozens of skeletons have been uncovered at this 9000-year-old site over the past few years. Yet this one is different. The others were all found buried under the floors of the mud-brick houses in which the people of this early farming settlement once lived. But this body seems to have been deliberately placed outside. Farid looks out at the wheat fields that surround this isolated mound in the middle of the Central Anatolian plain, wiping her brow against the heat. Would this be the exception that proves the pattern wrong?

    The skeleton is carefully removed and taken down to the lab at the base of the mound. There, anthropologists working at the site discover a possible explanation. The man was terribly deformed and probably very ill when he died. An outcast, perhaps? Yet even if this riddle is solved and the burial pattern holds, so many other questions remain unanswered at Çatalhöyük: Why did they bury their dead under the floors? What is the meaning of the vivid painted murals on their walls? Why did thousands of people give up the itinerant life of hunting and gathering and cram themselves into houses so tightly packed that they entered through holes in the roofs? Indeed, why did people bother to come together at all, eventually building the towns and cities that so many of the world's people live in today?

    Earlier this century, archaeologists thought they had the answer: The rise of agriculture required early farmers to stay near their crops and animals. But these new excavations are challenging the long-held assumption that the first settlements and the transition from hunting and gathering to farming and animal domestication were part of a single process—one that the late Australian archaeologist V. Gordon Childe dubbed the “Neolithic Revolution” (see p. 1446). Çatalhöyük and other sites across the Near East are making it clear that these explanations are too simple and that other factors—including, possibly, a shared cultural revolution that preceded the rise of farming—might also have played a key role.

    British archaeologist James Mellaart discovered Çatalhöyük, near the modern city of Konya, in 1958. In the 1960s his excavations of this Neolithic, or New Stone Age, settlement electrified the archaeological community. The age of the site, 4500 years older than the Egyptian pyramids, was staggering. At the time, only the traces of a few small villages could claim seniority as the world's oldest permanent settlements. Yet this was no tiny hamlet: Çatalhöyük covered more than 12 hectares and may have harbored as many as 10,000 people. Over the 1000 years the site was occupied, its inhabitants rebuilt their houses one on top of the other until they had created a mound 20 meters high. Some, including Mellaart, hailed it as the world's oldest known city.

    The details of the find captured imaginations. Mellaart uncovered vivid painted murals on the plastered walls of the houses, sometimes in bas-relief: vultures attacking headless men, an erupting volcano, a band of hunters pulling the tongues and tails of wild deer. An animal bone expert declared that Çatalhöyük was a hub of cattle domestication, the earliest known in the Near East. And clay figurines of obese women found at the site prompted Mellaart to claim that Çatalhöyük had been a major religious center, where people worshiped a Mother Goddess—an assertion that today inspires regular pilgrimages of latter-day goddess worshipers from as far away as California.

    Since Mellaart ended his work at Çatalhöyük more than 30 years ago, many more Neolithic settlements have been excavated in the Near East. Yet only a few of these sites—such as Ain Ghazal in Jordan, which covered the same area but probably had a smaller population (Science, 1 April 1988, p. 35)—can match Çatalhöyük's size and importance. And over the years, the mysteries of Çatalhöyük—most of all, the question of what brought so many people together on this isolated plain—have continued to nag at the minds of archaeologists.

    Now, in the 1990s, an army of excavators has again descended upon Çatalhöyük, seeking answers to these questions. The 90-member team, directed by Ian Hodder of Britain's Cambridge University and including a large platoon from the University of California, Berkeley, led by Ruth Tringham, represents one of the greatest concentrations of scientific firepower ever focused on an archaeological site. Seasoned excavators, who are slowly sifting through at least 12 successive layers of occupation, have been joined by experts from every field of archaeological science, including specialists in human and animal remains, fossil plants, pottery, and stone tools. Moreover, the dig at Çatalhöyük has become a showcase for the relatively new field of micromorphology, which puts archaeological remains under the microscope to provide the maximum amount of information about how people lived—and how they died.

    “Mellaart did a fantastic job at getting the big picture of Çatalhöyük,” Hodder says. “But the techniques available back then were relatively limited. Times have moved on and the questions have changed.” And although some of Hodder's interpretations of what his team is finding at Çatalhöyük may be controversial (see sidebar on p. 1444), archaeologists agree that the site could help solve some of the mysteries surrounding the origins of settled life.

    An overgrown village?

    Permanent settlements developed independently in several parts of the world, including the Near East, China, and the Americas. The oldest village known, just outside present-day Jericho in Palestine, may have sprung up around a shrine used by roving bands of hunter-gatherers. By 10,500 years ago it had evolved into a small farming village. Yet many more millennia passed before the first undisputed cities—such as Uruk, in Mesopotamia—were established, about 5500 years ago. And although the expansion of these first settlements roughly coincided with the rise of farming, whether agriculture directly fueled their growth—as Childe proposed—is now hotly debated by archaeologists. Indeed, one of the great attractions of Çatalhöyük is that its multilayered remains—which are remarkably well preserved for a site so old—might help answer this critical question.

    “Çatalhöyük is the dig of the new millennium,” says Colin Renfrew, also of Cambridge University. Mark Patton, at the University of Greenwich in London, says that “people are watching very closely” as the excavations unfold—a vigilance made easier by the dig's detailed Web site (catal.arch.cam.ac.uk/catal/catal.html). Çatalhöyük watchers will need to be patient, however. In contrast to Mellaart, who excavated more than 200 buildings over four seasons, the new team is excavating only one or two houses each year. “We are going very slowly,” says team member Naomi Hamilton of Edinburgh University in the U.K. “We have learned a huge amount about a few buildings, instead of a moderate amount about 200.”

    Because of its unusual size, Mellaart often referred to Çatalhöyük as a “Neolithic city,” and the notion that the settlement was an early metropolis is often repeated in media accounts of the ongoing excavations. But the new dig has already reinforced a suspicion long held by many archaeologists: Çatalhöyük is not a city, nor even a town, even though many modern towns cannot boast its substantial population. “Çatalhöyük may be the largest Neolithic settlement in the Near East, but it's still just an overgrown village,” says Guillermo Algaze of the University of California, San Diego. Which only makes the site all the more perplexing: Why did the people cram their houses together rather than spread them out across the landscape?

    For archaeologists, the difference between a village and a city is not just a matter of size but hinges on the social and economic relationships within a population. Thus the earliest cities in Mesopotamia—such as Uruk—were made possible by agricultural surpluses that allowed some people to quit farming and become full-time artisans, priests, or members of other professions. Meanwhile, the farmers who provided food for these urban centers continued to live in outlying villages. “A key defining feature of a town or city is that farmers don't live in them,” says Patton.

    But the new excavations at Çatalhöyük have uncovered little evidence for division of labor. Although the layout of the houses follows a very similar plan, Hodder's team has found signs that the inhabitants did their own construction work rather than relying upon Neolithic building contractors. Microscopic studies of plaster and mud bricks from different houses done by Wendy Matthews, a micromorphologist at the British Institute of Archaeology in Ankara, show great variation in the mix of soils and plants used to form them—the opposite of what would be expected if they had been fashioned by specialist builders using standard techniques.

    And although Mellaart believed that the production of the beautiful obsidian objects found at Çatalhöyük—such as finely worked blades and the earliest known mirrors—was carried out in specialist workshops, the new team has found what Hodder calls “masses of evidence” from microscopic residues of obsidian flakes on floors and around hearths that a lot of obsidian work was carried out in the individual dwellings. Nor has the new dig revealed another important feature of cities: public architecture, such as temples and other public buildings, which Uruk and other early urban centers had in abundance.

    But Mellaart, who retired some years ago from the Institute of Archaeology in London, does not necessarily agree. He told Science that because he only dug about 4% of the settlement—and Hodder's team has so far excavated considerably less than that—it is too early to tell whether large communal buildings might be hidden in another part of the mound. Other observers, including Algaze, raise similar cautions. But Hodder says a detailed study of the entire mound suggests that there are no temples or other monuments waiting to be discovered. Using standard archaeological survey techniques—including meticulous scraping of the topsoil and searching for local variations in Earth's magnetic field that might be caused by buried structures—the team failed to find any structures other than the myriad small, mud-brick dwellings.

    Based on this and other evidence about what was going on in the houses—including the pattern of burials under the plastered floors—Hodder has tentatively concluded that the basic social units at Çatalhöyük were extended families grouped together in clusters of four or five houses, which carried on their daily activities more or less autonomously. “It is hard to imagine that 10,000 people, minimally 2000 families, were going out and doing their own thing, but that is what we see.”

    The Neolithic Revolution

    This new view of Çatalhöyük as a decentralized community with minimal division of labor is reinforced by evidence that agriculture was still at a relatively early stage here. A survey of the area surrounding Çatalhöyük by a team of physical geographers, led by Neil Roberts of Britain's Loughborough University, suggests that the site was founded on the bank of a now-dry river that flowed here during Neolithic times and that frequent flooding of its banks created a lush wetlands environment. The plant remains found in and around the houses suggest that the people ate both wild and cultivated plants and seeds, including tubers, wild grasses, lentils, hackberries, acorns, and pistachios. Even the cereals likely to have been under cultivation, such as wheat and barley, may not have required irrigation in these wet conditions, and there is no evidence that grain was ground for bread.

    A reanalysis of animal remains adds to the impression that Çatalhöyük's agriculture was not terribly advanced. Çatalhöyük had long been heralded as an early center of cattle domestication, based on a study of animal bones from the site by the late American faunal expert Dexter Perkins Jr. (Science, 11 April 1969, p. 177). In general, domestic cattle are much smaller than the now-extinct wild oxen, or aurochs, from which they are descended. By comparing cattle bones from Çatalhöyük with both earlier and later archaeological sites in Anatolia, Perkins concluded that cattle were probably domesticated early in the life of Çatalhöyük, and also that cattle represented the most numerous domesticated species.

    But so far, at least, the animal bones emerging from the new excavations do not confirm this pattern. A study of the faunal remains by Nerissa Russell of Cornell University in Ithaca, New York, and Louise Martin at the Institute of Archaeology in London is showing that cattle made up only about 25% of the species present. Most of the animal bones represent sheep, which were domesticated much earlier than cattle across most of the Near East. Although Russell says it is too early to conclude whether the cattle were domesticated, “Çatalhöyük no longer appears to be a cattle-centered economy, which was a supporting argument for cattle domestication.”

    These findings, along with similar evidence from some other Near East sites, are challenging the original concept of the Neolithic Revolution. Many archaeologists are parting company with the view that settled life and agriculture were closely linked. “We have always thought that sedentism and agriculture were two sides of the same coin,” says Algaze. “But as we start getting into the nitty-gritty details across the world, it becomes increasingly clear that while they are very much related, they are not necessarily coterminous.”

    Even stronger evidence for this conclusion comes from excavations at another site, called Asikli, in Central Anatolia. Since 1989, a team from the University of Istanbul, led by Ufuk Esin, has been excavating Asikli, a village that appears to be about 1000 years older than Çatalhöyük and was home to several hundred people at its height. Although it is smaller, Asikli has a more complex arrangement of buildings than Çatalhöyük. A large collection of mud-brick houses is partly surrounded by a stone wall, and Esin has found a large cluster of public buildings that may have been a temple complex, as well as a pebbled street running through the settlement. Most amazingly, Esin's team has now excavated 10 successive occupation layers and found that the arrangements of the houses and the street are exactly repeated at each level. Yet, Esin told Science, most of the plant and all of the animal remains were wild. In essence, Asikli was a large, highly stable settlement that subsisted mostly on hunting and gathering.

    “This is the new thing that Çatalhöyük is starting to give us, and that Asikli makes absolutely crystal clear,” says Algaze. “You can have a major site, with a large population, on the basis of very little domestic agriculture. This goes against every paradigm we have ever had.” It also runs counter to common sense, says Hodder. He argues that the rich wetland resources around Çatalhöyük would have been more easily exploited by a dispersed population in small settlements rather than by packing thousands of people into a village so crowded that they entered their houses through the roofs. “What you end up with,” says Hodder, “is trying to understand why these people bothered to come together.”

    Coming together

    To get at this crucial question, Hodder says, “we first have to understand Çatalhöyük on its own terms. Let's not try to categorize it, as a city or a village, but first try to find out how it works.” As a leader of the “postprocessual” movement in archaeology, Hodder believes that deciphering the symbolic and religious life of the settlement is key to understanding what held its social fabric together.

    It may also be a clue to understanding the transition to farming in general, says Jacques Cauvin of the Institute of Eastern Prehistory in Jalès, France, who argues that the Neolithic Revolution in agriculture was preceded by a “cultural revolution” in religious practices and the use of symbolism. “The origin of these [farming] changes was more cultural than economic,” Cauvin told Science. Hunter-gatherer societies underwent a “mental transformation” that allowed them to see their environment differently and exploit it “more selectively and more actively,” he says—a transformation that may be recorded at Çatalhöyük.

    That symbolism was a defining element of Çatalhöyük is clear from the large number of spectacular artworks unearthed at the site, including a few figurines—of which the most famous is a seated woman with her hands on the heads of two leopards—which Mellaart believed represented a Mother Goddess. Hodder and other archaeologists at Çatalhöyük say the evidence to support goddess worship is scant. Instead, the team has focused on two striking features of life and death at the site, which might give insights into how its people viewed the world and their place in it: the habit of burying the dead under the floors, and the murals painted on the plastered walls, which often featured wild animals and hunting scenes.

    Mellaart's excavations had established that at some point during the life of a house, its roof was taken down, part of the walls dismantled, and the rooms filled in, leaving the burials, wall murals, ovens, storage bins, and other features intact. Last year, while excavating a large building, the team discovered more than 70 bodies buried under its floors. A study of the ages of the skeletons and the order in which they were buried, carried out by anthropologists Theya Molleson and Peter Andrews of the Natural History Museum in London, suggested that the life cycle of the house coincided with the life of an extended family. Thus the earliest burials appear to be of infants and children, while the later burials are mainly people who survived into adulthood and even old age.

    In addition, all of the murals were found on the walls around a raised platform in one corner of the room that covered a large concentration of burials. Paintings were especially common on earlier layers of plaster that coincided in time with the burials of children. Hodder and Berkeley's Tringham believe that this close association between paintings and burials is no coincidence. Arguing from so-called ethnographic evidence, which uses knowledge of present-day cultures to shed light on past societies, they suggest that the art might have represented a ritualistic attempt to assuage the spirits that had taken the lives of the community's young people, or perhaps an effort to protect the living from the spirits of the dead. Similar practices exist today among the San hunters of southern Africa, nomadic tribes in northern Asia, and the Nuba of Sudan. There are also striking parallels with the burial practices of the Tikopia people of Polynesia, who buried their dead under the floors as well.

    The habit of keeping the remains of the dead close to the living is mirrored at other digs across the Near East. At Jericho, for example, human skulls molded with plaster to represent real people were found during excavations there in the 1950s, and a recent dig at the site of Çayönü in southern Turkey, led by Mehmet Özdogan of the University of Istanbul, uncovered piles of human skulls in the cellars of a building. In addition, extraordinary painted statues, which may represent mythical ancestors, were found buried under a house at 'Ain Ghazal.

    Hodder also sees parallels between the murals of Çatalhöyük and the scenes of hunting and wild animals that dominate the earlier cave art produced by hunter-gatherers. He suggests that the transition to settled life required “the domestication of the wild by bringing it into the house, at least the symbolism of the wild, and controlling it.” This shared cultural transformation, combined with the creation of large family groups tied together by their links to their ancestors, might have been the “glue” that held the early society at Çatalhöyük together.

    Of course, archaeology, which attempts to understand past societies from the shards of bone and artifacts they left behind, cannot—and does not—claim to be an exact science. And Hodder admits that these ideas are only hypotheses, which may or may not be supported by further excavation. But if all goes as planned, archaeologists might not have to wait until the next millennium to learn more about what made the people of Çatalhöyük come together. Although Mellaart dug through a dozen successive occupation levels, core samples from the mound indicate that he stopped about 5 meters before reaching unoccupied virgin soil. Next year, if special funding for the project comes through, the team plans to extend its normal 2-month summer season to 8 or 9 months. This should be long enough to dig a deep trench through one section of the mound, right to its very bottom. There, by the bank of an ancient river, the founding mothers and fathers of Çatalhöyük may well lie buried.

  16. THE FIRST CITIES

    Digging Into the Life of the Mind

    1. Michael Balter

    Cambridge, U.K.—As a student at London's Institute of Archaeology in the 1960s, Ian Hodder heard James Mellaart lecture about his excavations at Çatalhöyük, a huge Neolithic settlement in present-day Turkey. The aspiring archaeologist was entranced. “Mellaart was a fantastic speaker, and he left an indelible impression of the site on my mind,” Hodder says. Now, 3 decades later, Hodder himself is in charge of major new excavations at Çatalhöyük, which are expected to take the next 25 years (see main text). The new dig is being closely watched by the archaeological community—as much for the way it is being dug as for what it is finding.

    Hodder—now at Cambridge University—has spent much of his career leading a theoretical revolt against established archaeological thought. This movement of mostly British and some American archaeologists—which has been greatly influenced by postmodernist trends in the humanities—is usually referred to as “postprocessualism.” It puts much more emphasis on studying the symbolic and cognitive life of ancient peoples than did earlier approaches and argues for the need to accept and even welcome differing interpretations of an archaeological site.

    The new school is in part a rebellion against what used to be called the New Archaeology, a movement sparked in the 1960s by Lewis Binford in the United States and the late David Clarke in the United Kingdom. The New Archaeology—which is now usually called processualism, because of its concern with processes of social change—was in turn a reaction against what was seen as the static, unscientific, and speculative approaches of the previous generation of archaeologists. But Hodder and others began to feel that the processualists were focusing too narrowly on questions that could most easily be answered by scientific method, such as adaptation to the environment, economy, and trade, to the neglect of religious and social beliefs. “Humans adapt to their environment partly through systems of beliefs or preconceptions of the world,” Hodder says. “Culture and mind contribute something; we don't just respond to the environment the way animals do.”

    The debate over these issues often turned acrimonious, with processualists accusing postprocessualists of embracing “relativism” and being antiscience, and the latter countering with charges of “scientism” and “positivism.” More recently, however, the discussion has taken a calmer tone, although there are still occasional flare-ups in the pages of archaeological journals. Colin Renfrew of Cambridge University comments that “processual archaeology had its own rhetoric, and I think the ‘postprocessualists’ have quite successfully deflated a little of that. But that hasn't prevented them from introducing whole balloonfuls of rhetorical wind of their own.”

    Hodder is putting a strong emphasis on scientific methods at Çatalhöyük, bringing in dozens of experts who are literally putting the site under the microscope—an approach that some archaeologists take as an ironic indication that he has at last seen the processual light. “Everybody is very impressed with Ian Hodder's descent from the lofty heights of theory to the nitty-gritty of actually getting something done,” says Guillermo Algaze of the University of California, San Diego. But Hodder insists that he is using science in a much different way: Rather than focusing only on issues that can be resolved by hypothesis testing, such as the details of economy and trade, he is trying to understand ancient belief systems by using the scientific evidence as pieces of a “jigsaw puzzle” that can never be solved with certainty.

    Thus Çatalhöyük differs from most digs, where excavators excavate and archaeological specialists make short visits to the site or stick to their labs and work on specimens: Hodder has brought in a large team of full-time experts who sometimes work side by side with excavators, interpreting what they see as they go along. Indeed, everyone is encouraged to try to make sense of what they uncover rather than simply collecting data. “People here are pushed to make their own interpretations, to look for patterns,” says team member Nerissa Russell, an archaeologist at Cornell University in Ithaca, New York.

    Hodder fully realizes that excavating the large and well-preserved site at Çatalhöyük is the best chance he will ever have to prove that the postprocessual approach can work. “That's why I am prepared to spend the next 25 years of my life working here,” he says. “This is really a test of whether we can do it.”

  17. NEOLITHIC AGRICULTURE

    The Slow Birth of Agriculture

    1. Heather Pringle*
    1. Heather Pringle is a science writer in Vancouver, British Columbia.

    New methods show that around the world, people began cultivating some crops long before they embraced full-scale farming, and that crop cultivation and village life often did not go hand in hand

    According to early Greek storytellers, humans owe the ability to cultivate crops to the sudden generosity of a goddess. Legend has it that in a burst of goodwill, Demeter, goddess of crops, bestowed wheat seeds on a trusted priest, who then crisscrossed Earth in a dragon-drawn chariot, sowing the dual blessings of agriculture and civilization.

    For decades, archaeologists too regarded the birth of agriculture as a dramatic transformation, dubbed the Neolithic Revolution, that brought cities and civilization in its wake. In this scenario, farming was born after the end of the last Ice Age, around 10,000 years ago, when hunter-gatherers settled in small communities in the Fertile Crescent, a narrow band of land arcing across the Near East. They swiftly learned to produce their own food, sowing cereal grains and breeding better plants. Societies then raised more children to adulthood, enjoyed food surpluses, clustered in villages, and set off down the road to civilization. This novel way of life then diffused across the Old World.

    But like many a good story, this tale has crumbled over time under an onslaught of new data. By employing sensitive new techniques—from sifting through pollen cores to measuring minute shape changes in ancient cereal grains—researchers are building a new picture of agricultural origins. They are pushing back the dates of both plant domestication and animal husbandry (see sidebar, p. 1448) around the world, and many now view the switch to an agrarian lifestyle as a long, complex evolution rather than a dramatic revolution.

    The latest evidence suggests, for example, that hunter-gatherers in the Near East first cultivated rye fields as early as 13,000 years ago.* But for centuries thereafter, they continued to hunt wild game and gather an ever-decreasing range of wild plants, only becoming full-blown farmers living in populous villages by some 8500 B.C. And in some cases, villages appear long before intensive agriculture (see p. 1442). “The transition from hunters and gatherers to agriculturalists is not a brief sort of thing,” says Bruce Smith, an expert on agricultural origins at the Smithsonian Institution's National Museum of Natural History in Washington, D.C. “It's a long developmental process”—and one that did not necessarily go hand in hand with the emergence of settlements.

    Similar stories are emerging in South America, Mesoamerica, North America, and China. Although cultivation may have been born first in the Near East, the latest evidence suggests that people on other continents began to domesticate the plants they lived with—squash on the tropical coast of Ecuador and rice along the marshy banks of the Yangtze in China, for example—as early as 10,000 to 11,000 years ago, thousands of years earlier than was thought and well before the first signs of farming villages in these regions. To many researchers, the timing suggests that worldwide environmental change—climate fluctuations at the end of the Ice Age—may well have prompted cultivation, although they are still pondering exactly how this climate change spurred people around the world to begin planting seeds and reaping their bounty.

    Cultivating the green hell

    Perhaps the most dramatic and controversial new discoveries in ancient agriculture have emerged from the sultry lowland rainforests of Central and South America. These forests, with their humid climate, poor soils, and profusion of pests, were long considered an unlikely place for ancient peoples to embark upon the sweaty toil of farming, says Dolores Piperno, an archaeobotanist at the Smithsonian Tropical Research Institute in Balboa, Panama. “If people were going to have a hard time living in [these forests], how were they ever going to develop agriculture there?” she asks. And most research suggested that these forest dwellers were relative latecomers to agriculture, first cultivating crops between 4000 and 5000 years ago.

    But tropical forests harbor the wild ancestors of such major food crops as manioc and yams. Back in the 1950s, American cultural geographer Carl Sauer speculated that these regions were early centers of plant domestication, but there was little evidence to support the idea, as the soft fruit and starchy root crops of these regions rapidly rot away in the acid soils. The better preserved evidence found in arid regions, such as seeds from grain crops in the Near East, captured the attention of most archaeologists.

    In the early 1980s, however, Piperno and colleague Deborah Pearsall, an archaeobotanist from the University of Missouri, Columbia, began searching the sediments of rainforest sites in Panama and Ecuador for more enduring plant remnants. They focused on phytoliths, microscopic silica bodies that form when plants take up silica from groundwater. As the silica gradually fills plant cells, it assumes their distinctive size and shape. Piperno and Pearsall came up with ways to distinguish the phytoliths of wild species from those of domesticated ones—domestic plants, for example, have larger fruits and seeds, and hence larger cells and phytoliths. Then they set about identifying specimens from early archaeological sites.
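
    The size criterion lends itself to a toy illustration. The short Python sketch below is purely hypothetical—the measurements, the 65-micrometer cutoff, and the one-specimen-at-a-time thresholding are invented for illustration, standing in for the far more careful statistical comparisons the researchers actually make—but it captures the underlying logic.

```python
# Minimal illustrative sketch (not the researchers' actual procedure):
# labeling squash phytoliths "wild" or "domesticated" by a simple size
# cutoff. The lengths and the 65-micrometer threshold are hypothetical,
# chosen only to show the logic that bigger fruits mean bigger cells,
# and hence bigger phytoliths.

phytolith_lengths_um = [48.2, 52.7, 61.0, 70.4, 75.9, 81.3]  # hypothetical lengths (micrometers)
THRESHOLD_UM = 65.0  # hypothetical cutoff between wild and domesticated forms

def classify(length_um, threshold=THRESHOLD_UM):
    """Label a single phytolith by comparing its length to the cutoff."""
    return "domesticated" if length_um >= threshold else "wild"

for length in phytolith_lengths_um:
    print(f"{length:5.1f} um -> {classify(length)}")
```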

    This spring, after nearly 20 years of research, the team published its findings in a book entitled The Origins of Agriculture in the Lowland Neotropics. In one study, they measured squash phytoliths from a sequence of layers at Vegas Site 80, a coastal site bordering the tropical forest of southwestern Ecuador. From associated shell fragments as well as the carbon trapped inside the phytoliths themselves, they were able to carbon-date the microfossils. A sharp increase in phytolith size indicated that early Ecuadorians had domesticated squash, likely Cucurbita moschata, by 10,000 years ago—some 5000 years earlier than some archaeologists thought farming began there. Such timing suggests, she notes, that people in the region began growing their own plants after much local game went extinct at the end of the last Ice Age and tropical forest reclaimed the region. “I think that's the key to the initiation of agriculture here,” says Piperno. If this find holds up, the Ecuador squash rivals the oldest accepted evidence of plant domestication in the Americas—the seeds of another squash, C. pepo, excavated from an arid Mexican cave and directly dated to 9975 years ago (Science, 9 May 1997, pp. 894 and 932).

    The phytolith technique is also pushing back the first dates for maize cultivation in the Americas, says Piperno. Phytoliths taken from sediment samples from Aguadulce rock-shelter in central Panama by Piperno and her colleagues and carbon-dated both directly and by analyzing shells from the same strata imply that maize cultivation began there as early as 7700 years ago. That's not only more than 2500 years earlier than expected in a rainforest site, it's also 1500 years earlier than the first dates for maize cultivation anywhere in the more arid parts of the Americas. Almost certainly, the oldest partially domesticated maize at the site came from somewhere else, because the wild ancestor of corn is known only from a narrow band of land in Mexico. But the squash data raise important questions, says Piperno, about where agriculture first emerged in the Americas. “Clearly tropical forest is in the ball game.”

    But the community is split over whether to accept the phytolith evidence. Some critics question the dating of the phytoliths themselves, saying that carbon from other sources could have become embedded in the cracks and crevices on the fossil surfaces, skewing the results. Others such as Gayle Fritz, an archaeobotanist at Washington University in St. Louis, point out that the shells and other objects used to support the dates may not be the same age as the phytoliths. “I would be as thrilled as anyone else to push the dates back,” says Fritz, “but my advice now is that people should be looking at these as unbelievable.”

    However, proponents such as Mary Pohl, an archaeologist at Florida State University in Tallahassee, note that the Piperno team typically supports its claims with multiple lines of evidence, so that even if one set of dates is suspect, the body of work makes it clear that some domestication took place startlingly early in the rainforest. “The data seem irrefutable to my mind,” she says.

    If so, they overturn some basic assumptions about the relationship between village life and agriculture in the tropical forest. For years, says Piperno, researchers believed that the first farmers there lived in villages, like the well-studied Neolithic grain farmers of the Near East. “Because settled village life is just not seen in [this part of the] Americas until 5000 years ago, [researchers thought] that means food production was late too,” says Piperno. “But it doesn't work.” In her view, farming in the region came long before village life. For thousands of years, she says, “you had slash-and-burn agriculture instead of settled village agriculture.”

    Taming wild rice

    At the same time as early Americans may have been planting their first squash, hunter-gatherers some 16,000 kilometers away, along the banks of the Yangtze River, were beginning to cultivate wild rice, according to new studies by archaeobotanist Zhijun Zhao of the Smithsonian Tropical Research Institute and colleagues. Rice, the most important food crop in the world, was long thought to have been cultivated first around 6500 years ago in southern Asia, where the climate is warm enough to support luxuriant stands of wild rice. But in the 1980s, ancient bits of charred rice turned up in a site along the banks of the middle Yangtze River, at the far northern edge of the range of wild rice today. Directly carbon-dated to 8000 years ago, these grains are the oldest known cultivated rice and suggest that the center of rice cultivation was actually farther north.

    Now the dates have been pushed back even farther, revealing a long, gradual transition to agriculture, according to work in press in Antiquity by Zhao. He has analyzed a sequence of abundant rice phytoliths from a cave called Diaotonghuan in northern Jiangxi Province along the middle Yangtze, which was excavated by Richard MacNeish, research director at the Andover Foundation for Archaeological Research in Massachusetts, and Yan Wenming, a Peking University archaeologist in Beijing.

    Neolithic evolution.

    Around the world, societies tamed the plants and animals at hand, but didn't embrace full-scale farming until thousands of years later.

    S. BAUER/ARS; G. HEILMAN; S. DALTON/OSF/EARTH SCENES; B. WRIGHT/ANIMALS ANIMALS; B. FRITZ/ARS

    Radiocarbon dates for the site appeared to have been skewed by groundwater contamination, so Zhao constructed a relative chronology based on ceramic and stone artifacts of known styles and dates found with the phytoliths. In recent weeks, Zhao has further refined his Antiquity chronology as a result of a joint study with Piperno on paleoecological data from lake sediments in the region.

    To trace the work of ancient cultivators at the site, he distinguished the phytoliths of wild and domesticated rice by measuring minute differences in the size of a particular type of cell in the seed covering. With this method, which Zhao pioneered with Pearsall, Piperno, and others at the University of Missouri, “we can get a 90% accuracy,” he says.

    By counting the proportions of wild and domesticated rice fossils, Zhao charted a gradual shift to agriculture. In a layer dated to at least 13,000 years ago, the phytoliths show that hunter-gatherers in the cave were dining on wild rice. But by 12,000 years ago, those meals abruptly ceased—Zhao suspects because the climate became colder and the wild grain, too tender for such conditions, vanished from this region. Studies of the Greenland ice cores have revealed a global cold spell called the Younger Dryas from about 13,000 to 11,500 years ago. Zhao's own studies of phytoliths and pollen in lake sediments from the region reveal that warmth-loving vegetation began retreating from this region around 12,000 years ago.

    As the big chill waned, however, rice returned to the region. And people began dabbling in something new around 11,000 years ago—sowing, harvesting, and selectively breeding rice. In a zone at Diaotonghuan littered with sherds from a type of crude pottery found in three other published sites in the region and radiocarbon-dated to between 9000 and 13,000 years ago, Zhao found the first domesticated rice phytoliths—the oldest evidence of rice cultivation in the world. But these early Chinese cultivators were still hunting and gathering, says Zhao. “The cave at that time is full of animal bones—mainly deer and wild pig—and wild plants,” he notes. Indeed, it was another 4000 years before domestic rice dominated wild rice to become the dietary staple, about 7000 years ago.
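
    The arithmetic behind such a tally can be caricatured in a short sketch. The layer labels, ages, and counts below are invented, not Zhao's data; the point is only to show how the domesticated-type share of phytoliths, tracked layer by layer, traces a gradual shift.

```python
# Minimal sketch with invented numbers: summarizing the share of
# domesticated-type rice phytoliths layer by layer, oldest first, to
# visualize a gradual shift toward cultivation. Layer labels, ages, and
# counts are hypothetical and do not reproduce Zhao's data.

layers = [
    # (layer, approx. age in years before present, wild count, domesticated count)
    ("G", 13000, 120,  0),
    ("F", 12000,   5,  0),   # cold spell: wild rice nearly absent
    ("E", 11000,  80, 10),   # first domesticated-type phytoliths
    ("D",  9000,  70, 40),
    ("C",  7000,  30, 90),   # domesticated rice now the staple
]

for label, age_bp, wild, dom in layers:
    total = wild + dom
    share = dom / total if total else 0.0
    print(f"layer {label} (~{age_bp} yr BP): {share:.0%} domesticated-type")
```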

    It makes sense that the transition to farming was slow and gradual and not the rapid switch that had been pictured, says MacNeish. “Once you learn to plant the stuff, you must learn to get a surplus and to get the best hybrid to rebreed this thing you're planting,” he notes. “And when this begins to happen, then very gradually your population begins going up. You plant a little bit more and a little bit more.” At some point, he concludes, the hunter-gatherers at sites like Diaotonghuan were unable to gather enough wild food to support their burgeoning numbers and so had little choice but to embrace farming in earnest.

    The cradle of civilization

    In the Near East, archaeologists have been studying early agriculture for decades, and it was here that the idea of the Neolithic Revolution was born. Yet even here, it seems there was a long and winding transition to agriculture. And although settled village life appeared early in this region, its precise connection to farming is still obscure.

    The latest findings come from Abu Hureyra, a settlement east of Aleppo, Syria, where the inhabitants were at least semisedentary, occupying the site from at least early spring to late autumn, judging from the harvest times of more than 150 plant species identified there to date. Among the plant remains are seeds of cultivated rye, distinguished from wild grains by their plumpness and much larger size. University College London archaeobotanists Gordon Hillman and Susan Colledge have now dated one of those seeds to some 13,000 years ago, according to unpublished work they presented at a major international workshop in September. If the date is confirmed, this rye will be the oldest domesticated cereal grain in the world.

    These dates are nearly a millennium earlier than previous evidence for plant domestication. And the rye is not even the first sign of cultivation at the Abu Hureyra site: Just before the appearance of this domestic grain, the team found a dramatic rise in seed remains from plants that typically grow among crops as weeds. All this occurs some 2500 years before the most widely accepted dates for full-scale agriculture and populous villages in the Near East. Although the semisedentism of the inhabitants fits with earlier ideas, the long time span contradicts ideas of a rapid agricultural “revolution.”

    The early date for plant domestication in the Near East is not entirely unexpected, says Ofer Bar-Yosef of Harvard University. For example, inhabitants of Ohalo II in what is now Israel had made wild cereal seeds a major part of their diets as early as 17,000 B.C., according to published work by Mordechai Kislev, an archaeobotanist at Bar Ilan University in Ramat-Gan, Israel. Moreover, as close observers of nature, these early foragers were almost certain to have noticed that a seed sown in the ground eventually yielded a plant with yet more seeds. “These people knew their fauna and flora very well,” says Bar-Yosef, “and they probably played with planting plants long before they really switched into agriculture.”

    Just what spurred hunter-gatherers to begin regularly sowing seeds and cultivating fields, however, remains unclear. For several years, many Near Eastern experts have favored the theory that climate change associated with the Younger Dryas was the likely trigger. Bar-Yosef, for example, suggests that inhabitants of the Fertile Crescent first planted cereal fields in order to boost supplies of grain when the Younger Dryas cut drastically into wild harvests.

    And at Abu Hureyra, Hillman thinks that the drought accompanying the Younger Dryas was a key factor. Before the jump in weeds and the appearance of domestic rye, the inhabitants relied on wild foods as starch staples. Over time, they turned to more and more drought-resistant plants—and even these dwindled in abundance. So “progressive desiccation could indeed have been the impetus for starch cultivation,” says Hillman.

    But new dates for the cold spell in the Near East paint a more complex view. At the Netherlands workshop, Uri Baruch, a palynologist at the Israel Antiquities Authority in Jerusalem, and Sytze Bottema, a palynologist at the Groningen Institute of Archaeology in the Netherlands, announced that they had redated a crucial pollen core at Lake Hula in northern Israel. Their original published estimate put a retreat in the region's deciduous oak forest—due to cool, dry conditions believed to be the local manifestation of the Younger Dryas—starting about 13,500 years ago. But after correcting for contamination by old carbon dissolved in the lake water, they found that the cold spell in the Near East was a bit later, starting around 13,000 years ago and ending around 11,500 years ago.
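
    The kind of adjustment involved can be sketched under a deliberately simplified assumption—a single, constant hard-water offset subtracted from each measured radiocarbon age. The offset and ages below are invented; the actual redating of the Hula core was certainly more elaborate.

```python
# Minimal sketch, assuming a single constant hard-water offset: old carbon
# dissolved in lake water makes radiocarbon ages from a core read too old,
# so an estimated reservoir age is subtracted from each measured age before
# interpretation. The offset and the measured ages are invented.

HARD_WATER_OFFSET = 500  # hypothetical reservoir age, in radiocarbon years

measured_ages = [11900, 11500, 11050]  # hypothetical uncorrected ages from the core

for raw in measured_ages:
    corrected = raw - HARD_WATER_OFFSET
    print(f"measured {raw} 14C yr BP -> reservoir-corrected {corrected} 14C yr BP")
```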

    These dates suggest that farmers of Abu Hureyra may have begun cultivating rye before the Younger Dryas set in, at the very end of the warm, moist interval that preceded it. “The domesticated rye dates and the pollen core don't match up so well at this time,” says Mark Blumler, a geographer at the State University of New York, Binghamton.

    Moreover, others point out that the clearest evidence for the domestication of grains such as wheat and barley in the Near East comes around 10,500 years ago, after the Younger Dryas had waned and the climate had improved again. By then, says George Willcox, an archaeobotanist at the Institut de Prehistoire Orientale in St-Paul-le-Jeune, France, other factors could have contributed to the transition. Hunter-gatherers in the region, for example, had settled year-round in small villages between 12,300 and 10,500 years ago. There, he says, rising human populations and overexploitation of wild foods could have driven people to take up farming. “Because people at this time appear to be living in one place,” says Willcox, “they could use up all the resources in a particular area.”

    Putting the evidence from around the world together, a new picture of the origins of agriculture begins to emerge. In the Near East, some villages were born before agriculture and may even have forced its adoption in some cases. But elsewhere—China, North America, and Mesoamerica—plants were cultivated and domesticated by nomadic hunter-gatherers, perhaps to increase their yield during the dramatic climate shifts that accompanied the final phase of the last Ice Age. Either way, it no longer makes sense to suppose a strong causal link between farming and settled village life, Piperno says.

    Indeed, in many regions, settled agriculturalists emerged only centuries or millennia after cultivation, if at all. Many ancient peoples simply straddled the middle ground between foraging and farming, creating economies that blended both (see sidebar, p. 1447). “For so long, we've put everybody in black boxes” as farmers or hunter-gatherers, notes Joanna Casey, an archaeologist at the University of South Carolina, Columbia, and a specialist in agricultural origins in western Africa. But mixed cultivation and foraging is not necessarily a step “on the way” to full-scale farming—it was a long-term lifestyle for many groups. “These societies in the middle ground are certainly not failures,” says the Smithsonian's Smith. “They are not societies that stumbled or stuttered or got frozen developmentally. They're societies that found an excellent long-term solution to their environmental challenges.”

    Eventually, for reasons still unclear, many of the early domesticators did become true agriculturalists—by 10,500 years ago in the Near East, 7000 years ago in China, and later in the Americas and Africa. And during this transition, human populations did indeed soar, and hamlets became villages. Archaeological sites in the intensively studied Fertile Crescent, for example, increased more than 10-fold in size, from 0.2 hectares to 2.0 to 3.0 hectares, during this period of transition. The combination of settlement and reliable food probably brought about “a longer period of fertility for the now better fed women,” says Bar-Yosef, setting the stage for cities and civilization.

    So it seems that the ancient Greek legends got it half right when they told how seeds fell throughout the world, sparking independent centers of domestication on many continents. But cities and civilization did not necessarily arrive at the same time as the seeds. Demeter's priest apparently gave out only one blessing at a time.

    • * All dates are calendar years.

    • The Transition From Foraging to Farming in Southwest Asia, Groningen, the Netherlands, 7–11 September.

  18. NEOLITHIC AGRICULTURE

    The Original Blended Economies

    1. Heather Pringle*
    1. Heather Pringle is a science writer in Vancouver, British Columbia.

    In the textbooks, preindustrial societies typically have two different ways to make a living: farming or hunting and gathering. But archaeologists studying ancient cultures are finding new evidence that people cultivated crops long before they settled down in one place or adopted full-blown farming (see main text). Recently anthropologists have found vivid examples of this middle way in historic cultures, which offer clues to how such mixed societies might have been organized in the past. Particularly in the Americas, many historic societies once labeled as hunter-gatherers turn out to have done a surprising amount of plant cultivation and management.

    Some cultures actively planted seeds, like the historic Cocopa people of northwestern Mexico, who supplemented their diets of wild game by sowing two species of panic grass on the floodplain of the Colorado River after the waters receded, says National Museum of Natural History archaeologist Bruce Smith. Other peoples simply altered the landscape to change the mix of plants. The historic Kumeyaay people of southern California, for example, burned ground cover to eliminate competitors for their favored wild plants. “In a lot of environmental, social, and cultural situations, populations aren't forced into a developmental trajectory that leads to agriculture,” says Smith. “They find solutions that are a better fit.”

    One of the most dramatic examples comes from published ethnographies of the Owens Valley Paiute in eastern California. Based on descriptions given by Paiute elders during the 1920s and '30s to American anthropologist Julian Steward, these writings describe how the Paiute propagated wild hyacinth, nut grass, and spike rush—root crops that thrived naturally in swampy meadows bordering the Owens River. Each year, Paiute men dammed tributary creeks in nearby hills and built irrigation ditches up to 6 kilometers long to the swampy meadows in the valley, thus creating hectares of new habitat for the crops. Even though they didn't plant seeds, notes Smith, “they're expanding the habitat of naturally occurring plants to increase their yield and productivity.”

    The Paiute dismantled their dams every year, so without historic records their work would have been invisible to archaeologists. But as researchers begin to look for the signs of such low-level food production, ancient examples are turning up. In the American Southwest, for example, Suzanne Fish, an archaeologist at the Arizona State Museum in Tucson, has recently identified rock mulching beds that prehistoric peoples in Arizona built nearly 1500 years ago for stands of agave, cultivated for both food and fiber. These early food producers, says Fish, “were transplanting the agave to lower elevations in areas where it's too hot and dry for it to normally grow. Mulching gives it a moisture advantage.” Smith agrees, concluding, “It really is one of those rare situations where this shows up archaeologically.”

  19. NEOLITHIC AGRICULTURE

    Reading the Signs of Ancient Animal Domestication

    1. Heather Pringle*
    1. Heather Pringle is a science writer in Vancouver, British Columbia.

    Over the millennia, humans seeking a steady source of food, hides and wool, and companionship have tamed everything from wolves to turkeys to guinea pigs. Learning when—and why—each of the more than two dozen domesticated animals was brought under human rule has been a continuing quest for archaeologists. Now researchers are shaking up their old conclusions by using more sensitive techniques, such as tracing demographic patterns in bone assemblages, to tease out the signature of human handling. So far such methods are pushing back the dates of domestication of one animal—pigs—revealing animal husbandry in what is now southeastern Turkey long before cultivation began there. More examples may follow. The findings are “causing quite a stir,” says Bruce Smith, an archaeologist at the National Museum of Natural History in Washington, D.C. “People are now going back and looking at other animal species.”

    Traditionally, the first farm animals were thought to be wild goats and sheep, tamed in southwest Asia around 10,000 years ago* by sedentary cereal farmers who had wiped out the local wild game and needed new sources of meat and hides. Domestic pigs and cattle followed around 9000 years ago. And the earliest firm evidence of dairy farming, from art and written texts, isn't until about 6000 years ago, although new dates could come from a new method for identifying milk fat residues on pottery sherds, reported on page 1478 of this issue.

    Most archaeologists rely on a more mundane characteristic to identify domestic herds: size. Researchers assume that early domestic goats, sheep, pigs, and cattle were smaller than their wild cousins. Early pastoralists, the theory goes, kept their animals in worse conditions than in the wild and selected for smaller, more easily subdued males. “Who would you choose?” asks Melinda Zeder, curator of Old World archaeology and zooarchaeology at the Smithsonian Institution. “The nerdy goat with the glasses or the bully on the playground?”

    But in a controversial new study, as yet unpublished, Zeder tests both the size idea and a newer indicator—a distinctive pattern of mortality that distinguishes herds from hunters' prey. Brian Hesse, a zooarchaeologist at the University of Alabama, Birmingham, reasoned that ancient pastoralists, like modern ones, probably tried to get as much meat as they could from their goats while still ensuring the herd's survival. The obvious strategy is the one still used around the world today for managing livestock: raising females to maturity and keeping them until they quit producing offspring, while butchering most males young and keeping only a few older males as breeding stock. “So there should be a very distinctive marker in the demography,” says Zeder, “and that should be instantaneous with the early period of managing.”

    To hone her strategy for spotting this transition, Zeder examined nine different bones of a control group of 40 modern-day wild and domestic goats of known sex from Iran and Iraq, where goats are thought to have been first domesticated. She could reliably distinguish goats from sheep and determine the animals' age at death, based on the sequence of bone fusion from 10 to 36 months. She could also determine the animals' sex, because the bones of the males were consistently bigger than those of similar-aged females.

    Encouraged, she turned to tens of thousands of goat bones from eight sites in Iran and Iraq, ranging from Paleolithic hunter-gatherer caves to two Neolithic villages. She found little size difference between goats at the 9800-year-old Neolithic village of Ganj Dareh and goats hunted by Middle and Upper Paleolithic bands more than 40,000 years earlier.

    But she did find a significant difference in mortality patterns. In the early sites, almost all the male goats were 36 months old or older at the time of death; the less numerous females were younger, suggesting that hunters had targeted male goats in their prime. But at Ganj Dareh, few billies lived past 24 months, while almost all nannies survived to 36 months or more. This suggests that “they are allowing the females to live as breeding stock,” says Zeder.
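
    That demographic signature can be caricatured in a short sketch using invented bone lists: a hunted assemblage dominated by prime-age males, and a managed one in which young males are culled while females survive to breeding age. It does not reproduce Zeder's data or her analysis.

```python
# Minimal sketch with invented bone counts: the demographic signature of
# herd management. Hunted assemblages should be rich in prime-age males;
# managed herds should show young males culled and females kept to
# breeding age. Nothing here reproduces Zeder's data or methods.

def share_surviving(bones, months):
    """Fraction of animals in `bones` (list of (sex, age-at-death in months))
    that lived to at least `months`."""
    ages = [age for _, age in bones]
    return sum(age >= months for age in ages) / len(ages) if ages else 0.0

# Hypothetical assemblages: (sex, age at death in months)
hunted  = [("M", 40), ("M", 48), ("M", 60), ("M", 38), ("F", 20), ("F", 28)]
managed = [("M", 14), ("M", 18), ("M", 20), ("F", 40), ("F", 52), ("F", 60)]

for name, bones in [("hunted", hunted), ("managed", managed)]:
    males   = [b for b in bones if b[0] == "M"]
    females = [b for b in bones if b[0] == "F"]
    print(f"{name:8s} males to 36 mo: {share_surviving(males, 36):.0%}   "
          f"females to 36 mo: {share_surviving(females, 36):.0%}")
```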

    The only evidence Zeder found for size reduction came at the Neolithic village of Ali Kosh, which new radiocarbon dates place at 9000 years ago. Zeder suggests that the animals were smaller there because it lies south of wild goats' natural range, so the animals were kept in hotter, harsher conditions—and females could no longer be bred with big wild males. Thus size reduction, rather than being the first sign of domestication, might instead indicate that animals had been transported beyond their original range or were no longer being bred with the wild type, Zeder suggests.

    Not everyone is persuaded. For example, Harvard University faunal analyst Richard Meadow argues that some of the bones Zeder used, in particular the toe bones, don't accurately reflect an animal's size; he's not ready to give up on size reduction as an indicator of domestication. But other researchers, such as Curtis Marean, a zooarchaeologist at the State University of New York, Stony Brook, say Zeder's analysis is an important step forward. “It shows that the old idea that body size of the animals is directly related to domestication really doesn't fit the evidence,” says Marean.

    Zooarchaeologist Richard Redding of the University of Michigan, Ann Arbor, agrees, and indeed Zeder's demographic patterns fit well with his recently published study of ancient pig bones at the site of Hallan Chemi in southeastern Turkey. For years, faunal analysts pointed to the declining size of pigs' second and third molars as a key trait that, for whatever reason, accompanies pig domestication, and they traced the earliest domestic pigs to a 9000-year-old village in Turkey. But Redding now believes he has found earlier evidence, by applying demographic criteria like Zeder's.

    He analyzed animal remains found in layers at Hallan Chemi and noted that in early layers dating to about 11,500 years ago, pig bones made up just 10% to 15% of the fauna and were almost evenly split between male and female. But in the later layers, dating from 11,000 to 10,500 years ago, pig bones climbed to 20%. “They also become very heavily biased toward female, and they become very young. So the inhabitants are killing suckling pigs,” Redding says. All this happened before domestic cereal grains appear at the site, indicating that the people at Hallan Chemi were herding pigs before they began to farm grain. If Redding is right, the inhabitants of Hallan Chemi are the world's first known herders—and pigs, not goats or sheep, were the first farmyard animals to start on the long road to full domestication.

    • * All dates are calendar years.

  20. ART

    Evolution or Revolution?

    1. Tim Appenzeller

    Human artistic ability burst forth in an explosion of creativity 38,000 years ago in ice age Europe—but was this the world's first flowering of artistic talent?

    Sometime around 250,000 years ago, an early human living on the Golan Heights in the Middle East picked up a lump of volcanic tuff the size of a plum and started scratching at it with a harder stone, deepening its natural crevices. Not long afterward, a volcanic eruption buried the soft pebble in a bed of ash, preserving it from erosion. A quarter of a million years later, in 1980, archaeologists dug it up, and since then, the pebble has been the object of rapt attention—far more, perhaps, than it got when it was new. By chance or design, those long-ago scratchings created what looks like a female figure—and a puzzle for the archaeologists who study the beginnings of art.

    To many archaeologists, art—or symbolic representation, as they prefer to call it—burst on the scene after 50,000 years ago, a time when modern humans are widely thought to have migrated out of Africa to the far corners of the globe. These scholars say the migrants brought with them an ability to manipulate symbols and make images that earlier humans had lacked. An explosion of art resulted, its epicenter in ice age Europe starting about 40,000 years ago, when most anthropologists believe modern humans were replacing the earlier Neandertal people. The new Europeans decorated their bodies with beads and pierced animal teeth, carved exquisite figurines from ivory and stone, and painted hauntingly lifelike animals on the walls of deep caves.

    Some recent discoveries have strengthened this picture. Hints of art and personal ornaments have been found in Africa from just a few thousand years before the artistic explosion in Europe, supporting the idea that a worldwide migration of protoartists did begin 50,000 years ago in Africa. As Richard Klein of Stanford University puts it, “There was a kind of behavioral revolution [in Africa] 50,000 years ago. Nobody made art before 50,000 years ago; everybody did afterward.”

    But other developments have raised awkward questions about this “big bang” theory of art, as some critics call it, hinting that art and the sophisticated cognitive abilities it implies may have a longer history. After years of doubt, most archaeologists accept that the so-called Berekhat Ram object from the Golan Heights is the work of human hands, although there is no consensus about what—if anything—it means. Neandertal sites in Europe, some of them well over 40,000 years old, have yielded a polished plaque split from a mammoth tooth, bones that may have been incised for decorative purposes, and layers of ochre—a red pigment that early humans may have used to decorate their bodies. Ochre is also abundant at early sites in Africa, and ochre “crayons” have turned up at ancient rock-shelters in northern Australia, in layers that may be nearly 60,000 years old. “We're seeing more and more of these things popping up all over the place,” says Paul Bahn, an independent archaeologist in England.

    And 3 years ago, cave art specialists were stunned when carbon dating showed that virtuoso paintings at Grotte Chauvet in France may be more than 32,000 years old, meaning they were created not long after modern humans arrived in Europe. “I simply cannot conceive of the Grotte Chauvet paintings appearing out of nothing,” says Bahn.

    Perhaps most telling, many archaeologists now think an array of grooved teeth and other ornaments from a cave called the Grotte du Renne, at Arcy-sur-Cure in central France, is the handiwork of Neandertals. The age of the Arcy deposits is in dispute; most archaeologists think they date to around 35,000 years, a time when modern humans were already spreading into Europe and making stunning art of their own. But the date could be as early as 45,000 years ago, before modern humans arrived. To some researchers Arcy puts the lie to arguments that nonmodern humans like the Neandertals did not—perhaps could not—express themselves in art and ornament. It supports the view that artistic habits going back tens or even hundreds of thousands of years could have prepared the ground on which the ice age explosion took place.

    The debate is more about the significance of this early evidence than about its reality. Traditionalists—call them explosion theorists—don't doubt that humans before 50,000 years ago sometimes left artifacts that appear decorative or symbolic. But they argue that the objects are so rare and crude that they can hardly be taken seriously as part of a systematic symbolic representation of the world. As Paul Mellars of the University of Cambridge puts it, “Everything that's ever claimed to be Neandertal is so amorphous, so lacking in crisp representation. … There's always this massive question of whether it's just someone doodling.” What impresses him, he says, “is the contrast between that and the clarity you get in the Upper Paleolithic”—the time after 40,000 years ago when modern humans populated Europe.

    Art's big bang.

    Even for archaeologists who focus on earlier times and other continents, there's no denying the artistic explosion that took place in ice age Europe. Some of the earliest confidently dated signs, from a site called Kostenki 17 in Russia, are 38,000-year-old beads and pendants of stone, animal teeth, and marine fossils. After that, ornaments and imagery proliferated. In well-dated 35,000-year-old deposits at a rock-shelter called Abri Castanet in southwestern France, says Randall White of New York University, “I have more material in a few square meters than [there is] in all the rest of the world up until then.”

    The ornamental objects at Abri Castanet are beads—thousands upon thousands of them, in all stages of manufacture, made of mammoth ivory and soapstone. And within a few thousand years, the artistic range of these first modern Europeans had broadened to expressive carvings of animals, enigmatic figurines of women in the last stages of pregnancy, and the painted lions, rhinos, bears, and other animals that romp across the walls at Grotte Chauvet. “Between 38,000 and 33,000 years, everything is there, including Grotte Chauvet,” says White.

    But what could have touched off this explosion? Klein and a few others think the answer lies in biology—some change in the wiring of the brain that enabled humans to innovate, think symbolically, and make art. “My view is that modern human behavior was a biological advance,” he says. Human ancestors in Africa looked anatomically modern by 150,000 years ago. But Klein thinks an additional evolutionary step, hidden in the brain, came 50,000 years ago. It gave modern humans the cognitive wherewithal to migrate to the distant reaches of Europe and Asia, replacing archaic human populations as they went.

    And, gratifyingly for Klein, Africa is where some of the earliest indisputable body ornaments are turning up. In last April's Journal of Archaeological Science, Stanley Ambrose of the University of Illinois, Urbana-Champaign (UIUC), describes his excavations at a rock-shelter in the Rift Valley of Kenya, at a site called Enkapune Ya Muto. There he found a cache of beads made of ostrich eggshell, blanks, and shell fragments. Some of the beads, says Ambrose, “are shiny, obviously worn, as if someone was wearing them as part of some ornament.” They must have served as symbolic markings, he says, “expressing an awareness of the self and how to enhance it.”

    It's the same phenomenon seen in Europe 38,000 years ago—but it may be several thousand years earlier at Enkapune Ya Muto, says Ambrose, who has carbon-dated the shells and come up with an age of at least 40,000 years. “These early ostrich eggshell beads are perhaps the earliest indicator” of symbolic behavior anywhere, says Klein. “And it's very important that they first appeared in Africa,” just as expected if the crucial biological innovation had occurred there.

    Other archaeologists agree with Klein about the sudden flowering of art but reject his biological explanation. “I don't think it's a mutation for the art gene,” says Olga Soffer of UIUC. “We're totally on the wrong track when we're asking the question of biology.” White agrees. “I think that what we call art is an invention, like agriculture, which was an invention by people who were capable of it tens of millennia before.”

    What spurred the invention is a matter of speculation, although many archaeologists think that, at least in Europe, it could have been part of a social change triggered by a challenging new environment. Chasing wide-ranging herds in the shadow of the ice sheets, modern humans thrived by developing an intricate social system, with a complex division of labor and long-distance ties. “That's one way to survive in an environment where you've got scattered and somewhat unpredictable resources,” says Philip Chase of the University of Pennsylvania, Philadelphia. Body ornaments and art might have helped express those new social relations.

    Or they may have served to distinguish modern humans from the other kinds of people they were meeting as they moved into new and perhaps hostile territory. Says White: “I have a hard time thinking it's coincidental that all of this was going on [in Europe] at a time when you have quite a different hominid moving into territory occupied for 300,000 to 400,000 years [by earlier humans]. A major concentration of art is right where Neandertals were being replaced by modern humans, all the way from the Russian plain to the Iberian peninsula.” Modern humans naturally sought ways to distinguish themselves from their neighbors and strengthen their own cultural ties, he suggests, and art was one solution.

    It's old, but is it art?

    A few researchers, however, think they have a more natural explanation for the ice age explosion: It was grounded in a tradition going back tens or even hundreds of thousands of years and glimpsed fitfully in sites around the world. Alexander Marshack, for instance, an archaeologist associated with the Peabody Museum at Harvard University, has campaigned for years to persuade his colleagues that ice age Europe can't be the beginning of the story.

    Thirty years ago, he took his first close look at 30,000-year-old ivory animals from Vogelherd, in Germany, then considered to be some of the earliest art. What he saw, he says, were works “so sophisticated they couldn't have happened instantaneously. Making them required thousands of years of technology, of symboling, of making stories about the animals.” Early cave paintings also showed signs of a rich cultural context that, he believed, simply could not have emerged full-blown in a few centuries. Other archaeologists argue that a few centuries is plenty of time for culture to blossom. But Marshack concluded that “there had to be a long prior history, so I began looking for earlier objects.”

    Here and there, in material from sites around the world, he has found them. From Quneitra in Israel comes a bit of flint incised with concentric arches some 54,000 years ago. From a site called Tata in Hungary comes an enigmatic plaque made of polished mammoth tooth, 50,000 to 100,000 years old, its crevices filled with red ochre. At a 250,000-year-old rock-shelter site in the Czech Republic, archaeologists found a bed of ochre and the rubbing stone used to make the powder—not art, but perhaps the means of making it. And then there is the 250,000-year-old carving from Berekhat Ram, which Marshack has studied closely and interprets as the figure of a woman with an elaborate coiffure.

    To Marshack, the Berekhat Ram object, like the later artifacts from Tata and Quneitra, is a trace of a capacity for making symbols that was well developed long before the ice age explosion. True, he says, it's just one piece of “art” from a span of tens of thousands of years, but it should not be dismissed. “It may be unique, but its complexity raises questions that have to be addressed.” It suggests, he adds, that other symbolic objects have simply been lost from the record: “Chances are that if they were making images of volcanic tuff, they were making images of wood,” which would have decayed. One reason ice age art is so abundant, he adds, is that modern humans in Europe worked durable materials such as mammoth ivory and bone.

    Marshack isn't the only one coming up with such evidence. A smattering of suggestive artifacts has come from Neandertal sites in Europe and Russia: bits of bone with what look like decorative markings, even a 43,000-year-old bone “flute” from Slovenia. But many of those claims have withered as researchers including Francesco d'Errico and Paola Villa of the Institute of Quaternary Prehistory and Geology in Talence, France, have taken a close look at the artifacts. Animal digestion, butchery marks, and even the tracks of blood vessels can easily explain many of the bone markings, says d'Errico. And both d'Errico and Chase have concluded that, as d'Errico puts it, the supposed flute “is absolutely natural and is the result of gnawing by animals.”

    Some of Marshack's artifacts, however, have held up better. His analysis of the Berekhat Ram object, published last year in Antiquity, seems to have convinced most of his colleagues that it was shaped artificially, and a few of them even accept it as an image. “It's extremely clear that it's humanly enhanced. It's definitely an art object,” says Bahn. D'Errico and April Nowell of the University of Pennsylvania, Philadelphia, actually tested Marshack's claims by going to the site and comparing the object with hundreds of other bits of tuff. They, too, are persuaded that it is human handiwork. “No other pieces have this kind of modification,” d'Errico says.

    But he isn't ready to call it art. “I'm not sure the people who made the grooves were people using symbols. Also, one case does not explain a lot.” Exactly, says Cambridge's Mellars. The uniqueness of artifacts like the Berekhat Ram carving “totally undermines their role in a symbolic communication system,” he says. Chase sums up the doubts about Berekhat Ram and similar artifacts: “Was it just a kid who was sitting there scratching on something? Or did it have some function we can't recognize?”

    Artful Neandertals?

    One set of decorative objects apparently made by nonmodern humans can't be dismissed as anomalies, however. At the Neandertal site of Arcy-sur-Cure, archaeologists in the 1950s and 1960s excavated not just one or two but dozens of animal teeth pierced and grooved for use as ornaments, along with a handful of ivory beads and pendants. No other Neandertal site has held anything like this trove of symbolic objects. The same site also yielded bone tools and stone points made by more modern techniques than those of earlier Neandertals. But most of the doubts about whether Neandertals were responsible for these objects faded when Neandertal bones were identified first at another site with the same “Châtelperronian” tool technology and then, 2 years ago, at Arcy itself.

    Now archaeologists are debating what the Neandertal ornaments at Arcy mean for the ability of nonmodern humans to traffic in symbols and make art. Although the exact age of the Arcy deposits is uncertain, most carbon dates from the site overlap with dates for modern humans in France and Spain. That leaves plenty of room for archaeologists to argue over whether the Arcy Neandertals developed art on their own or were imitating their trendy neighbors.

    At one pole is João Zilhão of the University of Lisbon in Portugal, who published an assessment of Arcy with d'Errico and others in the June issue of Current Anthropology. Zilhão says that at other Châtelperronian sites, the Neandertal deposits always underlie the layers of artifacts left by modern humans, implying that the Neandertal activity came first. And he puts his money on the earliest of the widely varying carbon dates obtained from the layers at Arcy, roughly 45,000 years old—a date that would mean the Neandertals made the objects well before modern humans were around to set an example. Zilhão says the evidence is clear: “Strictly empirically, Neandertals invented [ornaments] first.”

    At the opposite pole is Paul Mellars, who says Zilhão is wrong about the timing. “Most if not all of the Châtelperronian is post-38,000 radiocarbon years,” he says. “It's a phenomenon that occurs after the arrival of moderns in northern Spain.” The fact that Châtelperronian artifacts are found below those of modern humans just shows, he says, that the moderns moved into the abandoned caves and rock-shelters after the Neandertals vanished. In the meantime, the two groups could have had plenty of contact along a frontier that probably ran along the Pyrenees, with Neandertals to the north and modern humans to the south.

    It's there that the Neandertals would have taken their artistic cues from their new neighbors, says Mellars. “Here were these ‘supermen’ coming over the hill, wearing fancy beads, with better weapons, better hunting skills—the Neandertals would have to be staggered by this.” They would inevitably try to copy what they saw, if only because the modern style, pierced fox teeth and all, had cachet. The artifacts that resulted should not be taken as a sign of an independent artistic capacity, says Mellars. “To say that the beads must have had exactly the same symbolic meaning to Neandertals as they did to moderns—that's a non sequitur.”

    Most archaeologists agree with Mellars about the timing. But some note that the Neandertal beads aren't direct imitations of what nearby modern humans were making. The people at Arcy chose different kinds of animal teeth and used different techniques to work them, which leads these archaeologists to suggest that the Neandertals were drawing inspiration from their neighbors rather than simply mimicking them—making beads in their own way, for their own cultural purposes.

    If so, the Arcy deposits could still have unsettling implications for the idea that art, and the complex culture it implies, is unique to modern humans. Says Chase, “If this really is symbolism, and taken at face value it is, then you've got Neandertals who were capable of the same symbolic behavior as modern humans.” Klein is also mystified. “I want the Neandertals to be biologically incapable of modern behavior. So [the Châtelperronian] is a real problem.”

    Zilhão and others hope to do more dating of the Arcy deposits, which might settle the issue if it shows that the ornaments really do predate modern humans in Europe. In the absence of such a tie breaker, the dispute will continue—pitting big bang theorists against gradualists, and archaeologists who stress the overall pattern of evidence against those who focus on the puzzling exceptions. After all, the real answer about what is art and what is not lies in the minds of its makers—and they are long gone.

  21. ANTHROPOLOGY

    No Last Word on Language Origins

    1. Constance Holden

    Human beings were anatomically ready to speak more than 150,000 years ago—but clear evidence that they were doing so does not appear for 100,000 years afterward

    Nothing is more human than speech. Our closest primate relatives, chimpanzees, use tools, have intricate social lives, and show signs of self-awareness. But they lack spoken language, and all the capacities it implies, from rapid and flexible manipulation of symbols to the ability to conceptualize things remote in time or space. For archaeologists eager to learn how we became human, when and how language emerged is a crucial question.

    Unfortunately, “speech does not fossilize,” notes anthropologist John Shea of the State University of New York, Stony Brook. Writing appears 6000 years ago, and there is scant evidence for the existence of notation before 13,000 years ago. How long might language have been around before that? The only evidence is indirect, and it suggests two wildly different answers.

    [Figure: Sound systems. The human upper respiratory tract made speech possible as the high larynx seen in species like the chimp (left) dropped, creating an expanded pharynx (red). After J. Laitman, La Recherche.]

    Fossils show that the raw brain capacity for complex language, along with the necessary mouth and throat anatomy, was probably in place before 150,000 years ago. But most of the behaviors thought to depend on language did not appear until 40,000 years ago—the so-called Upper Paleolithic explosion that is manifested most strikingly in Europe. That was when tools, burials, living sites, and occasional hints of art and personal adornment revealed beings capable of planning and foresight, social organization and mutual assistance, a sense of aesthetics, and a grasp of symbols. “Everybody would accept that by 40,000 years ago, language is everywhere,” says Stanford University archaeologist Richard Klein.

    That leaves at least 100,000 years of wiggle room. Into this time gap fall rare hints of modern behavior—burials and glimpses of trade, art, and sophisticated tools—that have allowed some archaeologists to argue that humans were speaking, and thinking the complex thoughts that go with speech, long before they left a plentiful record of these activities. Others, however, argue that there is no unequivocal evidence for modern human behavior before about 50,000 years ago. “At one extreme there are people who think that all hominids are ‘little people’ and at the other that the really ‘human’ things about human behavior are really very late,” says Alan Walker of Pennsylvania State University in University Park.

    [Figure: Delayed takeoff. The anatomy needed for speech was in place before 150,000 years ago, but the signs of complex language don't proliferate until around 40,000 years ago.]

    Judging from anatomy alone, speech of some sort—although not like that of modern humans—has probably been around for at least a million years, says Philip Lieberman of Brown University. Based on comparisons of modern humans with fossils and living apes, he says, the hominid breathing and swallowing apparatus was even then beginning to reorganize in areas affecting the capacity for speech. Skull shape was becoming more humanlike, he says, with the distance between the spinal column and the back of the mouth decreasing, indicating a shorter mouth better adapted for speech of some kind—albeit nasalized and phonetically limited.

    Meanwhile, the other precondition of modern language, a big brain, was also emerging. The chimp-sized brains of the early australopithecines almost doubled in a growth spurt starting 2 million years ago. Then a second surge, beginning around half a million years ago, increased hominid brain size by another 75%, according to Erik Trinkaus of Washington University in St. Louis, bringing it to the 1500 cubic centimeters of today. At the same time, brain organization was shifting, with dramatic growth in areas implicated in speech, in the frontal and temporal lobes.
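
    These figures are roughly self-consistent. As an illustrative check only—the starting value is an assumption; published estimates for early australopithecine cranial capacity generally fall around 400 to 450 cubic centimeters—doubling a chimp-sized brain and then adding another 75% lands close to the modern figure Trinkaus cites:

    $$430\ \mathrm{cc} \times 2 \times 1.75 \approx 1500\ \mathrm{cc}$$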

    By at least 200,000 years ago, says anatomist Jeffrey Laitman of Mount Sinai Medical Center in New York City, African hominids had cranial bases “identical to [those of] modern humans.” The larynx had also descended, signifying that the tongue was no longer confined to the oral cavity but was now rooted in the throat, a development necessary for rapid and versatile vocalization. “By 100,000 to 150,000 years ago, you know you've got modern speech—there's no other reason to retain this crazy morphology,” says Lieberman. He points out that the speech package is costly—not only is the big brain an energy gobbler, but a dropped larynx offers no benefits other than speech, and it raises the risk of choking.

    Words and deeds

    And thereby hangs a mystery. Even though modern humans were equipped to talk up a storm, there are few definitive signs, for tens of thousands of years, of any of the behaviors anthropologists associate with language: complex tool technology and other signs of conceptualization and planning, trade, ritual, and art. Indeed, in the Middle East, where modern humans co-existed with the more archaic Neandertals for tens of thousands of years starting perhaps 90,000 years ago, the two groups behaved pretty much alike, says Klein, even though Neandertals may not have been capable of complex speech (see sidebar).

    All that changes about 40,000 years ago, in the Upper Paleolithic revolution. Art and personal ornaments, which proliferate at about this time in Europe (see p. 1451), are far and away the clearest sign, says Ian Tattersall of the American Museum of Natural History in New York. “Empathy, intuitive reasoning, and future planning are possible without language,” he says. So are impressive tools such as the aerodynamically crafted 400,000-year-old wooden spears reported last year to have been found in a German coal mine. But “it's difficult to conceive of art in the absence of language,” says Tattersall. “Language and art reflect each other.” Both involve symbols that are not just idiosyncratic but have “some kind of socially shared meaning,” adds Randall White of New York University.

    “Socially shared meaning” shows up around 40,000 years ago in other realms besides art—such as tools. Harold Dibble of the University of Pennsylvania, Philadelphia, explains that until that time, the stone tools made by human ancestors don't fall into specialized types or vary much from one region to another. “The same three or four tools exist all over the Old World,” he says, adding that what have been described as different types of tools are often the same things at different stages of resharpening and reduction. “There is nothing in these kinds of technologies that necessarily forces us to assume a linguistic mode of transmission,” says Dibble.

    But at the beginning of the Upper Paleolithic, new qualities become evident. The transition was especially abrupt in Europe, where so-called blade technology, based on standardized “blanks” that can be modified to make a wide range of tools, took over. Highly standardized tools for specific purposes, such as hunting particular kinds of animals, appear—and specialized tools, says Paul Mellars of Cambridge University, are a clue to “specialized language” on the part of their makers. Toolmakers also began exploiting new materials, namely bone and ivory, which demanded sophisticated carving skills that soon led to a proliferation of styles and designs. Once tools start to show “stylistic variability,” says Dibble, we are witnessing the injection of culture into tools. And transmission of culture in any meaningful way requires language.

    To some researchers, these dramatic transformations imply that one more biological change, beyond the expansion of the brain and the change in throat anatomy, had taken place, making humans capable of fully modern language. Klein, for example, posits a “fortuitous mutation” some 50,000 years ago among modern humans in East Africa that “promoted the modern capacity” for rapid, flexible, and highly structured speech—along with the range of adaptive behavioral potential we think of as uniquely human. He doesn't see how anything else, such as a social or technological development, could have wrought such “sudden and fundamental” change, which modern humans then carried out of Africa and around the world.

    Steven Mithen of the University of Reading in the U.K. also believes evolution did a late-stage tinkering with the brain, one that produced what he calls “fluid” human intelligence. Both apes and early humans, he believes, operate with what he calls a “Swiss army knife” model of intelligence. That is, they have technical, social, and “natural history” or environmental modules, but there's little cross talk between them. This could explain, for example, why humans were deft at shaping stones to butcher animals, but it never occurred to them to transform an animal bone into a cutting tool. At some point around the 40,000-year mark, Mithen believes the walls between these modules finally collapsed, leaving Homo sapiens furnished with the ability to generalize, perceive analogous phenomena, and exercise other powerful functions of the integrated human intelligence. Only then would language have been fully mature.

    Others say that instead of reflecting a final step in brain evolution, language might have crystallized as part of a social change, perhaps triggered by population growth. “I don't subscribe to the cognitive model of a new bit gets added on,” says Clive Gamble of the University of Southampton in the U.K. “I would argue it's changes in the social context”—for example, the complexity of behavior needed for large numbers of people to live together.

    The revolution that wasn't?

    Or maybe there was no linguistic watershed 40,000 years ago after all. Alison Brooks of George Washington University in Washington, D.C., and Sally McBrearty of the University of Connecticut, Storrs, have called the Upper Paleolithic revolution “the revolution that wasn't,” arguing that at least in Africa, the modern behaviors thought to go hand in hand with language emerged gradually, well before 40,000 years ago. Their case rests in part on a set of barbed bone spear points that Brooks and her colleagues found at Katanda, in the Democratic Republic of Congo (Science, 28 April 1995, pp. 495, 548, 553). Bone technology is associated with the Upper Paleolithic in Europe, says Brooks—and yet these bone points have been dated to between 80,000 and 90,000 years ago. And stone points designed to tip spears or arrows, although very rare in Europe at this time, show up in various places in Africa more than 100,000 years ago, she says.

    The Katanda site also showed other signs of sophistication: “seasonal scheduling” of freshwater fishing, says Brooks, as revealed by the remains of large catfish—and no sign of juveniles—suggesting they were caught at spawning time. Elsewhere in Africa, there is evidence of a large “trading network” as early as 130,000 years ago, say Brooks and McBrearty. Two sites in Tanzania have yielded pieces of obsidian, used to make points, found 300 kilometers away from their origin in Kenya's Central Rift Valley. Brooks also cites “a tremendous elaboration in pigment use” in the form of red ochre, presumably used for decoration and body adornment, notably at a 77,000-year-old site in Botswana.

    Brooks believes all these lines of evidence spell the existence of language. All the signs are in the record, she says, including “complicated exchanges … planning depth, and capacity for innovation.” As for “stylistic variability” in tools, Brooks says there's plenty in 80,000-year-old African stone points. “You can pick up a stone point … and in eight cases out of 10 say what region it came from,” she says.

    Brooks and McBrearty's case for the early emergence of modern behavior and language is controversial, especially as it rests heavily on the presumed antiquity of the bone points, whose age was gauged by dating of surrounding sediments and nearby hippo teeth. Scientists have reservations about the dating techniques (Science, 10 October 1997, p. 220). Among the skeptics is Klein, who does excavations in South Africa. Of the bone points, he says, “I don't think those things are even remotely likely to be” 90,000 years old—especially because “the next oldest occurrence” of similar points is dated at 12,000 years ago. He also discounts the ochre data, saying “red ochre is all over the place” at early sites, including Neandertal ones, and could well have been used for some purpose other than decoration. Mellars is also skeptical, saying about the obsidian trade: “Human beings move around quite a lot. Even if there was some deliberate exchange, I don't see that necessarily as an index of anything exciting cognitively.”

    The hints of early language use don't end there, however. Two 90,000-year-old burials in Israel containing anatomically modern humans—from a time when the Middle East was ecologically an extension of Africa—unequivocally show ritual behavior, and the use of language that such behavior implies, says John Shea. One burial, at a site called Qafzeh, held a child buried with a deer antler. At the other, Skhul, the skeleton was found clasping the jawbone of a wild boar to its chest. Although any deliberate burial represents going “beyond the minimal necessary action for body disposal,” says Shea, the inclusion of grave goods casts the action into another realm of meaning—the socially shared meaning of arbitrarily assigned symbols that is at the heart of language.

    To some people, such as Brooks, these burials strengthen the case that modern behavior was well under way before the Upper Paleolithic revolution. Mithen sees them as a sign that the transition from Swiss army knife minds to “cognitive fluidity” was under way. Klein, on the other hand, is still dubious about the putative grave goods, saying it is extremely difficult to “distinguish what was an intentional act and a situation where something was accidentally incorporated.”

    There's one accomplishment that everyone agrees would qualify humans as fully modern, language-using people: getting to Australia. Even in the recent ice age, when sea level was lower, at least 100 kilometers of open water separated Australia from the nearest part of Asia. To reach Australia, humans had to build and provision sturdy boats—a sign not only of technological advancement and navigational skill but also of high levels of planning and cooperation, says Gamble.

    Some archaeologists believe there is persuasive evidence that people managed to do all this by 60,000 years ago, based on dating at two stone tool sites in northern Australia. But on this as on so many other hints of modern behavior, consensus is elusive. The dating was done by thermoluminescence, a technique that has not always proven reliable. Gamble says that the more reliable technique of radiocarbon dating, although capable of going back at least 40,000 years, has never identified an archaeological site in Australia older than 35,000 years.

    Even if the uncertainties about artifacts and dates can be resolved, the question of whether fully modern language emerged in a sudden biological or cultural step 40,000 years ago or gradually, over the preceding tens of thousands of years, won't be settled. “The fundamental problem here is there is only one species on the planet who has language,” says Duke University anthropologist Matt Cartmill. “We have one data point. With so many things unique to humans, we don't know what language is necessary for or what is necessary for language.”

    And there will still be plenty of room to argue that the scarcity of evidence for symbolic behavior before 40,000 years ago doesn't prove it wasn't happening. Leslie Aiello of University College London, for example, says the evidence might have all perished—after all, she notes, it would be very difficult to pick up signs of symbolic abilities from the archaeological record of the historical California Indians, who had a complex culture but produced very few artifacts in durable materials like stone.

    Shea agrees, noting that an archaeologist “is like the drunk in the old joke who looks where the light is good” for his lost keys. Future finds could alter the hominid story: Although there are more than 100 excavated sites in southwestern France alone, Brooks notes, all of East Africa, the likely birthplace of modern humans, has just a dozen; and in Asia the record is mostly a big question mark. Thus paleoanthropology is a game for philosophers as well as scientists, and there is plenty of room for free play of the romantic imagination.

  22. ANTHROPOLOGY

    How Much Like Us Were the Neandertals?

    1. Constance Holden

    Next to our own selves, there is no more interesting hominid than the Neandertal. Neandertals are the humans manqué, the evolutionary dead end: eerily like us, but different in major ways. And they are the subject of one of the hottest ongoing debates in anthropology.

    How smart were these big-brained, stocky-bodied people, who inhabited Europe and the Middle East starting about 200,000 years ago? And what caused their relatively abrupt disappearance by 30,000 years ago? The Neandertals' reputation has oscillated over the years, and new evidence has sharpened the debate. Genetic data suggest a sizable gulf between Neandertals and modern humans, while recent discoveries hint that Neandertals had a brief technological golden age before vanishing.

    Last year, DNA testing of a Neandertal bone showed that these beings probably branched off the human line a half-million years ago, perhaps qualifying them as a separate species (Science, 11 July 1997, p. 176). But other lines of evidence have encouraged speculation that they may have been like us in one crucial respect: speech. One is the discovery in 1989 of a Neandertal hyoid bone—the bone that supports the larynx—in Kebara cave in Israel. Because it is a lot like a human one, it indicates, says archaeologist Francesco d'Errico of the Institute of Quaternary Prehistory and Geology in Talence, France, that “Neandertal abilities were also quite similar.”

    Earlier this year, anthropologists at Duke University reinforced that notion with a comparative analysis of the hole that carries motor nerves to the tongue, called the hypoglossal canal, in several hominid skulls. Chimp-sized in the 2-million-year-old australopithecines, the canal is significantly larger, falling in the modern human range, in both Neandertals and an earlier, 300,000-year-old skull. This suggests that “the vocal capabilities of Neandertals were the same as those of humans today,” Richard Kay and colleagues wrote in the 28 April Proceedings of the National Academy of Sciences.

    Cognitive scientist Philip Lieberman of Brown University disputes these claims. First, he says, you can't predict tongue shape—the critical factor for modern speech—from an isolated hyoid bone. Moreover, he says the Duke team based their calculations of the relative sizes of different species' hypoglossal canals on incorrect estimates of human tongue size and shape. Lieberman himself argues, from his 1971 analysis of a Neandertal skull from La Chapelle-aux-Saints in France, that proportions such as the distance between the hard palate and the spinal column would have made it impossible for Neandertals to speak with the clarity modern humans possess.

    Kay says that his finding still holds, and that Neandertals might have had speech “in every way as complicated as modern humans.” But others say Lieberman's conclusions are reinforced by Neandertals' other behavioral limitations. Harold Dibble of the University of Pennsylvania, Philadelphia, for example, says “the lack of art and the lack of clear evidence of symboling suggests that the nature of [Neandertal] adaptation [to their environment] was significantly different” from that of their successors. The difference shows up, for example, in their stone tools.

    Neandertals could do stone-knapping with the best of them, says Stanford University archaeologist Richard Klein. But over thousands of years this practice never seemed to lead to clear differentiation in types of tools. “They didn't make tools in the [different] standardized patterns you see later,” says Klein, referring to the tools of the modern people who arrived in Europe about 40,000 years ago. To him this difference suggests that the Neandertals “were only interested in a point or an edge” rather than conceptualizing a particular product.

    Then there is the Neandertal hunting record. In a special Neandertal supplement of the journal Current Anthropology in June, for example, archaeologist John Shea of the State University of New York, Stony Brook, defends Neandertal hunting prowess. He argues that their tool assemblages show they engaged in “intercept” hunting, which would require a knowledge of animal migration routes. On the other hand, according to Erik Trinkaus of Washington University in St. Louis, the high rate of broken bones and early death among Neandertals suggests that they engaged in more close-quarter combat with large animals than did modern humans, who had figured out safer strategies.

    In the past, some have claimed that Neandertals held ritual burials, which would have implied highly developed social behaviors and possibly even religion. But that belief was largely based on a 60,000-year-old Neandertal burial at Shanidar cave in Iraq, where pollen grains were taken to imply that the body had been covered with flowers. Many scientists now believe the plant material is an incidental intrusion. In reality, “the number of claimed Neandertal burials is extremely low,” and none has yielded convincing evidence for grave goods, says Dibble.

    As archaeologists learned in 1996, however, the Neandertals in France and Spain showed surprising new talents at the end of their evolutionary career after 40,000 years ago. They began making more sophisticated and diverse tools, and even, at one site, an array of beads and pendants (see p. 1451). These artifacts have led to a new surge of debate over whether Neandertals were finally expressing their symbolic potential or were just imitating their modern human neighbors.

    Whatever the answer, it may have been a case of too little, too late. For shortly after that, the Neandertal record vanishes. What drove them to extinction? Many scientists say that even without a difference in brainpower, the Neandertals would have been at a disadvantage. Archaeologist Ezra Zubrow of the State University of New York, Buffalo, has made a mathematical model based on skeletal data on the life-spans of the two populations. From it he concluded that with only a slight disadvantage in life expectancy, “it was easy to drive Neandertals to extinction under a wide range of conditions” because of their small populations. Shea adds that with their heavy frames and active lifestyle, their voracious energy needs might have hurt them “in competition with more energetically efficient modern humans.”
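
    Zubrow's actual model rests on skeletal life-span data and is not spelled out here, but the demographic logic he describes—small populations in which one group carries a slight mortality disadvantage—can be illustrated with a toy stochastic simulation. The sketch below is a hypothetical illustration only, not Zubrow's model; its population size, birth and death rates, and time horizon are assumptions chosen purely for demonstration.

    ```python
    # A minimal toy sketch (not Zubrow's published model): a small population is run
    # forward with random births and deaths to estimate how often it dies out within
    # a fixed horizon. All numbers are illustrative assumptions, not published figures.
    import numpy as np

    rng = np.random.default_rng(seed=1)

    def simulate(start_size, birth_rate, death_rate, years, cap=2000):
        """Return the population size after `years` of random births and deaths."""
        n = start_size
        for _ in range(years):
            births = rng.binomial(n, birth_rate)   # chance each person reproduces this year
            deaths = rng.binomial(n, death_rate)   # chance each person dies this year
            n = min(n + births - deaths, cap)      # cap keeps the toy model bounded
            if n == 0:
                break
        return n

    def extinction_probability(death_rate, trials=500):
        """Fraction of runs in which a population of 500 hits zero within 3000 years."""
        extinct = sum(simulate(500, 0.030, death_rate, 3000) == 0 for _ in range(trials))
        return extinct / trials

    if __name__ == "__main__":
        # A death rate only slightly above the birth rate -- a "slight disadvantage in
        # life expectancy" -- makes extinction far more likely for a small population.
        print("matched mortality (0.030):  ", extinction_probability(0.030))
        print("slight disadvantage (0.032):", extinction_probability(0.032))
    ```

    In this toy setup the matched population almost never dies out within the window, while the one carrying the small mortality penalty usually does—echoing, in cartoon form, the qualitative conclusion Zubrow reports, though the specific numbers carry no empirical weight.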

    Debates about Neandertal abilities have become colored with notions of political correctness, say archaeologists. “I've been accused of being racist for saying the Neandertals couldn't speak like us,” says Lieberman. Clive Gamble of the University of Southampton in the U.K., for one, doesn't understand why people need to make Neandertals something they weren't. “Neandertals are fantastic ways of realizing the alternative ways of humanness.”
