News this Week

Science  02 Mar 2001:
Vol. 291, Issue 5509, pp. 1676
  1. EUROPEAN SCIENCE

    Flagship E.U. Research Program Aims for Pan-European Panacea

    1. Robert Koenig

    Hoping to breathe new life into the Old World's fragmented scientific community, the European Commission last week approved the first draft of a 4-year, $16.2 billion research program that seeks to focus the European Union's disparate research programs on common goals. Slated to spend 17% more than its predecessor, the so-called Framework 6 (FP6), which will begin in 2003, would spur more pan-European scientific projects by channeling funds into big-ticket collaborations and paying for more scientists to country-hop. “This is potentially tremendous for research, especially projects that involve expensive instruments, such as synchrotrons and neutron sources,” says geophysicist Vincent Courtillot, research director of France's Science Ministry.

    The most striking aspect of the FP6 proposal, which must run a gauntlet of European institutions before it's finalized next year, is the European Research Area (ERA) concept. The brainchild of research commissioner Philippe Busquin, the ERA is meant to reduce what Busquin calls the “fragmentation and isolation” of Europe's national research efforts (Science, 21 January 2000, p. 405). As a whole, E.U. nations invest 1.8% of their gross domestic products in R&D (an anemic figure compared to the United States' 2.7% and Japan's 3.1%), and their combined research spending was about $64 billion less than U.S. R&D outlays in 1999. Although Framework programs represent only about 5% of Europe's total R&D spending, Busquin has been searching for better ways to use that money as a lever to boost the impact of multinational E.U. research.

    Picking up on the ERA theme, the FP6 draft proposes new ways to help national research ministries and granting agencies open up their programs to researchers in other countries. For example, the $2.8 billion proposed for “Structuring the European Research Area” includes a doubling (to nearly $1.7 billion) of the pot of money available for the popular “mobility” program that gives grants to European scientists who shift to labs in other E.U. countries. The ERA “is an idea whose time has come,” says Dutch astronomer Reinder van Duinen, president of the European Science Foundation, which also has launched initiatives to better coordinate European research. “Sometimes you need a collaborative approach to tackle a major problem.”

    The FP6 draft also promotes the ERA concept by encouraging labs and institutes to team up in larger research consortia when applying for grants, to achieve “a substantial reduction” in the number of projects and contracts. In addition, the program would hand out grants in fewer priority areas than Framework 5 now encompasses (see table). An E.U. official says the goal is “larger, critical-mass funding” and more incentives for national research efforts to dovetail with wider European research themes. “We've taken a more strategic point of view, to foster more concentration and to help integrate national research programs,” says the E.U.'s Richard Escritt, who helped develop the FP6 draft. Courtillot hopes that the new approach might “reduce the cumbersome administration” of Framework grants in Brussels.

    Taking the ERA idea a step further, several research ministries persuaded E.U. officials to include in FP6 the possibility for groups of E.U. nations to team up and apply for funding for major projects in areas such as genomics, nanotechnology, or supercomputer research. “We need to do some experiments to test how this could work,” including pilot trials among budding collaborators, says Courtillot, who last week met with his counterparts in Germany and the United Kingdom to begin hashing out ideas for such big projects.

    The proposed Framework program doesn't settle one hot-button issue: the extent to which Brussels should help support and maintain expensive European facilities such as synchrotrons and Rome's European Mouse Mutant Archive. Although the FP6 draft sets aside about $830 million for infrastructure initiatives, the policies are still being debated. The Research Directorate is putting the finishing touches on a new report, “A European Research Area for Infrastructures,” that's expected to urge FP6 to provide seed money for new scientific instruments or labs that have “a clear European dimension or interest.” But the directorate wants to avoid long-term commitments to paying part of the operating costs of European-level labs.

    Some European science managers who give the FP6 draft high marks for setting noble goals worry about how it will be implemented. For instance, Kai Simons, director of the Max Planck Institute for Molecular Cell Biology and Genetics in Dresden, Germany, supports the draft's incentives to spur pan-European collaborations, but he questions whether there would be enough grants for young scientists and enough funds for “generic” research that is “not bound to anything except quality.”

    As the FP6 draft winds its way through the European Parliament and the council of member nations before it is finalized early next year, the European Union's smaller states are expected to take aim at the large sums that could fund collaborations among bigger countries. But some scientists with a European vision hope that FP6 will become a milestone on the road to what virologist Paolo Lusso of Milan's San Raffaele Scientific Institute would like to see: what he calls “the research community of the ‘European nation.'”

  2. FOOT-AND-MOUTH DISEASE

    U.K. Outbreak Is Latest in Global Epidemic

    1. John Pickrell,
    2. Martin Enserink

    CAMBRIDGE, U.K.—What began in late February as a single pig farm blighted by foot-and-mouth disease (FMD) could spiral into a full-blown epidemic in the United Kingdom, experts say. The reappearance of the dread disease here seems to be the latest twist in a yearlong rampage around the world of a virulent strain of foot-and-mouth virus.

    As Science went to press, 16 farms across the U.K. had reported cases and been quarantined, and the government had prohibited the movement of susceptible animals: primarily cows, sheep, pigs, and goats. Thousands of animals have been slaughtered and burned atop huge pyres in a bid to halt the disease's spread. The U.K. may be on the brink of a reprise of the 1967 scourge that saw 500,000 animals destroyed. “There is every likelihood that the disease will reach epidemic proportions,” says Liz Glass, a veterinary immunologist at the Roslin Institute in Edinburgh.

    The outbreak apparently originated on a pig farm in Newcastle, England, probably from infected animal feed imported from Asia. The U.K. strain is identical to one that recently stormed previously disease-free countries such as Japan, South Korea, and South Africa. “It seems to be a very virulent and successful strain” in all susceptible species, says Paul Kitching of the Institute for Animal Health (IAH) in Pirbright, U.K., who heads the world's largest FMD research group.

    The viruses that cause FMD (members of a diverse family of small RNA viruses called picornaviruses) are not all that deadly; they can cause fatal cardiac arrest in young animals, but most adult animals recover. But animals produce less meat and milk after an infection, so the only economically sensible option is to cull infected herds. And that has to happen fast, because FMD is extraordinarily infectious: Inhaling fewer than 10 viral particles can infect an animal, and the wind can carry virus from one blighted farm to another, even dozens of kilometers away.

    A vaccine was first developed in the 1960s, and the IAH stocks enough to vaccinate 500,000 animals in an emergency. The vaccine, which consists of a virus that has been killed with chemicals or ultraviolet light, offers a good degree of protection, says Martin Hugh-Jones, a veterinary epidemiologist at Louisiana State University in Baton Rouge. For instance, it has enabled South America to all but eradicate the disease from the continent.

    But the vaccine has been known to cause occasional outbreaks, presumably because the procedure used to kill the virus is imperfect. So although it's an important weapon in endemic areas, the vaccine is risky in countries that are currently disease free. Vaccinated animals can also be carriers of the virus—although they show no symptoms—and spread it to other, unvaccinated animals. And finally, once vaccinations are used, it is much harder for a country to show that it's disease free; the virus could be lurking in a small number of animals. “Better to keep them all susceptible,” says Hugh-Jones, “and shoot your way out when an outbreak occurs”—as Britain is doing now.

    Several research teams have tried to produce a vaccine that doesn't have these drawbacks. In the past, attempts to develop a vaccine based on foot-and-mouth virus peptides failed to offer adequate protection, as did a live, weakened virus. Researchers at the U.S. Department of Agriculture's (USDA's) lab in Plum Island, New York, have now set their hopes on a crippled adenovirus that has been equipped with two extra proteins from the foot-and-mouth virus. The vaccine is safe and protects pigs well, says USDA virologist Marvin Grubman; the first experiments in cattle are “encouraging,” too, he says. But Grubman says it will be years before the vaccine hits the market. Until then, aggressive monitoring and slaughter is the control method of choice for disease-free countries.

  3. 2002 BUDGET

    NIH Gets Big Boost; Lobbyists Want More

    1. David Malakoff

    Sometimes good just isn't good enough. President George W. Bush said last week that he will request a record $2.8 billion increase for the National Institutes of Health (NIH) in his 2002 budget proposal. But some biomedical science groups say that the figure—a 13.8% boost, to $23.1 billion—is only a starting point for their campaign to win a $3.4 billion increase.

    “We will work in a bipartisan fashion with our congressional champions … to increase the agency's budget,” vowed Mary J. C. Hendrix, president of the Federation of American Societies for Experimental Biology. The 60,000-member group has helped lead an effort, begun in 1998, to double NIH's budget to $27.3 billion by 2003.
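
    The arithmetic behind these figures is easy to verify. Here is a minimal sketch in Python using only the dollar amounts quoted above; the 1998 baseline is inferred from the doubling target rather than stated in the article:

    ```python
    # Back-of-the-envelope check of the budget figures quoted above.
    proposed_2002 = 23.1   # billions of dollars: Bush's 2002 request for NIH
    increase = 2.8         # billions: the requested record boost

    base_2001 = proposed_2002 - increase
    print(f"Implied 2001 budget: ${base_2001:.1f} billion")   # $20.3 billion
    print(f"Boost: {100 * increase / base_2001:.1f}%")        # 13.8%

    # The doubling campaign begun in 1998 targets $27.3 billion by 2003,
    # implying a starting budget of about half that figure.
    print(f"Implied 1998 baseline: ${27.3 / 2:.2f} billion")  # $13.65 billion
    ```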

    Maintaining a long Washington tradition of previewing the good news—and keeping silent about the bad—in upcoming White House budget proposals, Bush briefly mentioned his plans for NIH during a photo opportunity on 23 February, 4 days before outlining to Congress and the nation his spending proposal for the 2002 fiscal year, which begins 1 October. (The budget was unveiled after this issue of Science went to press; the details will be reported in next week's issue.) “We recognize the federal government plays a very important role in researching cures for disease,” Bush said in recommending the largest increase in NIH's history.

    But Bush was mum on the subject that has much of the science community talking: the pain his proposal is expected to inflict on nonbiomedical science budgets (Science, 23 February, p. 1463). He was expected to request only a 1.3% increase for the National Science Foundation (NSF), whose budget now stands at $4.4 billion. Scientists are also bracing for grim news for science programs at NASA, the Department of Energy (DOE), the U.S. Geological Survey, and the Environmental Protection Agency.

    Whether Congress will follow Bush's blueprint, however, is unclear. Congress traditionally increases the president's request for NIH, and already, Senators Arlen Specter (R-PA) and Tom Harkin (D-IA) have introduced legislation calling on the Senate to back a $3.4 billion increase. Dozens of House and Senate lawmakers have also signed an array of letters to Bush and congressional leaders asking for major science budget increases at NSF, DOE, and NASA.

    The first real test, however, will come this spring, when congressional budget committees issue road maps to spending panels overseeing specific agencies. Researchers, says one House aide, “are going to know pretty early just how far they'll have to push the rock up the hill.”

  4. RESEARCH ETHICS

    Query by Congress Halts New Policy

    1. Jocelyn Kaiser

    A complaint from a powerful member of Congress has at least temporarily scuppered a new federal requirement that institutions teach their biomedical researchers how to act responsibly. The Public Health Service, which issued the ethics education policy on 1 December, has put the requirement on hold while the Office of Research Integrity (ORI) reviews concerns voiced by the House Commerce Committee, which oversees the National Institutes of Health. The delay, part of a broader examination of actions taken by the outgoing Clinton Administration, marks the debut on research issues of the panel's new chair, Representative Billy Tauzin (R-LA), who is expected to be much more active than his predecessor.

    The rules were the government's response to a growing consensus in the biomedical research community that prevention, through education, is the best way to reduce scientific misconduct. Accordingly, the new policy required institutions to develop a “basic program of instruction” on responsible research conduct covering topics such as data sharing, record keeping, and animal care. All staff members were supposed to have completed their training by 1 October 2003 or their institutions could lose federal funding. The training shouldn't take more than a few hours, estimates ORI, which is developing a 3-hour Web-based course as one option for schools.

    Although biomedical and university advocacy groups support the idea, they have complained that the rules would be expensive to implement and cover too many people. The 1 December version contained a few changes from an earlier draft, giving institutions more time and allowing them to decide who should take the course. But “the most objectionable” sections were still there, says Howard Garrison, a spokesperson for the Federation of American Societies for Experimental Biology (FASEB).

    Those complaints led the Commerce Committee to include the rules in a review of the Clinton Administration's last-minute regulations. A 5 February letter from Tauzin and James Greenwood (R-PA), chair designate of the oversight subcommittee, says that, although the committee “strongly support[s]” the ORI policy's intention, “we are troubled by ORI's process in implementing such efforts.” The policy should have been issued as a formal rule, the letter explains, after steps such as a review by the White House, cost analysis, and publication of the entire text rather than simply a notice in the Federal Register. “There are procedures that have to be followed,” says a committee staffer.

    ORI doesn't believe the policy is equivalent to a formal rule, ORI Director Chris Pascal explained in a 14 February reply to Tauzin, because it gives institutions “considerable leeway” in how to implement it. ORI also notes that it reviewed more than 100 comments and met with FASEB and other organizations before issuing its final policy. But Pascal says that ORI has stopped the clock to review “both the substance of the policy and the process.” A committee staffer says that suspending the rule “is appropriate” and that the panel has not yet decided on its next step.

  5. PLANETARY SCIENCE

    Cosmic Misfits Elude Star-Formation Theories

    1. Dennis Normile

    TOKYO—Astronomers have become increasingly perplexed over the last few years by a strange new class of celestial body. Too small to fit conventional definitions of brown dwarfs, they nonetheless move through star-forming regions in a manner that separates them from planets orbiting a star. Once seen as anomalies, their growing numbers are forcing astronomers to sit up and take notice (Science, 6 October 2000, p. 26). On 14 February, a Japanese team raised the stakes by reporting its discovery of more than 100 of these objects in a star-forming region known as S106. “This poses a big challenge for the standard picture of star formation,” says Shu-ichiro Inutsuka, a theorist at Kyoto University.

    Mother lode.

    More than 100 planetlike objects have been found in a star-forming region called S106.

    CREDIT: SUBARU TELESCOPE/NATIONAL ASTRONOMICAL OBSERVATORY OF JAPAN

    Yumiko Oasa of the University of Tokyo and colleagues there and at the National Astronomical Observatory of Japan spotted the band of cosmic misfits while using NAOJ's Subaru Telescope on Mauna Kea, Hawaii. They were observing infrared emissions from a region approximately 2000 light-years from Earth in the constellation Cygnus. In addition to hundreds of brown dwarfs, the team spotted more than 100 fainter free-floating objects. Plugging data on luminosity and estimated age into models of how very low-mass stars evolve, the team estimated the objects' masses at 5 to 10 times that of the planet Jupiter. An analysis of their infrared emissions placed the objects within the region.

    “Our discovery sheds new light on the ubiquity of isolated planetary-mass objects,” says Oasa about her work, the basis for a Ph.D. thesis approved last month. A brief report of the discovery and photos have been posted on the NAOJ Web site (http://www.nao.ac.jp/).

    Joan Najita, an astronomer at the U.S. National Optical Astronomy Observatories in Tucson, Arizona, cautions that more work is needed. In particular, spectroscopic analysis of the objects' emissions would determine their temperature, which could be used to confirm their mass. But Najita says the essential message is believable. “I think these kinds of results show that the process that makes stars can also make things that are substellar,” she says.

    The objects don't neatly fit any conventional definitions. Brown dwarfs are usually defined as bodies lighter than about 75 Jupiter masses, the minimum needed to ignite the hydrogen fusion that powers stars, but heavier than 13 Jupiter masses, the threshold for fusing deuterium and producing a faint glow. Because they fall below this lower limit, the new objects are hard to account for. Most astrophysicists believe brown dwarfs and stars condense directly out of vast seas of tenuous gas known as molecular clouds, whereas planets form in disks of matter swirling around nascent stars. Small lone bodies, however, don't mesh well with either scenario.
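
    The cutoffs in the preceding paragraph amount to a simple classification rule. The following sketch (in Python) applies the 13- and 75-Jupiter-mass limits quoted above; the sample masses are illustrative:

    ```python
    def classify(mass_mj):
        """Classify a body by mass (in Jupiter masses, M_J) using the
        conventional limits quoted above: deuterium fusion begins near
        13 M_J, hydrogen fusion near 75 M_J."""
        if mass_mj < 13:
            return "planetary-mass object: too light even to fuse deuterium"
        elif mass_mj < 75:
            return "brown dwarf: fuses deuterium, glows faintly"
        else:
            return "star: ignites hydrogen"

    # The S106 objects weigh in at 5 to 10 M_J, below the brown dwarf range.
    for mass in (5, 10, 40, 80):
        print(mass, "M_J ->", classify(mass))
    ```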

    Two theories about the origins of planetary objects shed light on the elusive creations but fall short of supplying a complete answer. One proposes that they are ejected from young stellar systems, the other that they form from molecular cloud cores with masses too low to give birth to stars. But Inutsuka says neither idea can account for the large numbers of smaller objects spotted in S106. “I think [Oasa's report] will prove extremely important for pushing the modification of currently accepted theories of star formation,” he says.

    Motohide Tamura, an astronomer at NAOJ and Oasa's thesis adviser, says that scientists need to spend more time observing these phenomena. “So far, only a very limited number of [star-forming] regions have been observed,” he says, too few to conclude just how common the objects are. With the teams planning to use Subaru to investigate other regions, the number of free-floating objects seems certain to grow.

  6. CHINA

    Two Honored, Other Prizes Go Unclaimed

    1. Ding Yimin*
    1. Ding Yimin writes for China Features in Beijing.

    BEIJING—China's newest—and by far richest—prize for lifetime scientific achievement was awarded last week to a mathematician and an agronomist. But the gala state celebration on 19 February was dampened by evidence of how far the country's research community still must go to compete globally: First place in two other major categories of scientific achievement went unclaimed after officials decided that no researchers were worthy of the honor.

    The winners of the new State Supreme Science and Technology Award, which comes with a 5 million yuan ($600,000) prize, are Wu Wenjun and Yuan Longping. Wu, 82, is a topologist who developed a computer algorithm for solving systems of polynomial equations, a task equivalent to proving geometric theorems. It is useful in pattern recognition and other computing tasks. Newspaper reports say that he also may have been the first Chinese scientist to own a personal computer.
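
    Wu's approach belongs to a family of algebraic methods in which a geometric statement is proved by reducing its conclusion polynomial to zero modulo the hypothesis polynomials. Wu's own characteristic-set algorithm isn't part of standard libraries, but the same idea can be sketched with a Gröbner basis, a related elimination technique, using Python's sympy; the example (Thales' theorem) is ours, not from the article:

    ```python
    # Algebraic theorem proving in the spirit of Wu's method, sketched
    # with a Groebner basis via sympy (Wu's algorithm itself uses
    # characteristic sets; the reduce-to-zero idea is the same).
    from sympy import symbols, groebner, expand

    x, y = symbols('x y')

    # Thales' theorem: a point C = (x, y) on the circle through
    # A = (-1, 0) and B = (1, 0), centered at the origin, sees the
    # diameter AB at a right angle.
    hypotheses = [x**2 + y**2 - 1]               # C lies on the unit circle
    conclusion = expand((x + 1)*(x - 1) + y*y)   # CA . CB = 0 (perpendicular)

    G = groebner(hypotheses, x, y, order='lex')
    _, remainder = G.reduce(conclusion)
    print(remainder)  # 0: the conclusion follows from the hypothesis
    ```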

    Yuan, 72, is considered the father of hybrid-rice technology in China and is credited with helping China achieve a threefold boost in rice production over the past 4 decades. He has also amassed a personal fortune by lending his name to a high-tech seed company formed last year, in exchange for equity in the new company.

    The awards, conferred by Chinese President Jiang Zemin, were created to highlight outstanding achievement and demonstrate the importance of science in the nation's economic development. Some 90% of the prize money will be plowed back into research at their home institutions—in Wu's case, the Chinese Academy of Sciences' Institute of Mathematics and System Science in Beijing; for Yuan, the Hunan Academy of Agricultural Sciences. The remainder is for their personal use, or as Wu told reporters: “I think that is my own business.”

    Wu and Yuan were chosen from among 14 finalists to receive what is expected to be an annual prize. But the central government declined for the third straight year to pick a first-place winner in two other categories—natural sciences and technological innovation—because none of the nominees met the criteria for having achieved “at the world level.”

    Members of the selection committee said their decision reflects the fact that China's basic research enterprise still trails the rest of the world and that most projects lack the creative spark needed to achieve fundamental advances in science. Greater investment in large, cooperative basic research projects would help close the gap, says an official with the science and technology ministry.

    The top prize for international collaboration went to U.S. physicist Wolfgang Panofsky, former director of the Stanford Linear Accelerator Center in California, and Indian plant geneticist Gurdev Khush of the International Rice Research Institute in the Philippines. Hundreds of Chinese scientists and technicians received awards in one of the five categories, which include scientific and technological advancement.

  7. INDIA

    Work Starts on First Science Satellite

    1. Pallava Bagla

    NEW DELHI—Indian astronomers have begun to design the country's first satellite dedicated to basic space science after receiving the green light last month from the Indian government. If successful, the payload will be launched in the second half of the decade on a domestically built rocket.

    Looking up.

    India hopes to launch its first basic science satellite on this domestic rocket.

    CREDIT: P. BAGLA

    The project, dubbed Astrosat, aims to orbit four instruments to make broadband observations and surveys in the x-ray and ultraviolet (UV) regions of the spectrum. It would be funded by the Indian Space Research Organization (ISRO), which will oversee work by scientists at its satellite center, the Indian Institute of Astrophysics in Bangalore, and the Tata Institute of Fundamental Research (TIFR) in Mumbai. No price tag has been put on the mission. “We have to develop the prototype instruments in this period and show that we can indeed successfully make them in India,” says Prahlad Chandra Agrawal, an astrophysicist at TIFR.

    The instruments include soft x-ray and UV imaging telescopes as well as a large-area xenon-filled proportional counter and a cadmium-zinc-telluride array for long-duration studies over a broad range of spectral bands. The proposed payload is an order of magnitude more complex than one Agrawal's team built for an Indian satellite launched in 1996 to study x-ray sources within binary stars, and scientists say the large-field images should shed light on star-formation rates at low redshift. However, it falls well short of the high-resolution imaging capabilities of the current generation of orbiting x-ray facilities, including NASA's Chandra X-ray Observatory and the European Space Agency's XMM-Newton.

    “It's not something that we or the Japanese would be interested in doing at this point,” says Peter Serlemitsos of NASA's Goddard Space Flight Center in Greenbelt, Maryland, which in the 1980s developed the foil mirror that the Indians hope to deploy on one of the x-ray instruments. “But if you're going to start a program, this isn't a bad way to do it. It should let them get their foot in the door.”

    Indian scientists are confident that they can make the mirrors and related optical devices. But they plan to seek outside help in developing other portions of the payload, in particular the photon-counting detector for the UV telescope. ISRO officials say that they hope to have designs completed in 18 months and to launch the satellite in “about 5 years” on ISRO's existing polar satellite launch vehicle. The project has won the support of Indian Prime Minister Atal Behari Vajpayee, who in December touted his government's intention to build “a multiwavelength observatory to conduct front-ranking research in astronomy.”

  8. MICROBIOLOGY

    Possible New Route to Polyketide Synthesis

    1. Dan Ferber

    For researchers prospecting for new drugs, one class of natural compounds—the polyketides—has long been the mother lode. These drugs, including such therapeutic mainstays as the antibiotic erythromycin, the immunosuppressive drug FK506, and the cholesterol-lowering drug lovastatin, have combined sales exceeding $10 billion per year. Now, researchers may have hit another rich vein: an improved method of synthesizing and engineering polyketides.

    The compounds are difficult to synthesize, forcing drug companies to rely on production by their natural sources—unusual soil bacteria and fungi. Some of these microbes can be cultured readily, but many others are slow-growing and finicky, which makes them difficult to grow in the huge vats needed for industrial production. They're also tricky to alter genetically, hampering efforts to tweak the polyketide-synthesizing enzymes so that they make new variants. But on page 1790 of this issue, chemical engineer Chaitan Khosla of Stanford University and his colleagues report that they've engineered the common lab bacterium Escherichia coli to pump out a polyketide at rates potentially useful for industrial drug production.

    Because E. coli is both easy to grow and highly amenable to genetic manipulation, the results offer a possible way of producing polyketides from exotic microbes in a much more tractable host. They also offer an opportunity to engineer new versions. “I think it's a real breakthrough,” says bioorganic chemist Heinz Floss of the University of Washington, Seattle.

    To pull it off, Khosla and his colleagues, Stanford graduate student Blaine Pfeifer, David Cane of Brown University in Providence, Rhode Island, and two others, had to overcome a series of formidable hurdles—adding the machinery for two new metabolic pathways and crippling another in E. coli.

    The first problem was getting E. coli to make the enzyme that synthesizes the researchers' target polyketide, which forms the core of erythromycin. In nature, the bacterium that produces this polyketide, a soil microbe called Saccharopolyspora erythraea, relies on an unusual enzyme. This enzyme, polyketide synthase, sequentially joins a series of small building blocks to form the eventual product, which is a circular molecule. The enzyme itself consists of three very large proteins. As a result, the researchers had to introduce three S. erythraea genes into E. coli and fine-tune growth conditions just to make the enzyme.

    The next challenge was getting the polyketide synthase to work in E. coli. These enzymes behave much like an assembly line, passing a growing polyketide chain from one active site to the next to add the next building block. The enzyme uses a cofactor compound called phosphopantetheine to carry out this transfer. But on its own, E. coli couldn't add the phosphopantetheine to polyketide synthase. To coax it to do so, the researchers added a gene from the soil bacterium Bacillus subtilis encoding an enzyme that attaches the cofactor.
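
    The assembly-line picture can be made concrete with a toy sketch (Python). The function below mimics only the hand-off logic described above; the stoichiometry in the example, one propionyl-CoA starter plus six methylmalonyl-CoA extenders for the erythromycin core, comes from the published chemistry rather than from this article:

    ```python
    # Toy model of the polyketide synthase assembly line: each module
    # receives the growing chain and appends one building block before
    # passing it to the next active site. No real chemistry is simulated.
    def run_assembly_line(starter, extender_units):
        chain = [starter]
        for unit in extender_units:
            chain.append(unit)   # one extension per module
        return " -> ".join(chain)

    print(run_assembly_line("propionyl-CoA", ["methylmalonyl-CoA"] * 6))
    ```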

    Finally, two modifications were needed to provide E. coli with the building blocks it needed to make the polyketide. To supply one, called propionyl coenzyme A (propionyl-CoA), the Stanford team knocked out key E. coli genes to cripple a metabolic pathway that breaks down that compound. To supply the other, called methylmalonyl-CoA, the Stanford team borrowed a gene from a third soil bacterium. If any one of their tricks had failed, the researchers would have had to start over. “We kept our fingers crossed to the very end,” Khosla says.

    Their efforts paid off, resulting in a bacterial strain that can pump out the polyketide at rates approaching those of industrial S. erythraea strains. In addition, by replacing one component of the S. erythraea polyketide synthase with a portion of an enzyme that makes a different type of drug, the Stanford team generated a hybrid enzyme that makes a polyketide unlike any found in nature. “They've demonstrated the feasibility of a directed approach” to making new polyketides, says microbiologist Joan Bennett of Tulane University in New Orleans, Louisiana, president-elect of the Society for Industrial Microbiology.

    If E. coli can be used as a factory for making either natural or designer polyketides, the work could lead to a big payoff for Khosla and a company he co-founded, Kosan Biosciences of Hayward, California. Kosan owns the patent for the method and has the option of commercializing the discovery under a license agreement with Stanford. “We're going to ask now if we can use E. coli on a very large scale,” says microbiologist Richard Hutchinson, Kosan's vice president of new technology.

    Khosla and his colleagues still have a way to go to get industrial polyketide production by E. coli. They need to coax the microbe to add sugars to the polyketide to generate a complete erythromycin molecule. And if they accomplish that, Floss cautions, there's a chance that the antibiotic will kill the bacteria producing it. Khosla maintains that there are ways around those problems, such as having chemists add the sugars or stitching erythromycin-resistance genes into E. coli. “The important thing for people to realize is that it's not difficult anymore” for researchers to devise and produce new polyketides, Khosla says. If so, then the work may trigger another gold rush of polyketide prospectors.

  9. CELL BIOLOGY

    Nobel Laureates Lobby for Stem Cells

    1. Gretchen Vogel

    Eighty Nobel Prize winners have signed a letter urging President Bush to allow government-funded researchers to work on human pluripotent stem cells. In a letter faxed to the White House on 22 February, they argue that the cells—which have the capacity to develop into any tissue type—could help treat a variety of diseases. The Bush Administration is under pressure from antiabortion groups to block federal funding for work on embryonic stem cells.

    Scientific teams around the world are working on strategies to prompt stem cells to become specific cell types—neurons, muscle, or pancreatic cells—that could treat diseases such as Parkinson's or diabetes. The most controversial research has been conducted on cells derived from aborted fetuses or days-old human embryos; in other cases, stem cells from adults have been used. Opponents of embryonic or fetal tissue research argue that adult stem cells could offer the same benefits without the ethical problems, but the Nobel laureates' letter calls this assertion “premature.” Stem cells from adults may prove very useful, says signatory Paul Berg, a biochemist at Stanford University, but “we can't ignore the potential of embryonic stem cells. … We should be proceeding full-speed along both tracks.”

    Bush has ordered the U.S. Department of Health and Human Services to review existing National Institutes of Health (NIH) policy. Under this policy, which was developed last year, government-funded researchers may not derive human embryonic stem cells but can use them if they are obtained from privately funded scientists who prepared the cells in accordance with a set of ethical guidelines (Science, 1 September 2000, p. 1442). (For example, applicants must certify that the cells were derived from embryos that were created for fertility treatments but were slated to be discarded.) The next deadline for submitting applications for embryonic stem cell research is 15 March, and, barring a change in policy, applications will be reviewed by an ethics board in April.

    The letter was written and circulated by researchers Robert Lanza and Michael West of Advanced Cell Technology, a biotech company in Worcester, Massachusetts, that works on cloning and stem cell research. The duo, with many of the same laureates, signed a similar letter in 1999 urging the U.S. Congress and the Clinton Administration to support plans at NIH to fund work on stem cells (Science, 19 March 1999, p. 1849). That letter was “successful,” says Lanza, who hopes the current one will show the president that “the scientific community is unified in support of this research.”

  10. NEUROBIOLOGY

    Working Memory Helps the Mind Focus

    1. Ingrid Wickelgren

    When traversing city streets, a driver needs to focus on important information—say, a red light or a car veering into the lane—rather than irrelevant images such as the type of tree lining the road. New work now provides a better understanding of just how the brain achieves such feats of selective attention—information that may have both public health and medical implications.

    In experiments described on page 1803, cognitive psychologist Nilli Lavie of University College London and her colleagues have pinpointed a surprising new influence on a person's ability to focus: working memory, which is where the brain temporarily stores information used in reasoning and planning. In both behavioral and brain-imaging studies, the researchers have demonstrated that when a person's working memory is occupied, his or her brain cannot filter out distracting sights in a separate attention task.

    Distracted.

    The greater stimulation of the visual areas at the back of the brain shows that the brain is more distracted by an image, such as that of former U.S. President William Clinton, when working memory is full (right image) than when it is less occupied.

    CREDIT: DE FOCKERT ET AL.

    Researchers had suspected that parts of the brain involved in conscious planning, such as working memory, might play a role in selective attention, but they did not know how. For the first time, Lavie's work provides “direct evidence that the working memory system is modulating attention and affecting processing within the brain's object recognition system,” says neuroscientist Robert Desimone of the U.S. National Institute of Mental Health in Bethesda, Maryland.

    The new findings could also have implications for the debate over cellular telephone use in cars. So far, safety measures have largely centered on getting drivers to use telephone headsets or speakerphones. But the new study suggests that the availability of one's hands may be only a small part of the solution. If a phone conversation requires any thought, it will tax working memory and may therefore cause a driver to be more distracted by irrelevant sights on the road.

    When Lavie began this research in the late 1990s, she was trying to identify the environmental factors that might influence a person's ability to screen out visual distractions. She had discovered, for instance, that screening out distractions was easier when the relevant scene was busy than when it contained only a few objects, because a harder task absorbs more of a person's attention. But because a person's ability to concentrate seems to vary even when the scene stays constant, Lavie knew that the scene's complexity couldn't be the whole story.

    Indirect evidence suggested that working memory might also play a role. In monkeys, for example, neurons in the prefrontal cortex, where working memory resides, seem to fire only in response to visual stimuli relevant to a given task. In addition, anatomical links between the prefrontal cortex and visual regions at the rear of the brain could mediate the hypothesized interaction between working memory and brain areas known to be involved in detecting objects.

    To test the idea more directly, Lavie and postdoc Jan de Fockert first devised a task that required selective attention. They flashed the names of pop stars and politicians in front of 10 volunteers, asking them to choose the profession of each person named. As each name was flashed on the screen, the researchers also showed the volunteers a picture of a face that might or might not match the name. This forced them to try to ignore the face and focus on the text. As expected from previous work, the volunteers took significantly longer to answer when the face didn't match the name than when it did, a measure of the influence of the distracting face.

    To investigate working memory's involvement, the volunteers also had to memorize a string of five digits, which they were asked to recall right after the attention task. When the number string was easy, such as 0 1 2 3 4, volunteers could remember it without taxing their working memories; as a result, they could classify the pop stars and politicians as quickly as they could in the absence of the memory task. However, when the digits were more random—say, 0 3 4 2 1—the volunteers had to continually rehearse them in their minds, putting a heavy load on their working memories. In this situation, they took much longer to determine whether a name belonged to a pop star or a politician in the presence of a nonmatching face.
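
    The load manipulation is easy to state as code. In this sketch (Python), a strictly ascending run counts as "low load"; that criterion is our illustrative reconstruction of the design described above, not the authors' exact rule:

    ```python
    def memory_load(digits):
        """Classify a digit string the way the experiment above does:
        an ascending run (0 1 2 3 4) barely taxes working memory,
        while a scrambled order (0 3 4 2 1) must be rehearsed."""
        ascending = all(b == a + 1 for a, b in zip(digits, digits[1:]))
        return "low load" if ascending else "high load"

    print(memory_load([0, 1, 2, 3, 4]))  # low load
    print(memory_load([0, 3, 4, 2, 1]))  # high load
    ```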

    To see how this played out in the brain, Lavie and de Fockert teamed up with brain imaging experts Christopher Frith at the nearby Institute of Neurology and Geraint Rees, now at the California Institute of Technology in Pasadena. The researchers used functional magnetic resonance imaging to measure brain activity while six new volunteers performed the tasks.

    As expected, the researchers detected more activity in the prefrontal cortex when working memory was heavily taxed than when it was not. They also found that the distracting faces caused greater activation in a posterior brain area devoted to processing faces when working memory was full than when it was not. That is, when the brain was thinking hard, it spent more effort processing irrelevant visual information. “The ability to act upon relevant information and ignore irrelevant distractors depends on the availability of working memory,” Lavie concludes.

    The work not only adds a new slant to attention research but also could suggest new avenues for treating certain brain disorders. Schizophrenia and Parkinson's disease, as well as normal aging, are generally accompanied by both a loss of working memory capacity and a diminished ability to screen out distractions. If working memory exerts some control over visual attention, these symptoms might be due to impaired neural connections between the prefrontal cortex and visual brain regions or perhaps to damage solely within the prefrontal cortex itself. Indeed, Lavie and her colleagues may soon explore the implications of their findings for schizophrenia patients.

  11. BRAZIL

    New Industry Taxes Boost Science Budget

    1. Cassio Leite Vieira*
    1. Cassio Leite Vieira is a science writer in Rio de Janeiro.

    RIO DE JANEIRO—Science funding in Brazil, long hobbled by fluctuating federal support, now has a new and involuntary champion—industry. A new tax covering eight industrial sectors is expected to generate many times the current level of spending for research aimed at strengthening the economy. Although most scientists applaud the additional resources and look forward to closer ties with industry, some are worried that the money may not be well spent, that basic academic research may suffer, and that the mechanism gives the government an excuse to reduce its own contribution to research.

    Legislation passed in the last year or so by the Brazilian parliament imposes taxes on companies; the revenues will be channeled into three types of funds for the support of science and technology. This month, Brazilian officials will draw up ground rules for how to manage the revenues and then form committees that will oversee the process. Each panel will have representatives from government, industry, and universities, as well as other experts in the field, and the money will be spent within each of the designated sectors. The process has been shepherded by Carlos Américo Pacheco, executive secretary of the Brazilian science ministry (MCT), who hopes to create additional funds in health care, biotechnology, agribusiness, and aeronautics.

    The sums already are significant. The petroleum industry, whose CTPetro fund was the first to be created in 1999, last year generated $75 million of the government's overall science budget of $500 million. This year, the telecommunications fund is expected to be an even bigger contributor, as more than half of the projected $850 million federal science budget will come from the industrial sector. A second type of fund, called Green-Yellow in honor of Brazil's national colors, aims to stimulate business-university collaborations and increase local capacity by taxing companies that send money abroad to pay for royalties and technical assistance. A third fund will skim off 20% of the money collected by the sector funds and invest it in research infrastructure.
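
    The flow between the funds can be illustrated with the article's own numbers; whether CTPetro's 2000 revenue was itself subject to the 20% skim isn't stated, so the split below is purely illustrative:

    ```python
    # Sketch of the sector-fund flow described above (figures in
    # millions of dollars; CTPetro's 2000 revenue is from the article).
    sector_revenue = {"CTPetro": 75}
    infrastructure_share = 0.20   # share skimmed off for the third fund

    for fund, revenue in sector_revenue.items():
        to_infra = infrastructure_share * revenue
        print(f"{fund}: {revenue - to_infra:.0f} stays in the sector, "
              f"{to_infra:.0f} goes to infrastructure")
    ```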

    “This is an exceptionally positive move,” says Carlos Henrique de Brito Cruz, a physicist at the State University of Campinas and president of the Foundation for the Support of Research of the State of São Paulo, Brazil's richest state for science. “The sector funds are especially important as a new and stable source of funding.” Adds Reinaldo Guimarães, a professor of social medicine at the State University of Rio de Janeiro, “This is the first time that the [current] government has adopted a scientific measure that is both original and important.”

    Government research funds are currently disbursed by the National Research Council (CNPq)—now called the National Council for Scientific and Technological Development—which will have a seat on the sector fund committees. Pacheco says that the council's 50-year history gives the country plenty of expertise in managing research monies. “There is no doubt that we are on a learning curve,” he says, “but I don't believe that there will be waste nor risks to the system.” But others are less sanguine. “There has been no planning, and management of the program is complex and confusing,” says Guimarães, a consultant to the CNPq. “Planning is necessary if we want to prevent waste.”

    Scientific societies are also troubled by a provision that reserves about 60% of the money for applied research. “There is no money specifically earmarked for basic research,” notes Dora Fix Ventura, a psychology professor at the University of São Paulo and president of the Federation of Associations of Experimental Biology. The Brazilian Society for the Advancement of Science voices a similar sentiment. “At all costs, we must preserve the capacity for research at public universities, which are the backbone of science in Brazil,” says Glaci Zancan, the society's president.

    Pacheco says that won't be a problem. “We must support the entire network of scientific knowledge, whether at universities or companies, whether basic or applied research,” he says. The sector committees have also fostered greater interaction between university and industry scientists, something that scientists say has been sorely lacking. “This has been a process of mutual education,” says Luiz Bevilacqua, a mechanical engineer at the National Laboratory for Scientific Computation and a member of the committee that governs CTPetro.

    Despite the increased flow of money, scientists remain cautious about the long-term impact of the new arrangement. The government's decision to withhold $70 million from this year's science budget as a contingency against an economic downturn raises questions about the ultimate fate of the sector funds. In addition, some scientists fear that the government may at some point trim its contribution to research because of the growing share coming from industry taxes.

    Pacheco insists that the government wouldn't do such a thing. “In fact, the opposite is happening,” Pacheco says, citing a $75 million hike in this year's science budget “above and beyond the money received from the sector funds.” But Guimarães remains wary. “Constant vigilance is needed,” he says.

  12. VIROLOGY

    AIDS Vaccines Show Promise After Years of Frustration

    1. Jon Cohen*
    1. Jon Cohen is the author of Shots in the Dark: The Wayward Search for an AIDS Vaccine.

    The most surprising message from a major international AIDS meeting last month is that vaccine research is heating up, although many obstacles remain

    CHICAGO, ILLINOIS—When 3000 researchers gathered here last month for the largest annual AIDS conference* in the United States, most of the news was depressingly familiar: scant progress on new drugs and soaring infection rates in many parts of the world. But one big surprise at the meeting made few headlines: AIDS vaccine research is hot again, for the first time in years.

    In session after session, AIDS researchers reported results from novel vaccine experiments that have worked to various degrees in monkeys. They range from clever new ways to make potent antibodies aimed at preventing HIV from infecting cells to new strategies to crank up the so-called cell-mediated arm of the immune system, which clears cells that HIV manages to infect.

    As AIDS researchers are painfully aware, results from animal experiments don't necessarily translate to humans. But this time around, a new element is lifting expectations: a flood of new AIDS vaccine projects sponsored by governments, nonprofits, and even the most elusive player to date, industry.

    The U.S. National Institutes of Health (NIH) has revamped its AIDS vaccine program in the past few years and is aggressively helping move products into clinical trials. The International AIDS Vaccine Initiative (IAVI)—a New York City-based nonprofit that recently received a $100 million shot in the arm from the Bill and Melinda Gates Foundation (Science, 2 February, p. 809)—plans to announce in the next few weeks new large-scale trials for India and China.

    Robert Gallo, director of the Institute of Human Virology in Baltimore, Maryland, is joining an AIDS vaccine effort called the Waterford Project, which will link his institution to leading researchers at Harvard University and the University of California, San Francisco (see sidebar). And the European Commission this week decided to enlarge a similar project, EuroVac, which links various AIDS research groups around that continent.

    On the industrial front, although almost every large pharmaceutical company has avoided the field, Merck & Co., headquartered in Whitehouse Station, New Jersey, has quietly built a major AIDS vaccine program and plans next month to unveil a new strategy based on extensive in-house monkey studies. Moreover, Science has learned that Merck now owns a large colony of monkeys, which are in short supply (Science, 11 February 2000, p. 959), that it is using for AIDS vaccine studies.

    All this has heartened investigators from the bench on up. “People are taking the bull by the horns now. … These approaches will bring the field much further forward in a year than occurred over many years in the past,” says Anthony Fauci, head of the NIH's National Institute of Allergy and Infectious Diseases (NIAID), the single largest funder of AIDS vaccine research in the world. David Baltimore, head of NIH's AIDS Vaccine Research Committee, says he, too, is excited: “Clearly, the field is energized.”

    Antibody beautiful

    In the 17 years since Gallo's lab proved that HIV causes AIDS, vaccine researchers have been thwarted by the virus's ability to dodge almost any antibody thrown at it. The quest has been so frustrating that many groups have shifted their focus from antibodies to cellular immunity. But antibodies regained center stage at the opening session of the Chicago meeting, when Ronald Desrosiers of the New England Regional Primate Research Center in Southborough, Massachusetts, presented promising new data.

    Desrosiers has long prodded the field with provocative experiments that involve deleting genes from SIV, a simian cousin of HIV. These crippled strains, which mimic traditional attenuated vaccines, have created the strongest protection yet seen in monkey experiments. But many researchers, Desrosiers included, worry that a live HIV vaccine, no matter how attenuated, could theoretically pick up deleted genes or revert to virulence on its own. Desrosiers's latest approach might lessen those fears.

    Desrosiers is focusing on the surface protein of the AIDS virus, gp120. In the first step of the infection process, gp120 binds to CD4 receptors on white blood cells. That realization led to an obvious vaccine strategy: A shot of gp120 should stimulate production of antibodies that would prevent the protein from binding to CD4, “neutralizing” the virus. But gp120 vaccines have failed to raise potent neutralizing antibodies, prompting NIH in 1994 to abandon the two leading vaccines in human trials, both of which contained genetically engineered gp120. (One is now being tested using private funds.)

    Desrosiers and his co-workers reasoned that gp120 may wear what amounts to a bulletproof vest that covers up the regions of the molecule most vulnerable to antibodies. So they began deleting different portions of the gene that codes for SIV gp120 to expose underlying parts of the protein.

    In one experiment, they produced a mutant SIV strain that makes a gp120 with deletions in the molecule's variable regions 1 and 2 (see illustration). When they injected this V1/V2 mutant into four monkeys, the animals all became infected, but their immune systems quickly knocked down the virus (which typically kills monkeys in about 1 year) to extremely low levels. When they removed antibody-producing cells from the monkeys, their SIV levels rapidly skyrocketed, suggesting that antibodies were playing a key role in controlling the infection.

    Next, the researchers injected a lethal strain of SIV into two animals that had carried the V1/V2 mutant for nearly 3 years and remained healthy. Neither monkey became infected with the new virus. “These animals were strongly protected—as well protected as we have seen with any live attenuated strains that we have studied,” Desrosiers said. He is now working with Larry Arthur and Jeff Lifson at the National Cancer Institute to turn the V1/V2 mutant into a traditional whole, killed vaccine, which has obvious safety advantages over a live attenuated one.

    Desrosiers's talk bowled over many researchers. “I was jumping out of my seat,” says Susan Barnett, the principal investigator on an AIDS vaccine project at Chiron, a biotech company in Emeryville, California. Barnett and her co-workers are exploring a similar strategy. Working with Leo Stamatatos of the Aaron Diamond AIDS Research Center (ADARC) in New York City, they have tested a vaccine that contains a V2-deleted gp120 in two monkeys. When injected with virulent SIV, the animals contained the infection, while two control animals did not.

    Mix and match

    Most vaccinemakers, Chiron included, are exploring a double-barreled approach: training the immune system both to make neutralizing antibodies and to launch a strong cell-mediated response. This has led to a raft of creative ideas for priming the immune system with one vaccine and boosting it with another.

    Cell-mediated immunity kicks into action after HIV infects cells. To boost this response, AIDS researchers create what amounts to a mock HIV infection: They stitch HIV genes into a vector—usually a harmless virus or bacterium—that can infect a cell and then direct it to produce HIV proteins. To the immune system, this chimera looks like an HIV-infected cell.

    The first AIDS vaccine tested in humans used vaccinia virus, the smallpox vaccine, as the vector. Several other investigators followed with monkey tests that explored other vectors, using modified vaccinia Ankara, various avian poxviruses, adenovirus, and members of the alphavirus family.

    More recently, researchers have tried ferrying HIV genes directly into cells, using such vectors as circular pieces of bacterial DNA called plasmids. But the first human studies of these so-called naked DNA vaccines did not elicit strong cell-mediated immune responses (Science, 5 December 1997, p. 1711). Now, a new trick may boost their effectiveness.

    Chiron's Barnett reported in Chicago that attaching an HIV DNA vaccine to microparticles of polylactide coglycolide (PLG)—a material used in resorbable surgical sutures—boosted cell-mediated immunity in mice 100-fold with just one shot. Even more impressive, this technique jacked up antibody levels by a factor of 1000. The company hopes to start human trials next year with a PLG-DNA prime and a V2-modified gp120 boost.

    John Rose of Yale University has been working with a weakened form of vesicular stomatitis virus (VSV), which causes disease in farm animals. Many vaccines require booster shots to achieve maximum impact; when repeatedly exposed to the same vector, people can develop immune responses against it, wiping it out before it can deliver its cargo. To get around this problem, Rose has stitched SIV genes into three different strains of VSV, which he delivers to monkeys in succession.

    Working with ADARC's Preston Marx, Rose increased the animals' immune responses with each booster shot. When challenged with SIV, seven of seven vaccinated monkeys contained the infection, while four of eight controls quickly developed AIDS. Wyeth-Lederle now hopes to develop this for human tests.

    Merck is also trying to boost cell-mediated immunity. Although the company did not present any vaccine data at the conference, several researchers who had heard confidential presentations by the company said Merck has combined a DNA-based approach with an adenovirus vector boost. Merck's Emilio Emini, the lead scientist on the project, says the company plans to present its data at an AIDS vaccine meeting in Keystone, Colorado, next month.

    The reappearance of Merck, which all but abandoned its AIDS vaccine program in the early 1990s, has heartened many researchers. “I'm really delighted,” says Peggy Johnston, head of NIAID's AIDS vaccine program. “We need to engage the large players more and more.”

    Clinical concerns

    Big money—if not big pharma—will be needed to move candidate vaccines through clinical trials. To date, three dozen vaccines—most manufactured by small biotechs—have made it into human tests, but only two have advanced beyond small-scale, phase I experiments that gauge safety and immune responses. Only one of those has made it into large-scale efficacy trials to determine whether it actually works. Doubting its effectiveness, NIH declined to fund this genetically engineered gp120 vaccine in 1994. Originally made by Genentech of South San Francisco, California, the vaccine is now being tested by VaxGen—a spin-off company formed solely to stage the efficacy trials—in 5400 people (mostly gay men) in the United States, Canada, and the Netherlands.

    A second trial involving 2500 injecting drug users is under way in Thailand. Donald Francis, VaxGen's president and co-founder, says a safety and monitoring board will take a first look at the larger trial's results in November and could halt it if the panel finds that the vaccine clearly works (defined as at least 30% efficacy), or is worthless.
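
    The 30% bar refers to vaccine efficacy, conventionally defined as the proportional reduction in attack rate among vaccinees relative to controls. A minimal sketch (Python; the case counts are invented for illustration, and the trial's formal interim analysis is of course more elaborate):

    ```python
    def vaccine_efficacy(cases_vax, n_vax, cases_ctl, n_ctl):
        """Efficacy = 1 - (attack rate in vaccinees / attack rate
        in controls), the standard field-trial definition."""
        return 1 - (cases_vax / n_vax) / (cases_ctl / n_ctl)

    # Hypothetical counts chosen to land exactly on the 30% threshold:
    ve = vaccine_efficacy(cases_vax=70, n_vax=2700, cases_ctl=100, n_ctl=2700)
    print(f"Efficacy: {ve:.0%}")  # 30%
    ```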

    Aventis Pasteur, a French-German company, has the second most developed vaccine, which relies on a canarypox vector. It could move into phase III trials next year (see table).

    Big pharma has shied away from vaccines both because of scientific uncertainties and because vaccines do not make anywhere near as much money as drugs. IAVI, which has raised a war chest of $335 million from governments and foundations, hopes to overcome this market failure by linking scientists from poor and wealthy countries with biotechs and helping underwrite clinical trials. It has already moved a DNA vaccine into human trials in Oxford and Kenya and is funding four other projects.

    In the next few weeks, IAVI hopes to win approval from Chinese officials for a collaboration between researchers there and ADARC director David Ho, and, separately, Hans Wolf of Regensburg University in Germany. IAVI is also negotiating with officials in India to start producing a vaccine for trials there.

    NIH is stepping up its support for clinical trials, too, funding companies such as Chiron and Wyeth-Lederle to move vaccines into the clinic and offering contracts to companies to manufacture some vaccines. “The pipeline has grown enormously,” says NIAID's Johnston. “Two and a half years ago, there were two products that NIAID was supporting the development of; now there are close to two dozen.” NIH's new Vaccine Research Center will also soon have its own in-house capability to make vaccines for small clinical trials.

    Finally, the European Commission has linked a far-flung group of researchers into a consortium called EuroVac to compare HIV vaccines that rely on various weakened vaccinia vectors plus gp120 boosts, as well as a vector called Semliki Forest virus (an alphavirus) with a DNA boost. EuroVac, which now has roughly $11 million, hopes to have its first vaccine ready for human trials next year.

    Jaap Goudsmit of the University of Amsterdam, who co-launched EuroVac and also chairs IAVI's scientific advisory committee, recognizes that major problems lie ahead. Still, Goudsmit is encouraged by the number of scientists now reshaping themselves into AIDS vaccine researchers. “There's a lot more energy in the field, and a lot of new people have moved into it,” says Goudsmit. “I'm quite optimistic that something smart will come out of this.”

    * 8th Conference on Retroviruses and Opportunistic Infections, 4–8 February, Chicago, Illinois.

  13. VIROLOGY

    Sublimating Egos for a Common Goal

    1. Jon Cohen

    Robert Gallo has long believed that AIDS vaccine research needs its own Manhattan Project. A few years ago, Gallo, who heads the Institute of Human Virology (IHV) in Baltimore, Maryland, mentioned this idea to one of IHV's board members, John D. Evans, a telecommunications entrepreneur who co-founded the cable TV station C-SPAN.

    Evans thought the idea had some merit, especially if different institutions could collaborate through the latest Internet technology to form a “virtual lab.” So, in July 1999, he invited Gallo and other leading AIDS scientists to Waterford Farm, his home in Middleburg, Virginia.

    After the meeting, Evans wasn't sure the idea would fly. “Science is the most cut-throat business I've ever seen,” he says. “Could we get these people to work together? Could we [get them to] sublimate their egos?” But after months of discussions, a newly configured group met at Evans's farm for 2 days before Thanksgiving last year and agreed to start a unique vaccine effort, dubbed the Waterford Project.

    The Waterford Project.

    Robert Gallo, John Evans, and Warner Greene (l. to r.) work on an allied attack.

    CREDIT: HARVARD UNIVERSITY

    In addition to Gallo and his IHV colleagues, the Waterford Project will include Warner Greene and Tom Coates of the University of California, San Francisco, and Harvard University's Bruce Walker and Max Essex. “There's so much politics and struggle in academia, and it's not going to be easy for anyone to do this alone,” says Gallo. “We need more allies and inputs.”

    With seed money from the John D. Evans Foundation, the Waterford Project hopes to raise at least $140 million over the next 10 years. The project will emphasize research, not development. And it will differ from the National Institutes of Health in the speed with which it can shift focus. “We can turn on a dime,” says Evans.

    To start with, the Waterford Project will pursue R&D on a baroque vaccine that combines the work of several IHV researchers. The core component will be a gp120 molecule linked to a CD4 receptor. Designed by IHV's Anthony Devico, this molecule theoretically exposes parts of gp120 that stimulate production of powerful antibodies (see main text).

    The IHV team, which includes George Lewis and David Hone, will put the gene for this gp120/CD4 construct into a DNA vaccine. The researchers will then pack the DNA into a harmless version of Salmonella typhi, which will act as a Trojan horse and deliver the vaccine to the very cells that orchestrate an immune response.

    Further jazzing up the vaccine, they plan to add an inactivated form of HIV's tat protein as a booster. This unusual component is based on findings that tat up-regulates chemokine receptors used by HIV during the infection process; an immune response against tat, they reason, should make it more difficult for HIV to enter cells.

    This multilayered scheme is precisely the sort of approach that makes companies and granting agencies run for the hills. But tripping up HIV may require just this type of creativity. And the beauty of the Waterford Project is that the principal investigators can move it forward without having to convince a single outsider that it's worth trying.

  14. AAAS MEETING

    Lake Vostok: Stirred, Not Shaken

    1. Robert Irion

    SAN FRANCISCO—The world's biggest hidden lake may not be as quiet as scientists thought. New analysis of the geologic setting of Lake Vostok, a vast crescent of fresh water beneath 4 kilometers of ice in Antarctica, suggests that its waters may churn vigorously enough to feed a suspected ecosystem of microbes. Furthermore, seismic activity in the area means that mineral-rich fluids—and possibly heat—may seep into the lake from below.

    “Every time we go to Lake Vostok, we learn something new,” says Karl Erb, director of the National Science Foundation's (NSF's) Office of Polar Programs. The latest work, he says, adds impetus to efforts to penetrate the lake's thick mantle of ice, which has shielded its waters from the surface for millions of years.

    Lake Vostok, one of at least 76 lakes trapped under Antarctica's ice pack, is the size of Lake Ontario. Russian seismic studies show that it plunges up to 1000 meters deep within a rugged valley. Last year, glaciologist Martin Siegert of the University of Bristol, U.K., and his colleagues used three airborne radar surveys to analyze the profiles of ice layers above the lake. The slopes of the layers hinted that 10 centimeters of ice melts each year at the northern and western margins of the lake, where the melting temperature is lower because of pressure from the thicker overlying ice. Water then refreezes onto the base of the ice at the lake's other end. This slow circulation, Siegert's team deduced, would replace the lake's volume every 50,000 to 100,000 years.

    Fresh fieldwork now paints a more detailed picture. A team led by geologist Robin Bell of Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York, spent 33 days at Vostok in December and January. The scientists crisscrossed the lake more than 60 times with an NSF Twin Otter aircraft packed with radar equipment. They saw dramatic variations in the thicknesses of ice layers above the water, caused by melting and refreezing during the 15,000 to 20,000 years it takes the ice to traverse the lake.

    The team hasn't yet analyzed its data thoroughly, but Bell thinks the melting-refreezing cycle is about three times faster than what Siegert estimated. The overlying ice sheet also buckles and deforms as it drifts over the lake and nearby mountains. That probably spawns local patches of faster water circulation, Bell notes.

    The researchers also put two seismometers on the ice at the start of their field season. They caught a magnitude 3 earthquake in the area on 5 January, Bell reported. Three other quakes have struck the region in the last century, with likely magnitudes between 4 and 5, according to historical records. The motions make sense, Bell says, because radar images of steep topography along one side of the lake suggest that Vostok sits at a restless geologic boundary: “It's not a piece of quiet old crust. The earth is actively moving.”

    Bell doubts the crustal jigglings are enough to sustain geothermal vents under Lake Vostok. Still, even a seepage of cold fluids into the lake could yield energy for the cold and isolated ecosystem that Vostok may harbor, says ecologist John Priscu of Montana State University in Bozeman. Faster circulation would make those mineral nutrients waft throughout the lake.

    As for the biological seeds, Priscu notes that carbon-rich sediments and bacteria blown onto Antarctica should take about 450,000 years to migrate down through the ice and enter the water. Indeed, the deepest sections of Russian ice cores, drilled to within 120 meters of the lake and consisting of refrozen Vostok water, are riddled with bacterial cells, Priscu and others have claimed (Science, 10 December 1999, pp. 2138, 2141, 2144).

    Based on his studies of the ecology of other antarctic lakes, Priscu suspects that Vostok's waters host a million bacterial cells per milliliter. “It's an extrapolation, but there should be a thriving community,” he says. However, Erb thinks it will take a decade for scientists to develop the sterile drilling tools needed to sample the lake without contaminating it. Part of the delay, he explains, is that Vostok belongs to no one. “It's an international treasure,” Erb says. “We cannot act unilaterally.”

  15. AAAS MEETING

    Stem Cells Make Brain Cells

    1. Evelyn Strauss

    SAN FRANCISCO—As the future of stem cell research teeters on the unsteady ground of political controversy, the case for its potential benefit grows more solid. At the meeting, neurotransplant researcher Ole Isacson of Harvard Medical School in Boston reported that embryonic stem cells implanted in the brains of rats and mice grow into the types of cells that wither in Parkinson's disease.

    “This is interesting, because it implies that all the instructive mechanisms [for these cells' maturation] are present in the adult brain,” says neuroscientist Ron McKay of the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland. The results raise hopes that researchers will learn to fix the nerve damage that causes Parkinson's without transplanting fetal nerve cells—a technique beset by practical and ethical concerns.

    Parkinson's disease, which afflicts 1 million people in the United States, kills a class of brain cells that produces dopamine, one of the brain's chemical messengers. Drug therapy helps for a while, but it provokes side effects and ultimately can't keep up with the progressive disease.

    Testing an alternative, clinicians have implanted fetal neurons into the brains of dozens of Parkinson's patients. In many cases, this procedure has produced long-lived, dopamine-producing cells and reversed symptoms of the disease, such as rigidity, slowness, and tremor. But fetal tissue is scarce.

    Stem cells—unspecialized cells that have not yet committed to a particular fate—may help overcome that problem. Each fetal transplant requires material from several fetuses, whereas the types of cells Isacson used can be mass-produced in a test tube from a single embryo.

    Isacson and colleagues found that stem cells can compensate for some Parkinson's-like damage in animals. The researchers implanted cells originally taken from mouse embryos into the brains of rats and mice whose dopamine-producing neurons had been obliterated by a toxin. Inside the part of the brain that Parkinson's disease targets, the immature cells developed into neurons that made dopamine-producing enzymes. They also connected with nearby brain cells, just as transplanted fetal nerve cells do, Isacson says.

    “We hope that we can do with stem cells what we've done with fetal dopamine-producing cells,” he says. “All these things together imply that these cells will be able to do the job of reversing symptoms.”

    To see whether that is so, Isacson has undertaken studies of brain activity that probe possible symptomatic benefits to the animals. Although he's not yet ready to present the experiments, “they show some very, very encouraging results,” he says.

    If these results pan out and extend to humans, they could uncork a serious bottleneck to Parkinson's treatment. Embryonic stem cells can be gathered from unused embryos, which fertility clinics discard by the thousands each year. McKay and other researchers have figured out how to grow dopamine-producing cells from mouse embryonic stem cells in culture dishes, but with Isacson's method, “you just take cells and put them in the brain,” McKay says. “You don't need to purify them, you just shove them in.”

    If organs other than the brain show a similar knack for programming cells, therapies based on simple stem cell transplantation might provide cures for cardiovascular disease, diabetes, and multiple sclerosis, McKay says: “Parkinson's disease may stop being the poster child for [the stem cell] field.”

    But serious political obstacles may lie ahead, researchers said, as antiabortion groups pressure President Bush to ban federal funding for research on stem cells derived from embryos. However, one commentator at the meeting—Jeffrey Martin, a private-sector Republican lawyer with Parkinson's who advises the National Institutes of Health (NIH) about the disease—was optimistic that the president is keeping an “open mind.”

    Instead of issuing a ban immediately after his inauguration, Martin pointed out, Bush has ordered Secretary of Health and Human Services Tommy Thompson to review the NIH guidelines affecting such research. Thompson has commended stem cell research in the past, Martin says. “He's antiabortion, but he sees the distinction. Because he gets it, I think that's a hopeful sign.”

  16. AAAS MEETING

    The Melting Snows of Kilimanjaro

    1. Robert Irion

    SAN FRANCISCO—“As wide as all the world, great, high, and unbelievably white in the sun, was the square top of Kilimanjaro.” Those evocative words by Ernest Hemingway describe a scene that could vanish within 20 years, according to new field research reported at the meeting. More than 80% of the ice on Africa's highest peak has melted since the early 20th century, and its remnants join other glaciers that are ebbing from the world's tropical mountains at an accelerating rate.

    The dramatic findings, splashed on front pages and the evening news, may spur policy-makers and the public far more than abstract warnings of climatic trends, says Will Steffen, director of the International Geosphere-Biosphere Program in Stockholm, Sweden. “This is exceptionally important work,” Steffen says. “Tropical glaciers are a bellwether of human influence on the Earth system.” The past decade's warm years, it seems, have sounded that bell with unexpected force.

    Ice in the tropics sits at the knife edge of climate change. Slight temperature increases push the snowline to ever-higher altitudes, saturating fields of ice with water. Glaciers, which normally drain slowly from an ice cap and maintain a steady size, begin to melt and retreat. Researchers have observed ice waning on peaks in Kenya, Venezuela, New Guinea, Ecuador, and elsewhere. The famous ice fields on Kilimanjaro and in Peru appear especially frail.

    Aerial mapping of Kilimanjaro's summit in February 2000 revealed a 33% loss of ice since the last map in 1989 and an 82% decline since 1912, says geologist Lonnie Thompson of Ohio State University's Byrd Polar Research Center in Columbus. Just 2 weeks ago, Thompson's colleagues measured the levels at survey poles that they inserted into the ice pack last year. More than a meter of ice had melted in 12 months, out of a total thickness of 20 to 50 meters. “It won't take many more years like that to completely melt the ice fields,” Thompson says.

    Moreover, Thompson's group has documented runaway melting at Quelccaya, a massive ice cap in the Andes of Peru. Surveys reveal that Qori Kalis, Quelccaya's main drainage glacier, has retreated 155 meters per year since 1998. That's 32 times faster than the rate between 1963 and 1978. The area of the ice cap itself has shrunk from 56 square kilometers in 1976 to 44 square kilometers today. The hastening pace suggests that it too may dribble away within 20 years, says Thompson.
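    (Run those numbers and the acceleration is stark: a retreat of 155 meters per year that is 32 times the earlier pace implies that Qori Kalis was pulling back only about 155 ÷ 32, or roughly 5, meters per year between 1963 and 1978. That is a back-of-the-envelope reading of Thompson's figures, not a published estimate.)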

    These mass meltings have both scientific and social consequences, Thompson says. Without corings from tropical ice packs, climatologists will soon lose a valuable way to reconstruct El Niño histories and other patterns in the tropics for the last several thousand years. Already, he adds, water is flowing through the porous ice and smearing out the annual chemical signals.

    Citizens will feel different impacts. As Quelccaya and other Peruvian ice fields disappear, sources of irrigation and hydroelectric power will dry up. Peru and other nations may need to burn more fossil fuels to compensate, exacerbating the warming trend. And in Tanzania, government officials worry that a denuded Kilimanjaro will lose tourist appeal. “One of the attractions is to see ice at 3 degrees [latitude] south of the equator,” Thompson observes.

    Steffen praises Thompson's team for its long timeline of 20 years or more at sites around the globe. “That's essential for the data to have a public policy impact,” he says. He envisions one other ice-related signal that might resonate as strongly: large-scale melting of Arctic sea ice (Science, 19 January, p. 424). “The question has been, ‘How fast will the Earth system respond to these changes in heat?'” Steffen says. From the top of Africa to the top of the world, the answer appears to be very fast indeed.

  17. AAAS MEETING

    The Flip Side of Obesity Research

    1. Martin Enserink

    SAN FRANCISCO—In recent years, researchers have learned a lot about how the body regulates its weight. The research is driven in part by the prospect of developing “thin pills” for people who crave calories but loathe exercise. But the same science may help patients who need to gain weight but can't.

    At the AAAS meeting, researchers showed how a brain pathway that holds promise for obese people may also be key to battling the devastating weight loss seen in some patients suffering from cancer, rheumatoid arthritis, and AIDS.

    Those patients' bony limbs and ghostly faces are caused by a combination of increased metabolism and decreased appetite, resulting in a rapid breakdown of body fat and muscle protein. This condition—which goes by the little-known name of cachexia—is a serious medical problem in its own right: It weakens patients, undermines their ability to be treated, and often hastens their death.

    To prevent this, researchers are looking at various newfound drug targets. Since the 1994 breakthrough discovery of leptin, a hormone produced by fat cells that helps control the body's metabolism, researchers have begun unraveling the brain circuitry through which leptin exerts its effects (Science, 10 March 2000, p. 1738).

    One of the main players is the so-called melanocortin 4 (MC4) receptor in the hypothalamus. When MC4 is stimulated by a neuropeptide called α-MSH, appetite appears to go down and energy use up—exactly what you want to stay slim. That's why MC4 is now one of the prime candidates for antiobesity drugs.

    But what if you block the receptor, so that it can no longer be stimulated? In 1999, Jeffrey Tatro of Tufts University and the New England Medical Center in Boston and colleagues did just that. Tatro's team injected rats with a lipopolysaccharide (LPS) from the cell wall of gram-negative bacteria, a common technique to induce cachexia and fever in the animals. When the researchers then injected α-MSH into the animals' brains, food intake decreased; but when they injected a compound that blocked the MC4 receptor, the rats started eating normally.

    At the meeting, Roger Cone of Oregon Health Sciences University in Portland presented the results of a broader study in mice that also has direct relevance to cancer. Cone's postdoc Daniel Marks confirmed Tatro's results with the MC4 receptor and then, to clinch the case, showed that genetically engineered mice that lacked the receptor altogether didn't waste away when injected with LPS. Marks also showed that blocking the receptor could prevent cachexia induced by two different types of cancers.

    “It's a nice study,” Tatro says. “It's interesting that this also seems to work in cancer models.” Harvard obesity researcher Joel Elmquist agrees and says MC4 makes “a very interesting target” for anticachexia drugs.

    Cone has teamed up with Neurocrine Biosciences, a company in San Diego, to produce and test small-molecule drugs that block the MC4 receptor in mice. At a Keystone meeting this week, the team was scheduled to present the results on one interesting candidate. “It works really well,” Cone says.

  18. AAAS MEETING

    Fighting Diplomatic Technophobia

    1. Andrew Lawler

    SAN FRANCISCO—The first science and technology (S&T) adviser to the U.S. Secretary of State says that his strategy of building a “superconducting bus-bar” between the scientific community and foreign policy-makers is well under way—against the odds.

    Norman Neureiter, who got the job 5 months ago after a long campaign by the R&D community to establish the position (Science, 8 December 2000, p. 1893), has persuaded government agencies, industry organizations, and scientific societies to fund S&T-related slots at embassies overseas and at State headquarters in Washington. On 16 February, he told an audience at the meeting that this approach is vital, given State's constrained budget and lack of in-house S&T expertise.

    “To be effective we must penetrate, and demonstrate our value to, the regional bureaus,” he said. “They are the real heart of the department, but they are also places where, traditionally, technophobia is pandemic.” He noted that most S&T counselors and officers are Foreign Service officials with little or no technical background.

    To help fill that gap, NASA Administrator Dan Goldin has promised to pay for five of his agency's staff scientists to work at State, both in Washington and abroad, on 2-year assignments, according to Neureiter. The National Science Foundation, meanwhile, has agreed to support staff members willing to go abroad to an embassy for 1- to 3-month stints. Some three dozen embassies have responded favorably to the idea. “The embassies are on the front line” and are desperate for advice on research-related areas ranging from climate change to genetic engineering, Neureiter said.

    The American Institute of Physics also has set up the first paid science diplomat fellowship program among professional societies, and Neureiter indicates that the American Physical Society and other societies may follow suit. The Association of American Universities, a Washington-based organization that represents research universities, put out a call to its members for volunteer State Department summer interns to work on S&T issues. More than 30 students have responded. Some of those, Neureiter hopes, will remain in the Foreign Service—a first step “in building a more scientifically literate Foreign Service officer corps for the future.”

    Neureiter also is trying to drum up interest in industry. The Industrial Research Institute, a Washington-based group that represents 180 U.S. companies, last week approved plans for diplomatic fellows: up to five industry scientists who would work in embassies abroad for lengthy tours.

    Despite the transition of power at State, the S&T adviser's position seems safe for the moment. He met recently with Secretary of State Colin Powell, who told him, says Neureiter, that “everything is fine for now, and we will be taking a look at this down the road.”

    Neureiter's appointment lasts until September 2003. So far, he is getting high marks from people who have been pushing for greater S&T input into State. “We're impressed with the rapidity with which [he has] moved,” said Mary Good, a former Clinton Administration official now at the University of Arkansas, Little Rock. Adds William Golden, a former government adviser who led the push for a stronger science presence at the department: “The tradition has been to bash State, but now there's hope for the future—and the changes already are remarkable.”

  19. HEMOPHILIA

    After a Setback, Gene Therapy Progresses ... Gingerly

    1. Trisha Gura*
    1. Trisha Gura is a science writer in Cleveland, Ohio.

    Amid all the controversy and allegations over gene therapy, clinical research is continuing, and something close to a success story is emerging

    Katherine High and Mark Kay were on a roll. Working independently, the researchers had pulled off a scientific tour de force: In January 1999, each reported using gene therapy to partially correct hemophilia in dogs. By April, the bicoastal duo—High is a hematologist at Children's Hospital of Philadelphia and Kay is a pediatric geneticist at Stanford University—had joined forces and won approval to test the new therapy in humans. In the first stage of clinical trials, they injected a novel gene into the leg muscles of three hemophilia patients. The outcome proved better than either had dared to hope: At a very low dose designed to test safety, not efficacy, the therapy did not harm patients and even showed signs of alleviating disease symptoms. The results, published in the March 2000 issue of Nature Genetics, brought a wellspring of hope to hemophilia patients—of whom there are 15,000 in the United States alone—and a welcome tonic for a field in which hype has far outstripped payoffs.

    Seemingly on their way to the gold, High's and Kay's teams were preparing to up the dose in the next trio of patients and seek approval for a second trial that would inject the novel gene directly into patients' livers. But on 17 September 1999, the death of Jesse Gelsinger in a gene therapy trial hit headlines—and the field—with sobering force (Science, 17 December 1999, p. 2244). “We were worried,” Kay recalls. “We had no doubt that the field was going to fall under a lot of scrutiny.”

    Many clinical trials were immediately put on hold; others were cancelled outright. Gelsinger's death, caused by the injection of a novel gene construct into the young man's liver, prompted a spate of investigations that raised questions about everything from the choice of vector to deliver the novel gene, to ethical issues such as patient recruitment, consent forms, and financial conflict of interest. Overall, the tragedy forced the research community into a collective soul search. Although successes had been few and far between, gene therapy practitioners had assumed their research was safe—until then.

    “It was a defining moment where people began to say, ‘Let's separate the wheat from the chaff here,'” says Society for Gene Therapy president Inder Verma of the Salk Institute for Biological Studies in La Jolla, California. Verma headed a special meeting of the Recombinant DNA Advisory Committee (RAC) in December 1999 to investigate the Gelsinger case.

    Katherine High

    “It was such an obvious idea, transferring genes from one organism to another. But as we used to say in the lab, ideas are cheap.”

    CREDIT: BILL NATION

    High and Kay voluntarily lowered the dose they had planned to give the second cohort of patients in the muscle trial. They also postponed seeking approval for the liver trial, which would inject the novel gene into the hepatic artery—the same route of administration used in the Gelsinger trial.

    Since the incident, away from the public glare, clinical work in the gene therapy field has quietly continued. Indeed, some of the most encouraging results to date, High and Kay's included, have been reported in this past year. In April 2000, for instance, a group at the Pasteur Institute in Paris published the first unequivocal results showing that gene therapy can treat a rare immune disease called severe combined immunodeficiency (SCID) (Science, 28 April 2000, pp. 627, 669). Four months later, a team at M. D. Anderson Cancer Center in Houston reported success using gene therapy in combination with chemotherapy to halt tumor growth in patients with head and neck cancer. Most recently, a group at the University of Pittsburgh used gene therapy to repair a defect in mice with an ailment that mimics Duchenne type muscular dystrophy.

    But as High and Kay readily concede, the field has been irrevocably changed by what happened that day at the University of Pennsylvania in Philadelphia and the stringent regulations that have since emerged. “The final rules are still not implemented,” says Kay. “But, depending on what happens, clinical trials may become so difficult and expensive that academic centers will not be able to do them.” The current environment could deal a hefty blow to a field long plagued by doubt—or simply mark its transition from infancy to maturity.

    A deceptively difficult task

    High and Kay want to correct hemophilia by giving patients a novel therapeutic gene to make up for a defective one—in this case, a gene that codes for a blood clotting factor. The strategy seems straightforward: Bundle the gene inside a modified virus and allow that vector to ferry the gene inside the patients' cells. Then, if all goes as planned, the new gene will insert itself into the cells' chromosomes. There, basic cell machinery will switch on the corrective gene and produce the much-needed clotting protein.

    But, as High and Kay can attest after almost a decade of trying, the feat is much more elusive than it sounds. Like everybody else in the field, they had to overcome a core—and daunting—problem: how to get enough functioning genes into target cells so they would make sufficient quantities of protein, and how to do so without triggering a severe immune reaction. After numerous fits and starts, High and Kay are now using a promising new viral vector, but it still has limitations: It can't yet carry the full-length gene needed to correct one common form of hemophilia. And overall, no gene therapy treatment has yet reached the market for any disease. Through the course of their studies, High and Kay hooked up with a company that hopes to commercialize their treatment, but such a product is years away, they caution. In short, although High and Kay are now considered two of the stars of gene therapy, their decade-long struggle shows just how tough life can be on this new medical frontier.

    Mark Kay

    “I wasn't worried [from] the standpoint of having a safety problem. I was more worried about what the environment and the perception might be.”

    CREDIT: BILL NATION

    High, now director of research in the hematology division at Children's Hospital, never expected to end up pursuing gene therapy—or in the middle of one of the most controversial fields in medicine. In fact, she dreamed of working as a bench chemist. But in the late 1970s, she got hooked on medicine in general and hemophilia in particular during a stint with pathologist and coagulation expert Kenneth Brinkhous and his famed blood coagulation group at the University of North Carolina, Chapel Hill.

    The group's main focus was hemophilia, an X chromosome-linked disease that afflicts about 1 in every 5000 people, mostly males. The disease is characterized by the lack of at least one of a family of key proteins that aid in blood clotting. Two of the most prominent are called Factor VIII, which is the culprit in hemophilia A, and Factor IX, which when defective causes hemophilia B, the less prevalent of the two disease forms. The sickest patients suffer uncontrollable bleeding episodes and debilitating joint damage.

    Then, as now, clinicians had few treatment options for hemophilia: mainly giving patients injectable concentrates of a clotting factor derived from blood plasma (now, by recombinant means); or, in developing countries, where most hemophiliacs don't live beyond their 20s, simple bed rest and ice. Patients with a severe form of the disease—defined as making less than 1% of the normal amount of either clotting factor—have to inject themselves with blood factor proteins up to three times a week at a cost approaching $100,000 a year. Such injections promote clotting and temporarily relieve joint pain, but they have had ugly consequences: More than 90% of adult hemophiliacs are now infected with hepatitis C or HIV from contaminated blood products. Patients with a more moderate form of the disease—defined as having 1% to 5% of normal levels of the clotting factor—live a significantly easier life with far fewer injections.

    Thus, the Holy Grail for any hemophilia gene therapist is to boost the active level of clotting factor above the benchmark 1%. That fairly lax requirement is one reason why a handful of intrepid researchers venturing into gene therapy in the early 1990s picked hemophilia as their target. Another reason is that the protein can make its way into the bloodstream, where it is needed, when the gene is expressed in any one of a multitude of cell types, unlike, say, cystic fibrosis, in which the gene must be expressed in the lungs or surrounding tissue.

    But first investigators had to find and characterize the human genes for Factor VIII and Factor IX—a feat pulled off by researchers including George Brownlee and colleagues at Oxford University in the mid-1980s and Darrel Stafford at Chapel Hill. Hemophilia researchers, including High, spent the next 5 years determining how defects in those genes influence disease severity. “This was a fertile time for expressing clotting factors and getting large amounts of them to study,” High recalls. Specifically, High studied how alterations in the structure of the protein affect its function as a blood clotting enzyme. But her group needed animal models in which they could better study the disease and its treatments.

    So in 1989, High and postdoc Jim Evans identified, cloned, and characterized the Factor IX gene defect that causes hemophilia B in a colony of dogs born with the illness. Canines are the animal model of choice because of their size and similarity to humans. But dogs are expensive to house and relatively hard to work with. To create a more malleable mouse model, three groups led by Stafford at Chapel Hill, Verma at Salk, and Erlinda Maria Gordon at the University of Southern California in Los Angeles knocked out the gene for Factor IX in a strain of mice in the early 1990s. Haig Kazazian, currently chair of genetics at Penn, did the same for Factor VIII. With this work, hemophilia researchers had a gamut of organisms to work with, from cells to rodents to dogs. “It is a model that a lot of gene therapy ought to copy, if it could,” says gene therapist Savio Woo of Mount Sinai School of Medicine in New York City.

    Early in 1991, High and colleagues decided to take the plunge into gene therapy. “It was such an obvious idea, transferring genes from one organism to another,” High explains. “But as we used to say in the lab, ideas are cheap.” And often they don't work, High soon found out. When trying gene transfer experiments in animals, her Chapel Hill team quickly ran into the roadblocks that had stymied other fledgling gene therapists: namely, the vectors. For years, researchers could not coax the contemporary virus vectors to shuttle Factor IX genes into cells in culture. One popular vector of the day, the retrovirus, didn't deliver enough genes into cells to eke out even 1% of normal Factor IX levels. The other available vector, the adenovirus, had its own drawbacks, chief among them that the immune system easily recognizes the virus, which in its unaltered state causes the common cold. Host cells harboring adenoviruses and their corrective genes are quickly pitched out of the body.

    On a national front, meanwhile, gene therapy was gaining credibility. After lengthy debates on safety and ethical issues, W. French Anderson, then at the National Institutes of Health (NIH), and colleagues in 1990 had won approval from the RAC to conduct the first human gene therapy trial in the United States. In September, with reporters and photographers on hand to record the event, Anderson and his colleagues injected a corrective gene into a 4-year-old girl with SCID. Trials to treat various cancers followed 5 months later.

    A new vector offers hope

    A year after Anderson's pioneering experiment, High moved to Penn, where she could devote herself entirely to lab work. She set up shop in the pediatrics department of the affiliated Children's Hospital. At Penn, officials were aggressively building the Institute for Human Gene Therapy, now the largest in the country. James Wilson, who later led the team of doctors that treated Gelsinger, was hired in 1993 to head it.

    Signs of success.

    In clinical trials, High and Kay injected adeno-associated virus carrying a corrective gene for hemophilia into the leg muscles of several patients. As hoped, the cell's machinery switched on the gene that codes for blood clotting factor IX. Fibers expressing the clotting factor appear green.

    CREDIT: MARK A. KAY AND ROLAND W. HERZOG/THE CHILDREN'S HOSPITAL OF PHILADELPHIA AND UNIVERSITY OF PENNSYLVANIA MEDICAL CENTER

    About the time High made her move, a hot new virus vector, known as adeno-associated virus or AAV, made its debut. High and many others were right on it—including Kay on the other side of the country. First developed and patented for use as a biological vector by virologists Barry Carter, then at NIH, and Nicholas Muzyczka of the University of Florida, Gainesville, the novel vector looked like the much-needed shot in the arm for the disheartened field. The vector, in essence, is a core of viral DNA shrouded in a protein coat. Related to adenovirus in name only, AAV doesn't cause any disease in humans or other mammals. The virus simply enters cells and homes in on chromosome 19. There, the strand of viral DNA inserts itself and becomes a permanent part of the host cell's chromosome.

    Given this mode of integration, researchers hoped that this new vector, unlike adenovirus, might be able to avoid detection and annihilation by the host immune system. What's more, it seemed to be able to target nondividing cells. But again, there were problems. Many researchers soon found that they could not coax the virus to grow in culture in the lab. Nor could they shoehorn large genes, such as Factor VIII, into the viral capsid. And once loaded with smaller corrective genes, the virus no longer integrated into its predictable spot on chromosome 19 but inserted randomly throughout the genome.

    A few found their way around these problems. One was viral guru Jude Samulski, now at Chapel Hill, who in the early 1990s pioneered the use of AAV as a gene-delivery vehicle. Samulski encouraged High and supplied her with the biological materials she needed to make the crucial vectors. With Samulski's help, High succeeded in splicing the gene for Factor IX, which is shorter than that for Factor VIII, into AAV.

    In key experiments in 1997, High and postdoc Roland Herzog teamed up with Wilson. The team injected AAV carrying human Factor IX genes into the leg muscles of mice; after the gene integrated into muscle cell chromosomes, the rodents steadily and stably churned out therapeutic levels of Factor IX. The following year, High was able to use AAV, loaded with human Factor IX, to correct hemophilia in Stafford's genetically altered mice by injections into either the rodents' leg muscles or livers.

    Finally in January 1999, High, Herzog, and Tim Nichols, also of Chapel Hill, reported in Nature Medicine that they had partially corrected hemophilia B in a dog colony. To do so, they injected AAV, laden with canine Factor IX, into the animals' leg muscles. The paper ran back to back with an equally eye-catching report. Kay's group at Stanford, in collaboration with Richard Snyder, then at Somatix Therapy Corp. in Alameda, California (now owned by Cell Genesys), had independently engineered its own version of AAV carrying the gene for Factor IX. The West Coast team had pumped the vector directly into liver veins of hemophiliac mice and dogs obtained from Nichols. The liver procedure, although more invasive and therefore more risky than muscle injection, proved to be slightly more efficacious. In both procedures, the treated dogs produced at least 1% of normal blood levels of Factor IX. Kay's liver protocol, however, needed 10-fold fewer viruses to do the trick, in part because of stronger liver-specific promoters that drive the gene harder.

    “These were very promising studies,” says Anderson, who notes that at that time, no one had achieved such high levels of expression by injecting a new gene directly into muscle tissue.

    From competition to collaboration

    Unlike High, Kay had no background in hemophilia—it was gene therapy itself that drew him into the field. As an M.D.-Ph.D. student at Case Western Reserve University in Cleveland in the early 1980s, Kay was struck by the potential of this budding field. Admittedly naïve, Kay initially feared the field would pass him by. The concept seemed so simple, “I figured by the time I'd finished medical school, residency, and my fellowship, all the interesting diseases would already be cured,” he recalls.

    That was hardly the case. By the time Kay graduated from Case in 1987, Anderson and other gene therapy pioneers were still wrestling with uncooperative vectors and tough regulatory hurdles that seemed to be getting tougher, as members of the RAC battled with Anderson and each other. Kay got his chance to witness gene therapy experiments firsthand in 1989, when he moved to Baylor College of Medicine in Houston, Texas. As he completed his pediatric genetics residency in the clinic, he also worked at the bench with molecular biologist Woo, who was then beginning to dabble in gene therapy.

    Kay wanted to use those nascent tools to help the children he saw in the clinic, most of whom were stricken with so-called inborn errors of metabolism. These diseases often involve rare genetic defects in various crucial liver enzymes—defects that cause devastating, if not fatal, consequences. “We could make the diagnoses, but we were really horrible at trying to develop efficacious therapy,” Kay recalls. “I realized that the chances of treating these diseases with anything other than gene therapy [were] pretty low.”

    Kay started working on hemophilia in the hope that any gene therapy techniques he devised could later be used for other liver-based disorders. By the end of his Baylor stint, Kay had been able to partially correct the defect in hemophiliac dogs using a retrovirus that carried Factor IX. But the procedure itself was draconian and “not something that you could do to people on a wide scale,” says Kay. Because retroviruses can only target dividing cells, injecting the vector directly into the body failed to get anything more than a negligible amount of corrective genes into liver cells. Kay's team had to remove two-thirds of an animal's liver, prod the cells to divide in culture, and then infuse the retrovirus vector carrying the Factor IX gene into the vein that runs into the liver.

    Despite the clinical impracticality, Kay published his work in 1993 as a “proof of principle” (Science, 1 October 1993, p. 29). Such obstacles gave Kay pause, however. “You solved one problem, and then you'd get another you did not anticipate,” Kay recalls. “I really thought hard about whether I should work on gene therapy.” At the same time, Kay and other gene therapists were confronting an increasingly skeptical research community. After repeated failures and slow progress, a 1995 report commissioned by then-NIH director Harold Varmus essentially warned the community to turn down the hype.

    Kay decided to stick with gene therapy, lightening his patient load and devoting most of his time to looking for other vectors and strategies to improve gene transfer into human cells. “I realized that to be effective, I couldn't be doing a little bit of everything,” says Kay, who by 1993 had moved to the University of Washington, Seattle. Kay's move came on the heels of High's relocation to Penn. And like High, Kay soon began working with AAV, trying to engineer a construct to treat hemophilia in a mouse model. Because of his bent toward liver diseases, Kay worked on liver routes of delivery instead of methods with muscle cells, enlisting the help of Snyder at Somatix to provide him with a steady supply of AAV.

    Almost a decade of effort paid off in 1997, when Kay and colleagues reported in Nature Genetics that they had provided normal mice with curative levels of human Factor IX via one injection of genetically altered AAV into the rodents' liver veins. Only then, says Kay, did “I realize that gene therapy really would work in people.”

    Kay and postdoc Hiroyuki Nakai spent the next 2 years trying to extend this work to dogs. In January 1999, as “friendly competitors,” Kay and High published their dueling Nature Medicine papers showing the partial correction of hemophilia (up to 1% of normal protein levels) in dogs. The feat placed them well ahead of their colleagues and competitors such as Salk's Verma, who in 1998 also used AAV—injected into the liver—to correct hemophilia in mice lacking Factor IX. “Up until this point, [corrective] Factor IX genes could be expressed but not enough to make therapeutic amounts of protein,” Verma says.

    Royal blood.

    Queen Victoria's family was plagued by hemophilia, an X chromosome-linked disease that affects mostly males.

    CREDIT: COURTESY OF KATHERINE HIGH

    Kay and High, who often sat together trading notes at NIH meetings on hemophilia gene therapy, soon decided to collaborate and move the work into humans, fusing her expertise in hemophilia with Kay's flair for manipulating vectors—a winning combination, says Anderson. High recalls saying to herself, “Well, I could stay funded for the next 20 years just doing mouse and dog experiments, and it would be a lot less grief for me. But sooner or later I have to ask, ‘Is this going to work in people?'”

    Into humans

    The deal was set. High would take the lead on delivering the corrective gene via muscles—and write up the necessary documents for Food and Drug Administration (FDA) approval in that tissue—while Kay would do the same for liver approaches. Avigen, a biotech company in Alameda, California, that High had been working with, would make the AAV vectors for both and help fund the trials. In this arrangement, the researchers sit on Avigen's scientific advisory board and are compensated for their time and expertise. To avoid the appearance of any conflict of interest, the two do not directly participate in recruiting patients, gaining informed consent, or treating patients in the trials. The regulatory bodies bought it. By April 1999, the pair passed through biosafety committee and internal review by boards at both Stanford, where Kay is now director of the program in human gene therapy, and Children's Hospital; initial review by the RAC; and then ultimate approval by the FDA for safety trials.

    Kay and High thought the biggest risk lay in a quirk of hemophilia. Individuals with certain forms of hemophilia—the type caused by large deletions of genetic material as opposed to simple misspelled nucleotides—do not produce any clotting factors at all; thus, the immune system has not had a chance to develop a “tolerance” to the otherwise native proteins. High and Kay worried that if they successfully introduced a gene for the clotting factor into patients with large gene deletions, thereby providing them with a steady supply of protein, the patients' immune systems might consider the protein to be foreign and make antibodies to destroy it. Not only would gene therapy be unsuccessful, but patients' lives might be endangered if, during a later bleeding episode, they produced antibodies to the clotting enzymes administered to save them.

    For these and other safety reasons, Kay and High decided to start with muscle instead of liver injections. Even though AAV takes 6 to 12 weeks to settle into a nesting site within the cell nucleus, the gene usually does not stray far from the site of injection—in other words, if the vector were injected into the muscle, it would stay there. “If you have some unanticipated untoward effect, you can go back and resect the muscle,” High notes. “Once you go into the liver, that's it. You are there.”

    To hedge their bets further, High and Kay selected only patients with misspelling mutations. Those may cause the gene to produce a faulty blood clotting protein, but a protein nonetheless. That meant that all patients in the trial would likely have been exposed to inactive blood clotting factors. The duo won FDA approval for their muscle trial—the first gene therapy trial ever to inject AAV.

    High watched anxiously from the side of the treatment room in June 1999, when hematologist Catherine Manno, who led the clinical team at Children's Hospital, injected the first patient with genetically altered AAV. To their relief, the procedure went without a hitch. As the team reported in the March 2000 issue of Nature Genetics, at the initial, suboptimal dose meant only to detect obvious safety problems, the first three patients, aged 23 through 67, showed no apparent toxicity. Even more encouraging, within 12 weeks after the injection, the team detected normal Factor IX genes—and the actual protein itself—in biopsies taken from the patients' legs. That meant that the gene had been incorporated into muscle cells and then produced its protein. The presence of Factor IX in the bloodstream suggested that the protein had successfully crossed from tissue to its target destination. To date, none of the three patients has produced any antibodies to fight off the protein.

    What's more, High and Kay reported, patients fared even better than animal studies would have predicted: Even at the low doses, one of the three patients showed a boost in circulating levels of Factor IX that topped the benchmark 1%. Two of the three patients also reported needing significantly fewer therapeutic protein injections to treat and prevent bleeding episodes.

    To Anderson, now at the University of Southern California, the results were “not a matter of excitement, but a matter of relief.” If gene therapy doesn't work for hemophilia, he says, it is unlikely to succeed for most other diseases. An ever-cautious High will only say she was “surprised and pleased” at the apparent, although preliminary, success.

    Although the early results appear positive, Salk's Verma also warns against overoptimism. Because the patients in the study did not yet make enough protein to cure their disease, they also did not make enough to test whether the added protein will trigger the production of anti-Factor IX antibodies. “It's a double-edged sword,” says Verma.

    The Gelsinger case

    Buoyed by their preliminary success, High and Kay moved to collect more data. But, just as the team was getting ready to inject the next three patients with a slightly higher dose and to propose a second trial, Jesse Gelsinger died. The news hit hard. The young man was being treated for an entirely different disease: a deficiency in a liver enzyme called ornithine transcarbamylase that's needed to remove ammonia from the blood. The Penn team was also using a different vector—adenovirus—one that Kay and High had abandoned. But there was one similarity: High and Kay were proposing to introduce their more benign vector via the exact same entry route that the Penn team had used: direct infusion into the main liver artery.

    Under scrutiny.

    The death of Jesse Gelsinger in a gene therapy trial at the University of Pennsylvania prompted a spate of investigations and hearings, including the December 1999 RAC meeting where James Wilson, head of Penn's Institute for Human Gene Therapy, testified.

    CREDIT: SAM KITTNER

    FDA soon halted all trials at Penn's Institute for Human Gene Therapy and stopped several other human studies using adenovirus vectors (as opposed to AAV). Although High and Kay's trial was not directly affected, all gene therapy fell under intense scrutiny.

    Without any prodding, High and Kay immediately began to review their animal and human data to decide how or whether to go forward. “I think it is really important, number one, that safety issues are addressed,” says Kay. “There has been some debate by members of the gene therapy community that if the rules had been followed previously, this might not have happened,” he says, referring to the death at Penn.

    The pair decided to scale back the next dose escalation to a half-log increment and sent a letter to the FDA requesting permission to modify their trial. Kay and High had also planned to present their proposed liver trial to the RAC for discussion at its December 1999 meeting but postponed the review until the following March. “I wasn't worried [from] the standpoint of having a safety problem,” Kay says. “I was more worried about what the environment and the perception would be.”
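    (Assuming the trial followed the standard logarithmic dose-escalation scheme, which the protocol details here do not spell out, a half-log increment means multiplying the dose by 10^0.5, or roughly 3.2-fold, rather than the tenfold jump of a full log step.)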

    It proved to be a wise decision. At a packed 2-day meeting conducted under the glare of television cameras, it became clear that both the adenovirus vector and the route of administration were suspect. Scrutiny continued at the next RAC meeting on 9 March, when Kay and Bert Glader, the Stanford physician who would head the clinical trials, presented their proposal to use AAV, injected into the liver artery, to carry a corrective gene into hemophilia patients. It passed muster. The RAC approved the proposal, and the trial now sits before the FDA awaiting the ultimate nod.

    As the researchers wait, the muscle trial continues. On the advice of FDA, Manno has amended the consent form for clinical trials to mention the Gelsinger death and the risks of gene therapy. The middose crop of patients is doing well, Kay and High reported at an American Society of Hematology meeting in San Francisco in November 2000. One patient achieved the 1% benchmark and reported a reduction in self-administration of clotting factor. The other two, however, did not reach that benchmark. “We're still looking for a dose where every subject gets a result over 1%,” says High. The team recently increased the dose by another half-log in three more patients.

    High and Kay will likely have to meet even more stringent requirements in the liver trial. Over the past year, the RAC proposed new reporting and monitoring requirements to enable regulators to track adverse events. Among other things, these rules call for an independent monitor to check that data are being collected and reported properly. Hiring such an expert can add $100,000 to the already hefty price of clinical trials. Kay says he spends more and more of each day complying with these rules and worries about the effect of such costs and paperwork on the field.

    Despite the expense, however, the pharmaceutically minded are convinced the investment will pay off. Avigen announced in November that it had entered a partnership with pharmaceutical giant Bayer Corp., headquartered in Leverkusen, Germany. Bayer, with a long interest in hemophilia drugs, plunked down $60 million to help conduct and finance phase II and phase III clinical trials with Avigen's Factor IX-laden AAV vector, dubbed Coagulin-B. In exchange, Bayer will hold regulatory licenses of the drug worldwide and receive royalties from its sales.

    Both High and Kay say the Gelsinger tragedy has changed their working lives. “At this point, the field is not something to go into if you want to labor in obscurity,” High remarks. “It's a highly visible field because of public, commercial, and political interest. That creates a great deal of pressure.”

    But, having spent years treating patients, they also believe the potential payoff is well worth the pressure. Indeed, Kay remains as optimistic as he was in medical school. Says Kay: “We are starting to see evidence of success and to really appreciate the potential of gene therapy for the entire field of medicine.”

  20. In Search of the First Europeans

    1. Michael Balter

    Who were the first people to set foot in Europe—and why did it take them so long to get there?

    SIERRA DE ATAPUERCA, SPAIN—Paleoanthropologist Juan Luis Arsuaga of the University of Madrid stands at the bottom of a towering limestone cliff just outside Burgos in northern Spain. Soaring above him, a meshwork of metal scaffolding blankets the 18-meter-high Gran Dolina, one of several hidden caves that were exposed in the late 19th century when a deep trench was dug through the hill for a mining railway.

    Here, in 1990, Arsuaga and his co-workers found five pieces of quartzite that appeared to be primitive stone tools—in stratigraphic levels thought to be close to 1 million years old. But at the time, many researchers believed there were no humans in Europe before about 500,000 years ago. So in June 1994, a small squad of excavators led by Arsuaga's colleague Eudald Carbonell of the University of Tarragona set out to dig in the Gran Dolina and prove the conventional wisdom wrong.

    One day in early July, they struck pay dirt in a stratigraphic layer called TD6. First came some human teeth, then a fragment of a lower jaw, followed by pieces of limb bones, and finally, part of a skull. Just a few days earlier, paleomagnetics experts had reported their first results from TD6: The layer was nearly 800,000 years old, the oldest uncontested date for hominid fossils in Europe. The common wisdom was shattered. In these caves, said Arsuaga, looking up at the Gran Dolina, there were people as long ago as 1 million years.

    And in another cave just a stone's throw away, called the Sima de los Huesos—the “Pit of the Bones”—the team has uncovered the remains of some 30 individuals whose bodies were apparently thrown there about 300,000 years ago. The amazing collection of hominid fossils in the Sima, the largest anywhere, may represent the ancestors of the Neandertals, who later lived throughout Europe and the Middle East. These fossils were found entirely by chance, says Arsuaga: “If the railroad had not been built here, we may never have found these caves.” He adds that hominid bones rarely survive the elements to be preserved as fossils. Arsuaga believes that cannibalistic practices at Gran Dolina and early burial practices at Sima de los Huesos helped create this treasure trove of ancient human bones.

    Spreading out.

    Archaeological sites record human expansion into Asia, then Europe.

    Atapuerca's riches are all the more prized because they are so rare. The region we now call Europe is only a short distance from Africa, the acknowledged birthplace of humans. Yet the archaeological and fossil record of the most ancient Europeans is frustratingly sparse. Although there are signs of early wanderers in Asia at least 1.7 million years ago and in the Middle East at 1.5 million years ago, early human colonizers seemed to have largely bypassed Europe. Only after 500,000 years ago—the age of an explosion of archaeological sites across Europe—did early humans take up residence on the continent, researchers thought. Before that time, “there [was] so little data to go on,” says archaeologist Mark Roberts of University College London (UCL), leader of recent excavations at Boxgrove in southern England, where early humans appear to have butchered horses 500,000 years ago.

    But discoveries over the past decade have brought new insight into the waves of migration into Europe and Asia. Now most anthropologists agree that early humans frequented southern Europe beginning perhaps 1.2 million years ago. But there are only a handful of these most ancient sites, and many researchers aren't sure just what species name to give to the first Europeans, or how many species were here. There is even less agreement on where the first settlers came from and whether they gave rise to later Europeans. And yet the recent discoveries at Atapuerca and other sites in Spain and Italy indicate that Europe was more than just a neglected backwater during the early days of human evolution. “After Atapuerca, we have to rethink everything,” says Roberts.

    A “short chronology”?

    The migration story begins in Africa, where the human lineage was born some 2.5 million years ago. But just how many waves of migration left that continent, which species they were, and where they went remain the subject of much speculation. There are a few solid data points, and the earliest suggest a very ancient migration out of Africa about 1.7 million or 1.8 million years ago. Most scientists now agree that simple stone tools and human bones discovered at Dmanisi, Georgia, indicate that humans were there at least 1.7 million years ago (Science, 12 May 2000, p. 948). The German-Georgian team that made the discovery believes that the remains belong to a tall, fully erect species called Homo ergaster, previously found only in Africa starting about 1.9 million years ago. Other researchers prefer to use the name H. erectus for these fossils as well as for many other human fossils found across Asia.

    Short chronology goes long.

    An explosion of hominid remains appears in Europe about 500,000 years ago, but a handful of accepted European sites now date back as far as 1.2 million years ago.

    Many researchers believe H. ergaster/H. erectus, carrying simple Oldowan stone tools—named after the Olduvai Gorge where they were first found—left Africa early and spread widely. Signs of this early diaspora include Dmanisi as well as other, more contested finds. Hominid remains often attributed to H. erectus on the island of Java in Indonesia have been dated to 1.8 million years ago, for example. And hominid jaw and tooth fragments as well as stone tools dated to 1.9 million years ago have been found at Longgupo Cave in Sichuan province, China, although both claims have drawn varying degrees of skepticism. There are persistent, if not fully accepted, claims for even earlier sites in Pakistan and China, too. Later, beginning about 1.1 million years ago, there are many widely accepted human sites across Asia.

    To researchers working in Europe, such ancient dates can only give rise to envy. Over the years there has been a smattering of widely disbelieved claims for stone tools and hominid fossils in France and Spain dating as far back as 2.5 million years ago, and somewhat stronger arguments for hominid activity between 700,000 and 1 million years ago at sites in France, Spain, and Italy. But in the early 1990s, many scientists questioned whether humans were in Europe before 500,000 years ago, when uncontested hominid remains and advanced stone tools begin to show up at far-flung sites such as Boxgrove, Tautavel in France, and Mauer in Germany.

    In a widely debated article in the September 1994 issue of Antiquity, prehistorians Wil Roebroeks and Thijs van Kolfschoten at Leiden University in the Netherlands threw down the gauntlet by arguing for a “short chronology” of human occupation in Europe—no more than 500,000 years. They reviewed a number of southern European sites where claims had been made for stone tools dated at 700,000 years or earlier, including Le Vallonet in France, Isernia and Monte Poggiolo in Italy, and the Orce Basin in Spain. The pair argued that in nearly every case the dates were unreliable or the “tools” were more likely to have been created by natural erosion. They concluded that their theory would be very easy to falsify: “The finding of only one [earlier] site … would disprove it,” they wrote. Ironically, while the Antiquity article was still in press, two hominid fossil discoveries were made that would do just that.

    Atapuerca and Ceprano

    In March 1994, Italian scientists discovered a hominid skull—fragmented by the bulldozer that uncovered it—near the town of Ceprano, about 80 kilometers southeast of Rome. Dating of the clay layer in which the skull was found suggests that it might be 800,000 to 900,000 years old, although some experts prefer a safer age of about 700,000 years. Then in 1994 and 1995, the Spanish team found skull fragments in the Gran Dolina beds at Atapuerca. The paleomagnetic dates of about 780,000 years were later verified by electron spin resonance and radiometric methods on animal teeth from the same layers. Suddenly there were two very early sites—and they even seemed to represent different species.

    The Ceprano skull resembles H. erectus, the only apparent sighting of this species in Europe, according to reconstructions by anthropologist Ron Clarke of the University of the Witwatersrand in Johannesburg, South Africa, and by the Italian team, both published in the October 2000 Journal of Human Evolution. With features such as a massive browridge and a sharply angled occipital bone at the back of the skull, “to me the cranium appears very erectus-like,” agrees paleoanthropologist Philip Rightmire of the State University of New York (SUNY), Binghamton.

    And a fragment of the Gran Dolina frontal bone and parts of an upper jaw and midface from a juvenile bear an eerie resemblance to modern humans. That was a shock, because the Gran Dolina people lived at least 600,000 years before most researchers believe modern humans evolved. The Spaniards concluded that the Gran Dolina skull represents a new human ancestor, which they call Homo antecessor. They argue that this hominid gave rise both to our own species—H. sapiens—and the Neandertals, although many of their colleagues are reluctant to accept that conclusion, given that the skull is fragmentary and comes from a juvenile (Science, 30 May 1997, p. 1331).

    In any case, these finds put an end to the short chronology theory, at least for the whole of Europe. Roebroeks now concedes that the Gran Dolina skulls—as well as Oldowan-like stone tools from Orce, near Granada, which may be as old as 1.2 million years—do indeed disprove the theory. “The short chronology has been falsified for the Mediterranean,” he says.

    But Roebroeks argues that these few certified early sites in southern Europe represent an initial incursion into the continent that did not take permanent hold. He now proposes a “two-phase model” for the colonization of Europe: an early “intermittent occupation of the Mediterranean … followed by a substantial occupation of the Mediterranean and a significant, virtually continuous occupation of the areas north of the Alps and Pyrenees mountains from about 500,000 to 600,000 years onward.” This picture sounds right to many other scientists. “The hominid occupation ebbed and flowed with the changes in climate,” says paleoanthropologist Russell Ciochon of the University of Iowa in Iowa City, with most occupations taking place during interglacial periods.

    But Atapuerca may tell a somewhat different tale, UCL's Roberts says. The stratigraphic levels at the cave complex span several hundred thousand years of occupation, from the earliest levels to the most recent, estimated at about 250,000 years. “There are people there over a hell of a long time period,” Roberts says. They held on “through a number of interglacial cycles and [were] able to survive quite extreme climates.” Atapuerca team member Antonio Rosas of the National Museum of Natural Sciences in Madrid agrees that early humans probably began frequenting the area on a steady basis “beginning about 900,000 years ago.”

    The long journey

    Even if people lived at Atapuerca for that long, anthropologists are left puzzling over the lack of evidence for an even earlier European occupation. If humans were already at Dmanisi, on the cusp between Europe and Asia, at least 1.7 million years ago, why did it take them so long to get to Europe proper? “The apparent time lag between Dmanisi and the first European traces is puzzling, but probably also very informative,” says Roebroeks.

    One possible answer is that the first migrants out of Africa turned east rather than west, perhaps steered by geology and climate. During the late Pliocene epoch, some 1.8 million to 2 million years ago, conditions might have been more favorable for sorties out of Africa than during the period of frequent glaciations that followed. And during the late Pliocene there was a land bridge between Africa and Arabia at the southern end of the Red Sea. But that route led to Asia, not Europe. And although it's clear that humans reached the Middle East by 1.5 million years ago, because stone tools dating to this time have been found at ‘Ubeidiya in Israel, the Taurus and Zagros mountains in Turkey and Iran present a “major barrier” to moving north and west into Europe, notes biologist Alan Turner of the University of Liverpool, U.K., in the September 1999 issue of Antiquity.

    Some researchers see support for this scenario in the kinds of stone tools early humans left behind. The first wave out of Africa apparently used Oldowan tools, which show up in Africa about 2.6 million years ago. About a million years later, a more advanced toolmaking technology, called the Acheulean and consisting of hand axes and other carefully crafted symmetrical tools, apparently arose in Africa (see Review on p. 1748).

    But Acheulean tools are rarely found in Asia beyond ‘Ubeidiya, even at more recent sites, and they are absent from the early European sites dating before 500,000 years ago. To explain this, in the early 1990s paleoanthropologist Nicholas Rolland of the University of Victoria in Canada proposed the “long journey” hypothesis. According to Rolland, early humans left Africa and colonized Asia before the Acheulean technology was invented. But the mountains prevented them from turning west. Much later, taking a more northern route through central Asia that avoided the most formidable ranges, these Oldowan-bearing hominids dispersed west into Europe, eventually showing up at sites such as Atapuerca and Orce. “The fact that hand axes do not appear in Europe until about 500,000 years ago suggests that hominids were trapped in the Levant and could not move north,” says archaeologist Sarah Milliken of University College Cork in Ireland. Adds paleoanthropologist Chris Stringer of the Natural History Museum in London, “We might have had a very complex situation in Europe, with people coming into the continent from different directions, including Homo erectus coming over all the way from the Far East.”

    But there are other possible explanations for the perplexing pattern of stone tool use. For example, Tarragona's Carbonell notes that Oldowan and Acheulean tool users overlapped in time in Africa, and he suggests that early humans there were split into two cultural groups. Competition between them may have driven the Oldowan users to migrate, leaving the Acheulean group to venture out later in an independent movement.

    Another possible route for the first settlers of southern Europe, including the Atapuercans, would have been the most direct: straight across the Mediterranean from North Africa. The rise and fall of sea levels during the Pleistocene sometimes narrowed the Strait of Gibraltar to as little as 6 or 7 kilometers, compared with its current width of 13 kilometers. And the straits between modern-day Tunisia and Sicily were at times dotted with islands. “Judging from the distributions of archaeological material, and the early dates in the south but not the north of Europe, early humans must have crossed the Straits of Gibraltar and very probably also the route into Sicily in the Early and Middle Pleistocene,” says University of Oxford archaeologist Derek Roe. (The fossils on the island of Java suggest that humans may have had boats very early, but there is no direct evidence of such ancient seafaring.)

    And however narrow these sea passages were, they do seem to have blocked the migrations of other large mammals, faunal experts note. “No large mammals from the Lower Pleistocene, nor fauna of an African origin, have been found” in Sicily, notes independent Spanish faunal expert Bienvenido Martinez-Navarro. Likewise, with the exception of one giant baboon, few claims for African mammals showing up only in Europe are widely accepted, casting at least a shadow of doubt on the Sicilian route. “Most of these dispersals can be explained as migrations from the Near East,” says Jordi Agusti, a faunal expert at the Paleontological Institute in Sabadell, Spain.

    A big bang

    Whatever routes the first immigrants took into Europe, about 500,000 years ago a new species suddenly appeared, leaving substantial traces of its presence across the continent. At Mauer, near Heidelberg, Germany, where this hominid was first found in 1907, it left a massive mandible later dated to about 500,000 years ago. At Arago Cave, outside the town of Tautavel in southeastern France, its calling card was a partial skull some 400,000 years old and some substantial Acheulean stone tools. A similar skull, beautifully preserved and at least 300,000 years old, was found in the Petralona Cave in northeast Greece. And researchers at Boxgrove, on the Sussex coast of southern England, unearthed a 500,000-year-old hominid tibia and a cache of spectacular Acheulean hand axes. “Something happens at about 500,000 years ago,” says Roberts. “You get a big bang [of hominid occupation], and the geographical area covered is much greater.” After many millennia of sparse occupation, suddenly, “it's as though [early humans] said, ‘OK, now let's do things properly,'” says archaeologist Clive Gamble of Southampton University in the United Kingdom.

    Go with the flow.

    One view of how various human species might have dispersed in space and time.

    CREDIT: AFTER P. RIGHTMIRE

    Most researchers believe that this human was a different species from the more ancient Europeans. Its brain cavity ranged from about 1100 to 1300 cubic centimeters, compared to 1000 cubic centimeters or less for Asian Homo erectus, and it carried not simple Oldowan tools but Acheulean hand axes. Although opinions are divided as to what to call this European hominid, with some favoring H. heidelbergensis after its discovery in Germany and others simply calling it “archaic Homo sapiens,” many researchers believe that it represents an intermediate step between H. erectus and full-fledged modern humans.

    These new immigrants were able to firmly establish themselves in Europe where perhaps less hardy hominids had failed. “It seems that Homo heidelbergensis was better able to cope with fluctuations in climate,” says Gamble. One possible reason is that its Acheulean tool kit allowed it to be a better hunter than earlier humans, who may have survived primarily by scavenging. At Boxgrove, for example, hominid remains are associated with animal bones bearing cut marks and other signs of butchering. Spectacular support for this view may come from Schoeningen, Germany, where 400,000-year-old wooden spears—the oldest uncontested hunting weapons—were found together with the skeletons of more than a dozen horses. “This is Homo heidelbergensis at its best!” enthuses Iowa's Ciochon. “It had superb hunting skills far outpacing [those of] any hominid that had come before.”

    The Atapuerca team believes that H. antecessor gave rise to H. heidelbergensis, but other researchers are not so sure. Instead, the origins of H. heidelbergensis might be traced to similar-looking hominid fossils in Africa, including skulls found at Bodo, Ethiopia, dated to at least 600,000 years ago, notes Ciochon. If the African origin is correct, a possible route for the migration of H. heidelbergensis out of the continent may be suggested at the site of Gesher Benot Ya'aqov, on the banks of the Jordan River in Israel (Science, 14 January 2000, p. 205; 11 August 2000, p. 944). Although there are no human bones at Gesher, Acheulean hand axes and cleavers closely resembling those found in Africa clock in at 780,000 years old. “One can argue that this species evolved [in Africa] … and then spread quickly to western Eurasia” with the Acheulean tools, says SUNY's Rightmire.

    Many researchers also think that H. heidelbergensis later gave rise to the Neandertals, who first appeared in Europe about 250,000 years ago and whose ability to survive in the cold climates of the Pleistocene was unequaled. Indeed, the Atapuerca team believes that the Sima de los Huesos skeletons are a transitional form between H. heidelbergensis and the Neandertals. For example, Atapuerca team member Rosas compared more than 30 of the 300,000-year-old mandibles found at Sima with those from Neandertals and other earlier species of Homo, including H. erectus and H. heidelbergensis, and concluded in the January 2001 American Journal of Physical Anthropology that a number of their features—such as the shape of the chin and the arrangement of the back molars—are ancestral to the Neandertal fossils.

    But according to one leading theory of modern human origins, while this transition was taking place in Europe, the ancestors of modern humans—whatever species they were—remained in Africa. In this “Out of Africa” view, about 100,000 years ago, in one final explosion of migrations, modern humans began moving out of Africa, ultimately pushing the Neandertals and any other remaining hominids in the world aside. Some researchers strongly disagree with this scenario (see sidebar on p. 1728). But if it is true, the nearly 2 million years of hominid wanderings across Europe and Asia that preceded it merely set the stage for events to come.

  21. The Riddle of Coexistence

    1. Ann Gibbons

    Neandertals and modern humans lived side by side for thousands of years in Europe—with apparently dramatic consequences for each group

    Forget first contact with aliens. For real drama, consider close encounters of the human kind. Forty thousand years ago, for example, our ancestors wandered into Europe and met another type of human already living there, the brawny, big-brained Neandertals. Such a collision between groups of humans must have happened many times. Several early human species coexisted in Africa, and when our ancestors left Africa and spread around the globe, they probably came across other kinds of humans, such as Homo erectus, who had left Africa in a previous migration. But the European encounter with Neandertals was probably the last such meeting. And so it has proven to be irresistible terrain for anthropologists and novelists alike, who often explore the same themes, including the question of sex (see sidebar on p. 1726), and come up with similar endings to the story: Anatomically modern Cro-Magnons arrive, prevail, and abruptly wipe out the brute Neandertals.

    Cultural diversity.

    As modern humans and their sophisticated tools arrive from Asia (red), Late Mousterian tools made by Neandertals (black) persist in refugia in Europe and Asia. “Transitional” tools, perhaps made by both kinds of people (purple), also appear at this time.

    CREDIT: O. BAR-YOSEF/HARVARD UNIVERSITY

    The real story from the archaeological and fossil records, however, is far more interesting. It suggests that Neandertals were neither stupid nor easily driven to extinction. They vanished about 25,000 to 30,000 years ago, and many researchers think that they were indeed replaced, with little or no interbreeding, by modern humans—although that debate remains one of the fiercest in paleoanthropology (see sidebar on p. 1728). But a new look at archaeological sites throughout the Mediterranean region, as described in two recent books,* shows that the two groups coexisted in Europe for at least several thousand years and took turns occupying the same caves in the Middle East for much longer. Although modern humans had a clear technological and cultural advantage in Europe, they did not rout the Neandertals. There are no signs of war or rapid replacement. So far the evidence suggests that there was plenty of room for both groups for thousands of years, with competition for resources intensifying only as the climate worsened. “It was not a blitzkrieg,” says archaeologist Steve Kuhn of the University of Arizona in Tucson. Rather, fossils and artifacts show that Neandertals hung onto prime real estate in Europe before eventually splitting up into retreats in southern Italy, Greece, Iberia, and the hilly Balkans and Caucasus.

    Despite—or perhaps because of—the competition, this time of contact apparently stimulated both sides: Neandertals and moderns both reached new heights of cultural achievement, as represented by new styles of stone tools, ivory beads and body ornaments, cave art, and bone carvings. “When you get these new people moving into Europe, all sorts of cultural excitement is going on,” says Fred Smith, a paleoanthropologist at Northern Illinois University in DeKalb. “Both groups are trying out new styles of tools and culture.”

    There's no doubt, however, that the modern humans' lifestyle quickly surpassed that of the Neandertals. Soon after they arrived in Europe, the modern newcomers made barbed projectile points and bone needles, painted vivid scenes on cave walls, carved animals out of ivory, and adorned themselves with bone pendants. Meanwhile, although some Neandertals experimented with new technologies, they generally continued using much simpler artifacts. “If you look at this coexistence for several millennia, it is striking how limited the influence is on each other,” says paleoanthropologist Jean-Jacques Hublin of the University of Bordeaux in France. “It's not like one group was gradually digested by another one. They maintain their own identity for millennia.”

    Even with more advanced humans nearby, the Neandertals were “quite successful in surviving for quite a long time,” says Harvard University archaeologist Ofer Bar-Yosef. “They lost the war, but they were not dummies.”

    The birth of modern behavior

    All this is quite different from the classical view of Neandertals as inferior humans doomed by their innate limitations. Hublin recalls that as a graduate student in the 1970s he was taught a simple story: Neandertals made primitive stone tools, whereas modern humans made more sophisticated ones. Neandertal bones are often found with the thick flakes and hand axes known as the Mousterian tradition of the Middle Paleolithic, the period from 270,000 to 45,000 years ago. By contrast, Cro-Magnon people were found in Europe with sophisticated blades, slender-hafted spearheads and bone tools, and body ornaments that are hallmarks of the Aurignacian tradition of the Upper Paleolithic, the period from 45,000 to 10,000 years ago. The Aurignacian is the first Upper Paleolithic tradition in Western Europe and is famous for its artwork, such as the stunning cave paintings seen in France's Grotte Chauvet (Science, 12 February 1999, p. 920), which are often cited as evidence of fully modern behavior. Meanwhile, the Neandertals buried their dead, hunted the same game, and exploited the same small animals and plants as the neighboring modern humans did. But overall they behaved like earlier Neandertals, who had survived the harsh climate of Europe since the onset of the last interglacial 127,000 years ago, relying on physical strength and Mousterian tool kits to survive.

    The timing of the transition from simple to sophisticated artifacts coincided with what was then thought to be the disappearance of the Neandertals. So, many anthropologists concluded that the Neandertals in Europe had in fact quickly evolved into the more advanced modern humans, such as Cro-Magnon. This view was supported by tool assemblages that seemed to be “transitional industries” between the Mousterian and Upper Paleolithic. “There was this linear view of human evolution, with Neandertals as the ancestors of modern humans,” says Hublin. Now, as he tells his students: “This glorious march of the hominids is completely wrong.”

    The first major blow to this long-standing view came in the late 1980s, when the remains of anatomically modern humans from caves at Qafzeh and Skhul in Israel were dated by modern radiometric methods to 92,000 to more than 100,000 years old. That's 40,000 years before Neandertals inhabited the neighboring cave of Kebara, only 100 meters away from Skhul. Clearly, modern people could not have evolved from these Neandertals. And the anatomically modern people here behaved just like the Neandertals—they used the same tool kits, hunted the same wild oxen and deer, and exploited the same small animals and plants. Both groups buried their dead.

    But then, 50,000 to 40,000 years ago, anatomically modern humans began to act modern in many places, first in Africa, then in the Levant and Europe. By 45,000 years ago, modern humans in the Levant had the sophistication to retouch, or correct, the stone points they made to put on the tips of spears. Beads appeared about 42,000 years ago in Africa and southeast Turkey, says Arizona's Kuhn. “We begin to see these so-called transitional industries that combine some old and new features,” says Bar-Yosef, who thinks these tools in the Levant are the handiwork of the ancestors of moderns who later brought their techniques to Europe and refined them.

    It is in Europe where the technological and cultural revolution reached its height, starting 40,000 years ago when modern humans began to enter the continent from western Asia. Once in Western Europe, they underwent a creative explosion, and suddenly new so-called “transitional” industries came and went. But it was apparently not only the Cro-Magnon whose abilities flourished in Europe at this time: More and more evidence now suggests that the Neandertals advanced, too. “The Aurignacian is widely agreed to be made by moderns,” says Harvard University paleoanthropologist David Pilbeam. “At issue is who made the transitional industries.”

    Transitional tools

    All the transitional technologies are close in time to the arrival of modern humans. So if the Neandertals made some of those technologies, it might suggest some sort of response to the modern human invasion, whether direct copying, more subtle imitation—or even a honing of their own abilities in the face of new competition for resources, says Kuhn.

    And there is at least one strong case for Neandertal sophistication: the artifacts of the Châtelperronian tradition in France. These are a mix of Mousterian and Upper Paleolithic stone tools and include grooved teeth, bone and shell pendants, beads, and body ornaments. This 35,000- to 40,000-year-old culture was long thought to be the work of modern humans making the transition from Mousterian to Aurignacian. Then in 1979, Neandertal bones were found in a layer with Châtelperronian tools. Now, says Hublin, it seems Neandertal artisans made Châtelperronian artifacts just about the time that Cro-Magnon people invaded the region. There is fierce debate over whether Neandertals imitated the nearby modern humans or invented this culture independently (Science, 20 November 1998, p. 1451). Regardless, the timing implies contact: “It was clear that Neandertals in some spots survived the arrival of modern humans,” says Hublin. “They were the makers of some of these transitional industries, not the modern humans.”

    And some archaeologists now argue that the Châtelperronian is not unique. Other sites in Italy, Greece, and central Europe, where the most complete archaeological trail exists for this time, indicate that Neandertals were not just copycats but were experimenting with more modern tools and behavior. “Everyone believed in the sequence of Mousterian, Châtelperronian, Aurignacian,” says archaeologist Janusz Kozlowski of Jagellonian University in Cracow, Poland. “Now we know the transitional technologies are much more rich.”

    More than 20 different technologies, for example, have been identified in archaeological sites in central Europe. No one knows who made these tools 45,000 to 30,000 years ago, because no diagnostic human remains are associated with them. But Kozlowski argues that at least two of the cultures, the Bohunician and the Szeletian, may have been made by Neandertals. The Bohunician is early—45,000 years ago in Moravia—and the blades, although modern, retain many of the steps from a late Mousterian style of producing flakes from a stone core, combining old methods with new blade production. These ancient people were experimenters: A bit later, they used methods more like those of the Upper Paleolithic to haft blades. “This means these people were able to produce tools according to different sequences of movement,” says Kozlowski. “This flexibility shows more complex cognitive abilities.”

    The other technology with possible Neandertal ties, the Szeletian, was found with a Neandertal mandible in the Szeleta Cave in Hungary. Although that is not proof that Szeletians were Neandertals, the Szeletian may well have been the work of some of the last Neandertals, says Northern Illinois's Smith. And these toolmakers apparently had contact with the Aurignacians. Szeletian tools have been found at Aurignacian sites in Slovakia, and Aurignacian bone points have been found at Szeletian sites, although there is no evidence as to whether the contact was friendly, hostile, or direct.

    Retreating south

    There are also signs that Neandertals were aware of modern humans' presence in Italy, where three distinct traditions coexisted between 40,000 and 30,000 years ago. The early Aurignacian (characterized by many little bladelets, rather than large blades) appeared in northern Italy almost 37,000 years ago. At that time, the late Mousterian tools began to include more blades, and the Mousterian people—the Neandertals—moved out of northern Italy, says Arizona's Kuhn, who has worked at Italian sites. And another new transitional technology, known as the Uluzzian, appeared in central and southern Italy. There are no diagnostic human remains linked with this culture, but the Uluzzian is completely different from the Aurignacian and at some sites predates it; thus, Kuhn thinks it is the handiwork of Neandertals.

    This pattern, of extended coexistence and slow movement south into refuges by the Neandertals, is best documented in Iberia by a trail of Mousterian and Aurignacian tools. The Aurignacian tools appeared about 36,500 years ago and coexisted with Mousterian artifacts for some time. But the Mousterian tools persisted in Iberia—and are even associated with Neandertal remains in Columbeira and Figueira Brava near Lisbon—until 30,000 to 28,000 years ago, making Iberia one of the last Neandertal holdouts, says archaeologist João Zilhão of the Portuguese Institute of Archaeology in Lisbon.

    The record from these far-flung sites taken together suggests that the replacement of the Neandertals was slow (see map above). At first, they remained entrenched in their homelands, but eventually they moved into southern Italy and Iberia and into the Balkan and Caucasus mountains. Something—either modern humans coming from the north or a climatic cooling during this time—prompted them to give up their “home field advantage,” says Kuhn. “When the new people got there and started eating the same foods, life became a little more uncertain; there was a little competition for who got the game first.” And he thinks that competition for resources—not necessarily direct contact—could have spurred the Neandertals, who had been living the same way for millennia, to make changes: “I think that when times got hard, Neandertals came up with better tools. It shows what they were capable of when you pushed them.”

    This competition, says Kuhn, may have inspired changes in “the other guys, too”—modern humans. “Suddenly, they ran into these really intransigent locals. It would have affected them, too,” says Kuhn. He notes that the most complex Paleolithic art and culture appeared only in Europe, although modern humans emerged earlier in Africa, the Middle East, and Australia. He speculates that this flowering of culture was in part a reaction to competition with another kind of human.

    Eventually, the Neandertals disappeared, perhaps because they were unable to rebound when the climate turned frigid starting 28,000 years ago and competition for prime land became harsher, speculates Bordeaux's Hublin. Moderns had some subtle advantage—perhaps slightly better language or abstract reasoning skills, or even a shorter interval between births of babies—that meant the difference between survival and extinction, say Pilbeam and Bar-Yosef.

    But not everyone is buying the idea of a long coexistence, with Neandertals as resourceful experimenters. Stanford archaeologist Richard Klein, for example, says that most Neandertals simply couldn't match the symbolic sophistication of the moderns. He considers the Châtelperronian “the only compelling indication of overlap” between moderns and Neandertals. And he warns that many of the sites of the “transitional technologies” and of the latest persistence of Neandertals have dating problems. The period from 60,000 to 30,000 years ago is at the limits of radiocarbon dating, so the resulting dates can easily be skewed by tiny amounts of recent carbon contamination, causing errors of 5000 to 40,000 years—enough to make it seem that Neandertals and moderns coexisted far longer than they really did.
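
    The arithmetic behind that warning is easy to sketch. The back-of-the-envelope Python calculation below—using the standard exponential decay of carbon-14 with illustrative numbers, not data from any of the disputed sites—shows how even 1% modern-carbon contamination makes a genuinely 40,000-year-old sample look thousands of years too young.

        import math

        # 14C mean life in years, from the 5730-year half-life (a simplification;
        # conventional radiocarbon ages actually use the older Libby half-life).
        MEAN_LIFE = 5730 / math.log(2)  # ~8267 years

        def fraction_remaining(true_age):
            """Fraction of the original 14C left in a sample of the given true age."""
            return math.exp(-true_age / MEAN_LIFE)

        def apparent_age(true_age, modern_fraction):
            """Apparent age after mixing in a small fraction of modern carbon,
            whose 14C content is taken as 1.0 (fully modern)."""
            measured = (1 - modern_fraction) * fraction_remaining(true_age) + modern_fraction
            return -MEAN_LIFE * math.log(measured)

        # A 40,000-year-old sample contaminated with just 1% modern carbon:
        print(round(apparent_age(40000, 0.01)))  # ~33,300 -- nearly 7000 years too young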

    Bar-Yosef responds that dates from dozens of sites show “a clear geographic pattern” indicating long periods of overlap. “If you go one by one, you can find problems with individual dates, but the general trend isn't going to change,” he says. Nonetheless, he agrees that an international effort to redate key sites is needed to nail down precisely when this last transition from one type of human to another took place. “What happened with these two populations happened over and over again,” notes Pilbeam. “It's just that this interaction was recent enough that we are able to detect it.”

    • * O. Bar-Yosef and D. Pilbeam, The Geography of Neandertals and Modern Humans in Europe and the Greater Mediterranean (Peabody Museum of Archaeology and Ethnology and Harvard University, Cambridge, Massachusetts, 2000).

  22. Pre-Clovis Sites Fight for Acceptance

    1. Eliot Marshall

    A handful of archaeological sites across the Americas are modern-day battlegrounds, where iconoclastic researchers struggle to prove their claims of very ancient peopling of the Americas

    COLUMBIA, SOUTH CAROLINA—Al Goodyear sees himself as a conventional archaeologist—or he did until a couple of years ago. He works in the state archaeologist's office here, has a faculty position at the University of South Carolina just across the street, and is known as an expert on Paleo-Indian artifacts in the Southeast. But Goodyear says his career took a radical turn after he began to explore a site called “Topper.”

    As Goodyear tells it, in May 1998 his team got flooded out by the Savannah River about 137 kilometers southwest of Columbia. Retreating to higher ground, he led his volunteers, who had paid about $300 a week to participate, up a sandy hillside to the site of a previous excavation, named after forester David Topper, who pointed it out. Here, Goodyear and his volunteers dug deep, beyond a depth that conventional wisdom regarded as sensible, below the “Clovis level” that marks what archaeologists have long considered the first human occupation of the Americas (see sidebar on p. 1732).

    Gregarious and affable—known to friends as the state pork barbecue champion of 1977—Goodyear was not looking for controversy, and he says he expected no traces of human activity at this depth. Like many of his peers, he believed that the Clovis big-game hunters, with their distinctive fluted spear points, were the first to arrive in the Americas, about 12,000 years before the present (BP).* However, Goodyear knew that supposed pre-Clovis artifacts had been found at more than half a dozen North and South American sites, and he decided it would be “irresponsible” not to look at this one. He explains: “I hadn't done it before, because you don't look for what you don't believe in.”

    Goodyear was “shocked” by what his volunteers began to unearth from below the Clovis level: small blades of chert, chiseled stone “burins” or needles—possibly for decorating bone—and other fragments. His team found no biface tools or charcoal for dating, which would make the artifacts more convincing. But Steven Forman of the University of Illinois, Chicago, dated the sand just above these microliths by optically stimulated luminescence to an age of about 15,500 calendar years or a radiocarbon date of about 13,000 years BP, says Goodyear—making them clearly pre-Clovis. These findings haven't been published, but skeptics tend to accept Goodyear's dates and geology; what they question is whether the stone pieces were made by humans.

    Suddenly Goodyear had crossed the line, challenging the orthodoxy of North American archaeology. It's a weird feeling, he says, to be “putting a career of 30 years on the table.” But there's no turning back.

    Topper's visibility—the site has been mentioned already in four national magazines—has added momentum to the pre-Clovis movement, but Topper is hardly unique. Today, advocates of pre-Clovis immigration can cite a string of evidence—ranging from tools from the 16,000-years-BP-or-older Bluefish Caves in Canada's Yukon Territory, to a 12,500-years-BP dwelling at Monte Verde in southern Chile—that points to a very early human presence (see map). Although none of these sites might be persuasive on its own, taken as a group they appear to be winning converts. Says archaeologist David Meltzer of Southern Methodist University in Dallas, Texas: “The gates have been thrown open” to new ideas.

    The pre-Clovis trail.

    Excavators seek evidence of very ancient Americans in artifacts from Cactus Hill, Topper, Meadowcroft, and other sites.

    CREDIT: PHOTO: E. MARSHALL

    All the same, Goodyear and his fellow iconoclasts have not won full acceptance. Leaders in the field remain skeptical, noting that the evidence from pre-Clovis sites is patchy and uneven, unlike the powerful stone record of the Clovis people. “I've been looking at this for 40 years,” says C. Vance Haynes Jr., the eminent University of Arizona, Tucson, archaeologist. He finds it hard to accept that it is just “a coincidence” that the Clovis evidence lies atop layers that at “site after site” contain no trace of humans. He and other skeptics have challenged pre-Clovis finds, questioning everything from dates to stratigraphy. A close look at a few of the most important and controversial sites illustrates why it is so difficult to prove very ancient occupation—and why the peopling of the Americas remains an open question.

    Monte Verde and beyond

    The most accepted pre-Clovis site—although it still has skeptics—is Monte Verde, in south-central Chile. It took 2 decades for it to be recognized, and its principal investigator, Tom Dillehay of the University of Kentucky, Lexington, campaigned hard to win converts. His work centers on what appears to be an ancient dwelling in an upland bog 56 kilometers from the Pacific coast. Beside a small creek, Dillehay and his group unearthed the remains of several primitive structures, stone and wood implements, fire pits, and chewed plant cuds. The quantity of evidence is massive, but the carbon dates proved controversial: Some reviewers balked at dates of at least 12,500 years BP—long before the Clovis people set foot in North America.

    In 1997, Dillehay invited a panel of well-known archaeologists to the site, handing each of them a bulky site report published by his sponsor, the Smithsonian Institution. The members responded with a unanimous vote of confidence (Science, 28 February 1997, p. 1256). Even Haynes, who felt he was included as the odd man out on the panel, accepted the early date.

    That acceptance, according to Meltzer, “broke the logjam” of skepticism about pre-Clovis dwellings. It also helped that Clovis-contemporary or pre-Clovis sites have popped up in Venezuela and Brazil (Science, 19 April 1996, pp. 346, 373). Considering all the evidence, Meltzer adds, “it's striking that there's so much material at 11,000 years BP in South America; it suggests that people had been there a long time.”

    But even Monte Verde has been challenged again. In 1999, archaeologist Stuart Fiedel, a pre-Clovis skeptic at the consulting firm of John Milner Associates in Alexandria, Virginia, blasted the quality of Dillehay's site report in a long critique published in the popular journal Discovering Archaeology. Fiedel found many glitches, noting for example that key artifacts were described as being unearthed in different locations (Science, 22 October 1999, p. 657). Although Meltzer and others say Fiedel's review was nitpicking and unfair, it had an impact. Haynes again began to raise questions about whether the artifacts might be younger objects mixed with older material and animal bones in a flood of glacial water.

    Meadowcroft

    If Monte Verde has earned respect, although not wholehearted acceptance, another veteran site, the Meadowcroft Rock Shelter in western Pennsylvania, still struggles for recognition. Its fate—3 decades of bitter argument over its antiquity and credibility—illustrates precisely what Goodyear would like to avoid at Topper.

    Perched on an outcrop of sandstone over Cross Creek, a tributary of the Ohio River, this shelter was a popular camping spot for people exploring America's East Coast, says principal investigator James Adovasio, director of the Mercyhurst Archaeological Institute at Mercyhurst College in Erie, Pennsylvania. Meadowcroft, enclosed at the back, has a commanding view of the landscape, access to fresh water, a high roof that allows smoke to escape, a southern exposure for warmth, and a floor that stays dry all year, 15 meters above the nearby creek. And radiocarbon dates from the deepest occupation level are more than 19,000 years old—far older than the Clovis time barrier.

    Adovasio, meticulous and feisty, claims he didn't go looking for an ancient site. He says he came to Meadowcroft in the early 1970s because it was a good place to train students. At first, says Adovasio, “I guessed human occupation might go back to 3500 or 4000 years BP.” But the cultural debris went much deeper. Adovasio's team identified 11 floor layers, with clay and shale at the very base, 4.6 meters down. “Jim can be quite a taskmaster,” says Meltzer, and indeed, the rock shelter's immaculate interior—now protected by a wooden structure and wired with floodlights—is dotted everywhere with labels.

    In soil removed from the site, researchers found 20,000 stone flakes and objects, 150 fire pits, 21 refuse or storage pits, 1 million animal remains, and 1.4 million plant remains. By 1975, Adovasio's group had released a string of 17 radiocarbon dates associated with the materials; today, 52 dates have been published. Adovasio says they line up in elegant order, the oldest in sterile clay and shale at the bottom (31,000 years BP) and the youngest at the top (1000 years BP). There are only four “reversals” in the column, points where a piece of material has a date that's out of sequence with its location in the soil. All four such flip-flops have dates of less than 6000 years ago.
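
    A “reversal” of this kind is simply a date that is out of order relative to its position in the column. As a minimal illustration—with invented depth/age pairs, not Adovasio's actual data—flagging such flip-flops takes only a few lines of Python:

        # Hypothetical (depth in meters, radiocarbon age in years BP) pairs,
        # listed from the top of the column to the bottom. In an undisturbed
        # column, ages should only increase with depth.
        column = [(0.5, 1000), (1.2, 3400), (1.9, 3100),  # <- a reversal
                  (2.8, 8500), (3.7, 16000), (4.6, 31000)]

        def find_reversals(samples):
            """Return the samples dated younger than the sample directly above them."""
            return [samples[i] for i in range(1, len(samples))
                    if samples[i][1] < samples[i - 1][1]]

        print(find_reversals(column))  # [(1.9, 3100)]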

    But as soon as the older dates were announced, Adovasio says, they drew a “barrage of criticism.” No critic has been more persistent than Haynes. Today, after decades of trading salvos with Adovasio, Haynes and others still have reservations. They point out that Adovasio has not published a final site report laying out the stratigraphy, the precise source of each artifact, and the associated dates. Adovasio says that this formal tome is still in progress, but that he and his colleagues have already answered questions in “thousands of pages” in 85 reports.

    As at Monte Verde, radiocarbon dates have been a focal point for criticism. Early on, Haynes and others suggested that the Meadowcroft samples might be contaminated with coal particles or dissolved carbon in groundwater, tipping the results toward older dates. The scenario is plausible, because the area was once strip-mined for coal. Noting that soluble carbons removed from a sample before testing were older than the residue itself, Haynes suggests they were carried in by water.

    Clovis or “Clovis-lite”?

    Very ancient stone spearheads (bottom) resemble Clovis points, only simpler.

    CREDIT: COMMONWEALTH OF VIRGINIA, DEPARTMENT OF HISTORIC RESOURCES

    Adovasio responds angrily that the nearest coal seam is nearly a kilometer from the shelter and that every sample was checked for coal. Just two of 11 samples had unusually ancient soluble fractions of carbon, Adovasio says, calling it an anomaly. For 2 decades he has dismissed what he now calls “pathological” skepticism about the carbon dates, saying there's no sign of water intrusion. In 1999, he was vindicated by an independent investigator, geomorphologist Paul Goldberg of Boston University. After microscopically inspecting 25 samples from six layers at Meadowcroft, Goldberg and his colleague Trina Arpin concluded that “no signs of groundwater activity could be seen.”

    Haynes, nevertheless, continues to say that the best way to settle all this is to get carbon dates on a few remaining items: a nutshell and some seeds. Adovasio isn't interested. He says he informed Haynes in the taverna at the Monte Verde meeting 3 years ago: “I will never run another date you have asked me for, because, since 1974, we've addressed every criticism anyone has raised. I've spent half my life on this.” To Adovasio, the case is closed.

    Cactus Hill: Racing the bulldozer

    While critics wait for Meadowcroft's site report, they can pore over another tome: a formal report published by the state of Virginia on a site called Cactus Hill, about 72 kilometers from Richmond. Cactus Hill is only the second major East Coast site whose pre-Clovis artifacts have been well documented and the only one for which a full site report is available. The report was written by lead investigator Joseph McAvoy and his wife, Lynn, who run a private consulting firm called the Nottoway River Survey, with an appendix by a third investigator, Michael Johnson of the Archaeological Society of Virginia in Fairfax. The two teams maintain a competitive joint tenancy at the site without collaborating.

    In 1989, McAvoy says, he learned about Cactus Hill from a farmer who noticed a fluted point in sand dumped on a roadway. In 1992, Johnson arrived, led by another amateur collector. Word got out. Often, says Johnson, “we have to run in front of bulldozers”—and in this case, he and the McAvoys also had to run after looters.

    Cactus Hill, a gently sloping ridge 100 meters east of the Nottoway River, gets its name from the yellow-flowered prickly pear that covers it in summer. Windblown sand piled up at this spot over many millennia, according to McAvoy. This must have been a great camp, he says—high and dry, with a view toward the river and a now-vanished pond visited by waterfowl and deer. When archaeologists arrived in 1989, commercial sand haulers had done some damage. Prehistoric stone points were tumbling out of the side of the hill. Quarrying stopped, and the two teams began working at opposite ends of the ridge. In visits over the next few years, the McAvoys recovered 500,000 stone chips and more than 600 “diagnostic” artifacts that can be linked to specific cultures such as Clovis.

    The hill yielded a mixed treasure. Near the top, excavators found traces of the British colonial period, including tobacco pipe stems, scissors, and a 1696 sixpence. At depth, they found layers containing projectile points of successively greater age, including some of the fluted Clovis type. At the lowest level just above sterile clay, they found a scraper, small stone blades, and a quartzite core from which blades were struck. Charcoal from this layer suggests its age is at least 15,000 years BP. In addition, the McAvoys found two unusual small stone points in a deep but undated layer, shaped in a style sometimes called “Clovis-lite.” In the official 1997 report, McAvoy admits that these could be Clovis points that were whittled down by heavy use but argues that they are more plausibly relics of a pre-Clovis culture. Johnson, for one, is adamant that they're old: “There's no way in hell these points are Clovis or post-Clovis,” he says.

    Reviewers who have visited the site are concerned mainly with validating the dates and sorting out the layers. Haynes, for example, worries that disruption by roots, animals, or looters could have pushed old charcoal into layers with young artifacts. And, because different samples from the same layer have been given different ages, he worries that researchers may be selecting favorable dates. McAvoy says that Haynes knows that the anomalous dates were from samples that outside experts, such as archaeobotanist Lucinda McWeeney of Yale University, judged to be intrusions of younger plants that burrowed down. There's no evidence that older material was pushed upward, McAvoy says.

    Whatever the dates, Haynes is nevertheless impressed by the old points. He even offers a semantic concession, saying they could be “proto-Clovis artifacts” made by people who hadn't yet mastered the art of fluted points.

    Similar tales of claim and criticism are playing out at other pre-Clovis sites. In Wisconsin, David Overstreet of Marquette University in Milwaukee aims to prove that stone tools and mammoth bones with cut marks are really as old as 13,500 years BP. Anna Roosevelt of the University of Illinois, Chicago, has been challenged on her reports of an 11,000-years-BP site at Monte Alegre in Brazil's Amazon. Jon Erlandson of the University of Oregon in Eugene is exploring artifacts in the 10,500-years-BP Daisy Cave on a channel island off the California coast.

    Meanwhile, the McAvoys have already invested more time and money defending their dates and conducting new tests than they had ever imagined, says Joe McAvoy. And although he thinks that the antiquity of the site has been established, the effort has been stressful. If he were to do it all over again, he sometimes thinks, he wouldn't dig so deep.

    • * Dates in this story are corrected radiocarbon dates.

  23. Clovis First

    1. Eliot Marshall

    As some investigators struggle to prove the great antiquity of a smattering of archaeological sites in the Americas (see main text), they face a powerful and entrenched theory. For decades, archaeologists have agreed that the first to discover the Americas were the Clovis hunters, who crossed a land bridge from Siberia to Alaska and chased game south into the Great Plains. Before that, the theory goes, “there was nobody home” in either North or South America, as archaeologist Al Goodyear of the University of South Carolina, Columbia, puts it.

    The timing of the Clovis people's journey is pinned down by the melting of the great glaciers of the last Ice Age. The Clovis people might have trekked through a gap in the glaciers just east of the ice-covered Pacific coastal mountains and south of the arctic ice, to the Great Plains (see map). But they couldn't have gone very far south before the ice melted to open a path. Carbon dating of plant material in the glacier's path indicates that the gap probably did not open earlier than 13,000 years before the present (BP), says Arthur Dyke, a glacier expert at the Geological Survey of Canada in Ottawa. By 12,000 years BP, the path clearly was open.

    Skirting the ice.

    Clovis hunters must have arrived in North America after the ice melted on a path (light green) south from Alaska.

    CREDIT: A. S. DYKE/ NATURAL RESOURCES CANADA

    It is no coincidence, many archaeologists say, that widespread evidence of humans in the Americas appears just after this time. About 14,000 distinctive “fluted” stone points, typified by those found in Clovis, New Mexico, in 1932, have now been found at hundreds of sites across North America. The oldest are dated at 11,800 years BP, says C. Vance Haynes Jr., an expert at the University of Arizona, Tucson, who defined the Clovis culture. The Clovis imprint is so powerful that Haynes and many others insist that these hunters were the first people in the Americas.

    Given the glacial obstacles, those arguing for pre-Clovis settlement must explain how such people arrived. They almost certainly could not have crossed the ice. Other theories hold that they might have traveled from Asia or even from Europe, moving from point to point along the coast in primitive boats (Science, 19 November 1999, p. 1467). Unfortunately, evidence of such a passage may now lie under water and at the moment offers little data to advance the pre-Clovis argument.

  24. Tracking the Sexes by Their Genes

    1. Elizabeth Pennisi

    By comparing data from maternally and paternally inherited DNA, researchers are finding that in our ancestral populations, men and women didn't always travel together

    Men aren't really from Mars, nor women from Venus, but even though the two sexes hail from the same planet, they may not share a common homeland. Humans have been on the move for nearly 2 million years, expanding into new areas or retreating to warmer climes, and it would be reasonable to assume that men and women moved together. But molecular anthropologists tracking these ancient travelers by the trails left in their descendants' DNA are finding a surprise: striking differences in how the two sexes traveled about parts of this planet.

    By analyzing and comparing DNA passed on only through either the maternal or paternal lines, researchers can track down each gender's homelands and even trace their movements back hundreds of generations. Much more work needs to be done to ensure that the apparent differences between male and female migrations are real, but a few patterns are already emerging. “It's one of the most exciting things going on right now in [the study of] human evolution,” says David Goldstein, a geneticist at University College London (UCL). In some cases, the genes reveal how male explorers or warriors carried their genomes to distant places. But surprisingly, in general females seem to have stirred the genetic melting pot by dispersing their DNA more widely than their brothers dispersed theirs—perhaps as a result of thousands of years of moving to join their husbands' clans.

    This research took off about 15 years ago, when researchers began studying DNA found in a cellular organelle called the mitochondrion. This mitochondrial DNA (mtDNA) is passed only from mother to child and escapes the mixing and matching that goes on between pairs of chromosomes. By comparing sequence differences among living people, researchers can build family trees that help determine the ancestral group—and the ancestral homeland—of a population. Moreover, based on an estimated average mutation rate, they can get a sense of how long ago a group moved away from its native land. And in the past 5 years or so, molecular explorers have begun probing the Y chromosome, which is found only in men, constructing similar histories traced back through the male line (see Viewpoint on p. 1738).
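
    The molecular clock arithmetic behind such estimates is simple in outline: count the differences between two aligned sequences, divide by sequence length, then divide by twice the mutation rate, because changes accumulate along both lineages. Here is a minimal Python sketch of that logic; the sequences and the rate are invented for illustration, not drawn from any study described here.

        # Molecular-clock sketch: estimate the time since two maternal
        # lineages diverged from the differences between their sequences.
        # The sequences and the mutation rate are illustrative assumptions.

        def pairwise_differences(seq_a: str, seq_b: str) -> int:
            """Count sites at which two aligned sequences differ."""
            assert len(seq_a) == len(seq_b), "sequences must be aligned"
            return sum(a != b for a, b in zip(seq_a, seq_b))

        def divergence_time_years(seq_a: str, seq_b: str, rate: float) -> float:
            """rate = substitutions per site per year, on each lineage."""
            per_site = pairwise_differences(seq_a, seq_b) / len(seq_a)
            return per_site / (2 * rate)  # factor of 2: both lineages mutate

        a = "ACCTGATTACGGATTCACGGTACCATTAGGCCTAATCGAT"
        b = "ACCTGATTACAGATTCACGGCACCATTAGGCCTAATCGAT"  # differs at 2 sites
        print(f"{divergence_time_years(a, b, 1e-7):,.0f} years to common ancestor")

    Real analyses must also correct for sites that mutate more than once and for the sizable uncertainty in the rate itself, which is why such dates come with wide error bars.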

    Now researchers are putting both kinds of analysis together. For example, in one of the first such comparative studies, Svante Pääbo, a geneticist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, and his colleagues compared molecular markers in groups of Bedouins in the Sinai. Current Bedouin custom dictates that the wife join her husband's group, and in 1996 the DNA studies verified that this so-called patrilocal social structure had indeed been followed faithfully for centuries. Because women and their mtDNA moved about, the mtDNA is well mixed and diverse. “The mtDNA of the Sinai tribes has as much variation as that of all the people in the Nile,” Pääbo explains. But the genetic signatures in the Y chromosome data clustered into distinct geographic groups. Thus the study showed that living people's genes can offer a glimpse into the population structure of their ancestors.

    Women on the move

    Pääbo's results were widely accepted. But the following year, a study by Mark Seielstad, then a biology graduate student at Harvard University, and his collaborators kicked off an intense debate. Seielstad, now an evolutionary geneticist at the Harvard School of Public Health in Boston, looked for similar patterns on a much broader scale. He and his colleagues used an expanding set of Y chromosome studies of native Africans and Europeans that he and L. Luca Cavalli-Sforza's group at Stanford University in Palo Alto, California, had compiled, together with existing mtDNA data gathered over the years by various investigators. The researchers quantified the degree to which different geographic groups shared Y chromosome and mtDNA markers. Their bold conclusion: Over thousands of years, it was women who had dispersed genes most widely, not men as many had assumed (Science, 31 October 1997, p. 805).

    “There was demonstrably greater geographic structure to the Y chromosome [data] than to [the] mtDNA [data],” Seielstad explains. In one analysis, only 35% of the Y chromosome markers were shared among populations, and men from each location tended to have their own unique markers. That was much less true of mtDNA markers, leading Seielstad to suggest that over the past several thousand years, men have tended to have children near their own birthplaces. In contrast, women were the movers, if not the shakers, of traditional societies, eventually spreading their genes even to different continents. That implies that patrilocal societies like the modern-day Bedouins were common in human ancestry and have dominated during the past several millennia, most likely emerging with the growing dependence on agriculture, says Seielstad.
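
    At bottom, the comparison asks how much of the total genetic diversity lies among populations rather than within them. The toy Python calculation below makes that logic concrete; the haplotype labels are invented, and this crude G_ST-style statistic is only a stand-in for the kind of analysis involved, not Seielstad's actual procedure.

        # Toy measure of geographic structure in paternal vs. maternal markers.
        # All haplotype data below are invented for illustration.
        from collections import Counter

        def gene_diversity(haplotypes):
            # 1 minus the sum of squared haplotype frequencies.
            n = len(haplotypes)
            return 1.0 - sum((c / n) ** 2 for c in Counter(haplotypes).values())

        def gst(populations):
            # Share of total diversity that lies among populations.
            h_s = sum(gene_diversity(p) for p in populations) / len(populations)
            h_t = gene_diversity([h for pop in populations for h in pop])
            return (h_t - h_s) / h_t

        y_data  = [["Y1"] * 9 + ["Y2"], ["Y3"] * 8 + ["Y4"] * 2, ["Y5"] * 10]
        mt_data = [["M1", "M2", "M3"] * 3 + ["M4"],
                   ["M2", "M3", "M4"] * 3 + ["M1"],
                   ["M1", "M3", "M4"] * 3 + ["M2"]]

        print(f"Y-chromosome structure: {gst(y_data):.2f}")   # high: men stayed put
        print(f"mtDNA structure:        {gst(mt_data):.2f}")  # low: women moved

    In this invented example the Y haplotypes are nearly private to each population while the mtDNA haplotypes are shared everywhere, so the statistic comes out high for the Y and near zero for mtDNA, the pattern Seielstad reported.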

    “We have an image of men being more mobile,” Seielstad explains. “That happens, but prior to global exploration, what was more significant, long-range migration of males or day-to-day migration of women? My feeling is that the latter is more important.”

    With these words, Seielstad threw down the gauntlet to his colleagues. Some applauded his study for its innovative use of the two types of genetic data. But critics felt, as population geneticist Lynn Jorde of the University of Utah in Salt Lake City complains, that “[Seielstad and his collaborators] went way too far in generalizing this to a worldwide pattern.” The skeptics also note that mtDNA and Y chromosomes mutate at different rates and are passed along at different frequencies, and Seielstad studied these markers in different-sized populations. Thus, he was “in danger of looking at apples and pears,” says Matthew Hurles, a population geneticist at the McDonald Institute for Archeological Research in Cambridge, United Kingdom.

    Others pointed out that even if the pattern is real, there could be other explanations for it. If, for example, a few men had most of the children, then their Y chromosomes would dominate, making it seem as though few new migrants with different Y chromosomes had entered the population. Recognizing these limitations, Seielstad is now conducting two projects that he hopes will help clarify the effect of polygyny and other marriage mores on the diversity seen in mtDNA and Y chromosome data.

    In one, with Daoroong Kangwanpong of Chiang Mai University in Thailand, he is obtaining genetic material from a region where tradition calls for the husband to move in with the wife and her parents. To date they have gathered only Y chromosome data, but as Seielstad predicted, Y chromosome markers from this population are more broadly distributed than those from a nearby region where patrilocality is the rule. Seielstad is also analyzing data collected from several regions of China, looking for the DNA traces of migrants into various communities, whose molecular markers should differ from those of their neighbors. If his hypothesis is correct, more of those migrants will turn out to be women than men.

    Seielstad's proposal also sent some of his critics scrambling back to their labs to see what their own data had to say about whether the pattern really is worldwide. Some of these efforts do support Seielstad's conclusions. Hurles and his colleagues have looked at single base changes in noncoding regions of both the mtDNA and the Y chromosome of some 25 groups in Europe, gathering both types of genetic data from the same populations and thus addressing a key methodological criticism of Seielstad's study. Their findings so far are “pretty consistent” with Seielstad's, “[even] though we're looking at different loci and different rates,” says Hurles, who has yet to publish these results. The same pattern turns up in Asia, adds Hiroki Oota of the University of Tokyo and the Max Planck Institute for Evolutionary Anthropology, who compared his data on the mtDNA of 280 women from China, Vietnam, and Japan with Y chromosome data collected by others.

    But other researchers say the pattern is not the worldwide trend Seielstad suggested. It doesn't appear in South America, for example, contend Natalia Mesa and Andrés Ruiz-Linares of the University of Antioquia in Medellín, Colombia, who examined both mtDNA and Y chromosome markers in five Native American populations in Colombia and reviewed existing genetic data on South Amerinds. As they reported in the November 2000 American Journal of Human Genetics, they saw roughly equal distributions of diversity in the Y chromosome and mtDNA, suggesting that neither matrilocality nor patrilocality has predominated. Their newer work looking at people from Central and North America yields the same result. “We don't see what Seielstad and Cavalli-Sforza reported,” says Ruiz-Linares, who is now based at UCL.

    Men from afar

    Although the Y chromosome data suggest that men were often geographically restricted, they also reveal some cases in which men were the travelers. For example, for almost 25 years, anthropologists have debated the origins of a group of South Africans called the Lemba. Although the Lemba share the language and looks of the local Bantu people, they also have customs and folklore suggestive of Jewish origins. Early analyses of blood groups and mtDNA detected no difference between the Lemba and other Bantu, but a cursory look at Y chromosome markers hinted at Jewish ancestry.

    More recently, in the February 2000 American Journal of Human Genetics, UCL's Goldstein and his colleagues described how one of the Lemba clans has the so-called Cohen modal haplotype—a Y chromosome marker found in 10% or more of the men of various Jewish groups but not at all in non-Jews. “It appears that the story these people tell is at least partially true,” says Utah's Jorde. Furthermore, “it's the males, not the females, [who] probably came down [from the Middle East] and intermixed with the population” some 3000 to 5000 years ago, as the mtDNA seems to be Bantu-like.

    Other populations also reveal traces of long-distance male movement. For example, Jorde has examined the origins of caste populations in southeastern India. As his Utah colleague Mike Bamshad reported at a meeting* in November 2000, they find that the mtDNA in all caste groups is “most similar to [that of] other Asian populations,” Jorde says. But the Y chromosome data indicate that males in “the upper castes are more similar to Europeans than to Asians,” and that there is “less and less [similarity to Europeans] as you go down the castes.” He thinks that about 3500 years ago, immigrant traders, farmers, or warriors from Western Eurasia, likely with few women of their own kind, formed the basis of the upper castes.

    Researchers are turning up more and more such tales as they analyze different populations. However, some warn that, as with many conclusions about gender differences, the underlying analyses may be shaky, particularly when anthropologists try to step back many thousands of years. Recent events can mask earlier ones, notes Goldstein. And Pääbo worries that over thousands of years, as for example in Seielstad's studies, differences in the mutation rates of the two kinds of genetic markers may throw off the results. For example, a researcher might unknowingly compare an mtDNA pattern from 5000 years ago with a Y chromosome pattern that was 50,000 years in the making. “On a deeper history level, I'm a little more worried” about Y chromosome and mtDNA comparisons, says Pääbo.

    But Jorde is less concerned. Researchers now have not just the Y chromosome and mtDNA but also gene sequences inherited from both parents at their fingertips—data that will help them discern whether the suggested differences between men and women are real. And because of that, he points out, “we can be much more secure in our interpretation of human history.”

    • * Cold Spring Harbor Millennium Meeting on Human Origins & Disease, 25–29 October 2000, Cold Spring Harbor, New York.

  25. The Peopling of the Pacific

    1. Ann Gibbons

    Archaeologists, linguists, and geneticists struggle to understand the origins of the bold seafarers who settled the remote Pacific Islands

    Polynesia, with its dramatic volcanic islands rising out of the South Pacific, was the last area of the world to be settled by people. The fossil and archaeological trail shows that humans first set foot in Fiji only 3000 years ago, then sailed on within 500 years to Samoa and Tonga, and later reached Easter Island, Hawaii, and the fringes of remote Oceania, exploring a realm stretching 4500 kilometers. But just who was in those outrigger canoes has long been a mystery. Even Captain James Cook mused about the islanders' origins on his last voyage from 1776 to 1780, noting the resemblance of language, customs, and appearance among the tall, fair Polynesians on such far-flung islands as New Zealand, Tahiti, and Easter Island. And he proposed his own theory that they had come from Malaysia or somewhere in the islands of Micronesia, such as the Marianas or Caroline Islands, where they had “affinities with some of the Indian tribes.”

    From such observations, Europeans such as French voyager Jules-Sébastien-César Dumont d'Urville got the idea that these islanders could not be the descendants of the generally shorter, dark-skinned Melanesians living in islands of New Guinea, the Bismarck Archipelago, and the Solomon Islands. In 1832 Dumont d'Urville classified the people of the Pacific into three groups: Polynesians (“many islands”), the diverse Melanesians (“dark islands”), and Micronesians (“little islands”). This superficial classification stuck—even though the geographic terminology eventually changed—and ever since, many researchers have looked beyond the Melanesians of Near Oceania for the ancestors of the Polynesians who populate Remote Oceania (see map).

    For example, until recently many geneticists and linguists have looked to the “express train” model. In this view, the ancestors of Polynesians came from Taiwan, where farmers speaking Austronesian languages set sail 3600 to 6000 years ago, largely bypassing the indigenous Papuan-speaking people of Melanesia as they swept out into the Pacific and left behind a trail of distinctly decorated pots.

    Although this model was often touted as an interdisciplinary synthesis, in fact it is no favorite of archaeologists, many of whom have for years preferred a more “integrated” model, with at least some mixing between Melanesians and Austronesian speakers from Southeast Asia (a vast area that ranges from the coast of southern China to the islands of Indonesia and the Philippines). And now a flurry of studies of the Y chromosomes of Polynesians also favors the “slow boat” model, in which the ancestors of Polynesians originated in Asia but moved slowly through Melanesia, with time for genetic mixing among the peoples before the colonization of the rest of the Pacific. But even as these different kinds of data begin to point the same way, researchers are still groping for a true synthesis of the archaeological, linguistic, and genetic data. Each discipline tends to frame ideas in its own way, and at the moment each data set tends to favor a different homeland for the original voyagers. “I have to write a review myself of the spread of early farmers, and it's very difficult,” says archaeologist Peter Bellwood of the Australian National University in Canberra. “It's the genetics that is causing headaches.”

    Express trains and entangled banks

    Back in 1985, Bellwood was among the first to attempt such a synthesis. He proposed, on the basis of archaeological and linguistic evidence, that the legendary seafarers who first settled remote Oceania had roots in Taiwan and mainland China. The archaeological evidence was a trail of distinctive pottery, obsidian, and shell ornaments known as the Lapita culture, which first appeared 3500 to 3200 years ago in the Bismarck Archipelago in Near Oceania and spread in rapid succession to the islands of Vanuatu and New Caledonia. More than 200 radiocarbon dates show that the Lapita peoples then crossed 850 kilometers of open ocean in outrigger canoes to arrive in Fiji 3000 years ago. From there, they moved east, using numerous small islands as steppingstones to Tonga and Samoa and carrying red-slipped pottery decorated with intricate geometric patterns.

    Slice of life.

    Slices in each pie chart show the frequency of genetic markers in populations; for example, yellow and aqua slices are Melanesian, whereas the red slice is Southeast Asian, perhaps tracking Austronesian speakers.

    CREDIT: DAVID GOLDSTEIN AND CRISTIAN CAPELLI OF UNIVERSITY COLLEGE, LONDON

    This culture has never been found in Southeast Asia, including Taiwan. But Bellwood proposes that it was an offshoot of farming cultures that first appeared 6000 years ago in Taiwan and southern China; as evidence, he points to 3000-year-old red-slipped pottery in the Philippines that resembles plain Lapita pots.

    This parade of pots, argues Bellwood, indicates the expansion of a particular group of people: the speakers of Austronesian languages, which today form one of the world's largest language groups, with 1200 languages spoken from Madagascar to Easter Island. Linguistic evidence also points to a wave of settlers sweeping out of Southeast Asia at the time of the Lapita culture. A new analysis of the historical relationships of the Austronesian languages by University of Hawaii, Manoa, linguist Robert Blust found that all 1200 languages fall into 10 subgroups; the languages in nine of those subgroups are spoken only by the non-Chinese natives of Taiwan. And all Austronesian languages outside Taiwan seem to be closely related, sharing changes in sound and meaning. Thus, Blust concludes that the Malayo-Polynesian languages all descend from the same ancestor—a proto-Austronesian language spoken “in or near Taiwan.” This figures in Bellwood's synthesis: “In terms of language and archaeology, we know there was a major migration from 2000 B.C. to 1000 B.C., starting from Taiwan and the Philippines.”

    Many archaeologists agree that there was a rapid, recent migration of Austronesian speakers into Remote Oceania. What is open to debate is precisely where the Lapita peoples came from and how much they intermingled along the way with the indigenous people whose ancestors had been living in Near Oceania for at least 33,000 years. “I don't think there's any question that the Austronesian expansion comes out of island Southeast Asia,” says archaeologist Patrick Kirch of the University of California, Berkeley. “The danger is getting too specific about Taiwan when we don't know enough archaeologically about the coastal China area, Taiwan, or the Philippines.”

    The oldest Lapita sites, after all, are in the Bismarcks in Near Oceania, where the culture appears to emerge as a fusion between the incoming seafarers and the indigenous Melanesians. The model of Lapita origins that many Pacific archaeologists now favor, says Kirch, is called the “triple-I” model—for the terms intrusion, innovation, and integration. It differs from the express train model in that it allows for intermarriage and integration of different Melanesian and Austronesian cultures, although Bellwood argues for only a little integration. Other researchers, such as archaeologist John Edward Terrell of the Field Museum of Natural History in Chicago, think that describing the events in terms of a fusion between only two peoples and two cultures is too simple. He argues that the Lapita culture and peoples arose from a diverse Melanesian population linked in complex social and trade networks with people from Southeast Asia over thousands of years; this idea is sometimes called the “entangled bank hypothesis,” although Terrell says he proposed that term as a “metaphor” in a 1998 Antiquity article; he calls it “my alleged entangled bank model.”

    Although they argue over the details, many archaeologists do agree that the Lapita culture arose from a fusion between indigenous Melanesian and Austronesian cultures—a fusion that was obviously a success: Within 450 years, “the whole thing burst out, with a very rapid expansion out into the Pacific,” says Kirch. Once the Lapita peoples reached Fiji, Samoa, and Tonga, they gave rise to offspring who eventually developed the distinctive Polynesian culture that Cook described.

    At this point, it's hard to find any archaeologist who admits to riding the express train, including Bellwood, whose name is tied most often to that model. The idea got its name from the renowned writer and physiologist Jared Diamond of the University of California, Los Angeles, School of Medicine in an article in Nature in 1988, and the term quickly spread. “I don't believe in express trains,” says Bellwood. “It was a fast movement out of Taiwan, but it wasn't totally closed off. Of course, there has been intermarriage. The express train was a kind of journalistic statement.”

    Hopping aboard a slow boat

    While the archaeological and linguistic evidence was converging to indicate some degree of intermixing, geneticists began to get into the act. Their results have recently suggested considerable intermarriage, but it wasn't always so: Until recently, they pointed in a different direction. Geneticists' approach, in fact, inadvertently increased polarization in the field, as they leapt to test what they saw as competing theories: the express train and its opposite, the entangled bank. Bellwood and other archaeologists grumble that this has always been a false dichotomy, yet almost every genetics paper on the subject in the past decade frames the debate in this way. Indeed, researchers from different disciplines have often talked past each other. “What struck me in reading the genetics papers,” says Kirch, “is [that] they cite my book on the Lapita peoples, but they haven't read it.”

    When geneticists first studied the maternally inherited DNA from the mitochondria—the energy-producing organelles outside the nucleus of the cell—in Polynesians, Melanesians, and Southeast Asians, their results seemed to support the express train model. Researchers found that about 90% to 95% of Polynesians have inherited a deletion seen in Southeast Asians, including Taiwanese, but rarely in Melanesians. Earlier studies traced that deletion of nine base pairs to Taiwan, thus fitting Bellwood's idea of a Taiwanese homeland and rapid expansion with limited intermixing.

    But recent Y chromosome studies have traced a different pattern. In four studies published in the past 2 years, geneticists have used newly discovered Y chromosome markers to trace Polynesians' paternal ancestry. By comparing changes in the DNA sequence of the chromosome, researchers can sort the sequences, with a phylogenetic tree or other methods of analysis, into closely related groups called haplogroups. By surveying the frequency and diversity of markers on various islands, they can also trace these genetic variations back to their geographic origins.
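
    In code, haplogroup assignment from biallelic markers can be as simple as labeling each chromosome by the derived markers it carries and tallying labels by island. The Python sketch below uses an invented marker panel and invented samples; real studies rely on curated marker trees, not this shortcut.

        # Hypothetical sketch: assign Y chromosomes to haplogroups from a
        # panel of biallelic markers (0 = ancestral, 1 = derived state),
        # then tally haplogroup frequencies by island. All data invented.
        from collections import Counter, defaultdict

        MARKERS = ["M1", "M2", "M3"]  # hypothetical marker names

        def haplogroup(states):
            # Label a chromosome by which derived markers it carries.
            carried = [m for m, s in zip(MARKERS, states) if s == 1]
            return "+".join(carried) if carried else "ancestral"

        samples = [  # (island, marker states), invented
            ("Samoa",  (1, 0, 0)), ("Samoa",  (1, 0, 0)), ("Samoa",  (1, 1, 0)),
            ("Taiwan", (0, 0, 1)), ("Taiwan", (0, 0, 1)),
            ("Fiji",   (1, 0, 0)), ("Fiji",   (0, 0, 0)),
        ]

        by_island = defaultdict(Counter)
        for island, states in samples:
            by_island[island][haplogroup(states)] += 1

        for island, counts in by_island.items():
            total = sum(counts.values())
            freqs = ", ".join(f"{hg}: {c / total:.0%}" for hg, c in counts.items())
            print(f"{island}: {freqs}")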

    Although samples of Polynesians are still small, all four studies report a “striking” lack of genetic diversity within the Polynesian haplogroups, suggesting that only a few men founded the Polynesian populations, notes population geneticist Matthew Hurles of the University of Cambridge, U.K., a co-author of one of the studies. The geneticists also picked up signs of what may have been the Austronesian expansion. In a study of 200 Polynesians, reported in the February issue of the American Journal of Human Genetics, a team headed by geneticists David Goldstein and Cristian Capelli of University College London found one haplogroup, called L, seen in about one-third of Polynesians, that is at its highest diversity and frequency in southern Chinese and in an aboriginal group in Taiwan called the Ami (see map on p. 1735). That pattern suggests that this marker probably was brought into Oceania by Austronesian farmers.

    But this study also found that two Polynesian haplogroups are apparently inherited from ancestors who had been in Near Oceania long before the Austronesians arrived. These markers, haplogroups C and F, are common in Polynesians and are also found in Near Oceania, and to a lesser extent, in Southeast Asia. But they occur in only 2% to 4% of men in Taiwan and southern China, so they presumably did not originate there.

    The idea of partial Melanesian origins was proposed earlier, last October in Current Biology, by geneticist Manfred Kayser and anthropologist Mark Stoneking of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. These researchers found a deletion of short tandem repeated sequences, associated with the marker Goldstein and colleagues call haplogroup C, in 82% of the men in the Cook Islands and in 70% of the men in Samoa. Outside of Polynesia, this combination of markers was only found in Melanesians and east Indonesians—never in East Asians, such as the Taiwanese or southern Chinese. The deletion occurs in the most diverse forms in Melanesians, and Kayser and Stoneking calculate that it arose in Melanesia 10,900 years ago—again, long before the Austronesians appeared there. “This is the sort of thing that would make Terrell happy—a Melanesian marker that didn't arise out of Southeast Asia, but then spread to Polynesia,” says Stoneking.
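
    Dates like that 10,900-year figure are typically derived from how much variation has accumulated around a founding lineage. Under a stepwise mutation model, the variance in repeat counts at a short-tandem-repeat locus grows roughly in proportion to elapsed generations, so a rough age in generations is the variance divided by the per-generation mutation rate. A hedged Python sketch, with invented numbers rather than Kayser and Stoneking's data:

        # Rough STR-based lineage dating: age in generations is approximately
        # the variance in repeat counts divided by the mutation rate per
        # generation. Allele counts, rate, and generation time are invented.
        from statistics import variance

        repeat_counts = [12, 13, 12, 14, 11, 12, 13, 12, 11, 13]
        mu = 2.1e-3        # assumed STR mutations per generation
        gen_years = 25     # assumed years per generation

        generations = variance(repeat_counts) / mu
        print(f"~{generations * gen_years:,.0f} years since the founding lineage")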

    But that study, like Goldstein's, also found a marker that supports Austronesian or Southeast Asian ancestry: a mutation seen at highest frequency in the southern Chinese and other East Asians, including Taiwanese. “This would make Bellwood happy—a genetic marker that arose in Asia, associated with Austronesians and found in Polynesia,” says Stoneking. “So, the question is, how do we reconcile the two? That leads us to propose the slow boat model: that the ancestors of Polynesians were Austronesians who moved out of Southeast Asia—not necessarily Taiwan—whose population expanded along the coast of New Guinea, intermingled, and then moved out into Polynesia. But it was not an express train, because as they moved through Melanesia, they moved slowly enough to mix extensively with Melanesians.”

    This slow boat model sounds much like the triple-I model, favored by many archaeologists. But like archaeologists, geneticists differ on the amount of mixing, with Goldstein arguing for more Melanesian influence and earlier gene flow from Southeast Asians into indigenous Melanesians. “Polynesia was colonized by Melanesian populations that were largely indigenous genetically,” he says.

    Additional support for putting men of Melanesian ancestry in the first canoes to Polynesia comes from the earliest genetic studies of the question—studies of nuclear markers inherited from both parents. One is a rare mutation in the genes for hemoglobin, a deletion that can result in an inherited anemia called α-thalassemia but that also provides some resistance to malaria. This mutation is most common in Melanesians, but it is seen in lesser frequency in Polynesians and never in Asians. “It's sort of satisfying that they are rediscovering with the Y chromosome what we thought we saw in the nuclear data 15 years ago,” says geneticist Jeremy Martinson of the University of Nottingham, U.K. As for the mitochondrial DNA (mtDNA), Stoneking now notes that earlier studies “glossed over” the 5% to 10% of Polynesians who did in fact have Melanesian mtDNA. It's clear that both men and women have mixed heritage, he says, although they show different frequencies of Melanesian and Southeast Asian genes.

    Original debate

    Although geneticists and archaeologists now agree on at least some degree of mixing, it remains a mystery where the seafarers initially set out from. Based on the Y data, it's not Taiwan. “We have trashed this idea for a Taiwanese homeland completely,” says Li Jin, a population geneticist at the University of Texas Health Science Center in Houston. Another model focuses on the so-called Polynesian motif in maternal mtDNA, which includes a suite of genetic changes in the mtDNA of almost all Polynesians and in that of some Melanesians and east Indonesians, but not in the mtDNA of any Chinese, Taiwanese, or Filipinos. This striking geographical distribution and the diversity of the motif lineages in each region suggest that the Polynesian motif arose more than 10,000 years ago in the islands near Indonesia, then was carried through Near Oceania to Polynesia. That idea is put forth by evolutionary geneticist Martin Richards of the University of Huddersfield, U.K., and Stephen Oppenheimer of the University of Oxford, both co-authors of the Goldstein paper and the first to propose a slow boat origin. “We would put the ancestors of Polynesians somewhere in eastern Indonesia, not Taiwan, because that is what the diversity in the mtDNA suggests,” says Richards.

    All this means that the synthesis of genetics and archaeology is slowly coming together. But the linguistics still point to Taiwan as the source for Austronesian languages, whereas genetics and archaeology rule it out as the place of origin for genes and Lapita culture. “I don't see a problem at all with that,” says linguist Blust. “Languages can spread without preserving the genetic makeup of the [original] speakers. Think of Latin.”

    Others say that as overly simple models are being disproven—if they were ever held—the two sides are drawing closer: “People have tried to make the data fit two different extremes; it's easier to test,” says Stoneking. “But they're merging to a common ground,” where the debate focuses instead on where the intermarriage took place and how extensive it was. Hurles at Cambridge agrees: “What happens at the early stages when people are looking into these kinds of issues in an interdisciplinary way, the field becomes dichotomized. You get two poles. Now, we're looking at the shades of gray, rather than black and white.” It turns out that although Captain Cook was right in spotting the cohesiveness of Polynesians from Hawaii to New Zealand, he was wrong about what he saw as their profound differences from Melanesians: The groups have a common genetic heritage, and their differences are only skin deep.
