News this Week

Science  02 Nov 2001:
Vol. 294, Issue 5544, pp. 970
1. BIOMEDICAL RESEARCH

Help Wanted: Departure of Top Officials Adds to Vacancies at NIH

1. Eliot Marshall

Two savvy biomedical leaders announced in the past few days that they are leaving the National Institutes of Health (NIH) to take jobs in the nonprofit academic world. Steven Hyman, director of the National Institute of Mental Health (NIMH), will become provost at Harvard University in December, serving directly under President Lawrence Summers. And Alan Leshner, director of the National Institute on Drug Abuse (NIDA), will become executive officer of the American Association for the Advancement of Science, the publisher of Science (see sidebar). The twin departures are a setback for an Administration already under fire for delays in filling scientific posts, and they have raised concerns among scientists about a growing leadership vacuum at NIH, which has been without a permanent director for 22 months.

The timing of the announcements was coincidental, but coming after the abrupt resignation last month of Richard Klausner as director of the National Cancer Institute (Science, 14 September, p. 1967), it looked like an exodus. And the depletion of the top ranks at neuroscience and mental health institutes is especially acute: In addition to the departure of Hyman and Leshner, Enoch Gordis, head of the National Institute on Alcohol Abuse and Alcoholism, announced last summer that he will retire in December after 15 years as director, and the National Institute of Neurological Disorders and Stroke continues a year-old search for a director, following the departure of Gerald Fischbach in 2000 to become health sciences vice president at Columbia University in New York City. In addition, the just-inaugurated National Institute of Biomedical Imaging and Bioengineering is without a permanent chief; the hunt for a director has only just begun.


Bruce Alberts, a molecular biologist and president of the National Academy of Sciences (NAS) in Washington, D.C., last week acknowledged his own disquiet about vacant positions at NIH and other science agencies. Ever since Harold Varmus left NIH in December 1999 to head the Memorial Sloan-Kettering Cancer Center in New York City, the government's largest research agency has been run by an acting director, Ruth Kirschstein. She is the longest-tenured acting chief on record at NIH (see table). “We all agree that the NIH needs strong, scientifically sophisticated leadership,” Alberts wrote in an e-mail, “if only because [a permanent director] will be needed to keep great institute directors in place and replace those who leave.”

Robert Rich, executive associate dean of medicine at Emory University in Atlanta and president of the Federation of American Societies for Experimental Biology (FASEB), agrees. It's difficult for any acting director to recruit subordinates, Rich says. “I believe it will be difficult to fill the institute directorships [at NIH] with permanent persons until the most senior position, the NIH director, is filled,” he says. “The longer this goes on—with departures pending—the more urgent it will become” to find an NIH chief.

A group of prominent biologists made a plea for a new “permanent leader” at NIH to Health and Human Services (HHS) Secretary Tommy Thompson in a 5 October letter, according to Maxine Singer, president of the Carnegie Institution of Washington. The co-signers included, among others, Nobelist Paul Berg of Stanford University, genome scientist Eric Lander of the Massachusetts Institute of Technology, and Thomas Pollard of the Salk Institute for Biological Studies.

In private, many scientists are more outspoken. The head of a major research university, who asked not to be identified, said last week that he was “very concerned” about the NIH and its biggest component, the $4 billion National Cancer Institute (NCI). Currently, NCI deputy director Alan Rabson, Kirschstein's husband, is serving as acting director. The White House is rumored to be recruiting a new NCI chief from Texas without input from NIH or the broader biomedical community (see ScienceScope, p. 973).

Earlier this year, according to Alberts, officials at the White House and HHS, of which NIH is part, consulted NAS about federal jobs. But the dialog has tapered off, he says. The terrorist attacks intervened, Alberts thinks, but he also believes the response to those attacks makes it clear that the Administration needs “better access to top scientists.”

NIH acting director Kirschstein brushes aside worries about the recruitment of new staff. The spate of recent NIH resignations arrived at the same time by “coincidence,” she says, and they are part of the normal turnover of government staff. “I don't think anyone's unhappy with the decisions I've made,” she adds, and she says she has the “full confidence” of HHS Secretary Thompson. As for the concern that she's being excluded from helping with the search for the next cancer chief, Kirschstein acknowledges that she hasn't discussed the search with Administration higher-ups. The appointment, she notes, is the president's prerogative.

Klausner and Hyman had once been widely viewed as possible internal candidates for the NIH directorship, as has Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases. Fauci declined to comment on rumors that he is now the leading candidate.

2. BIOMEDICAL RESEARCH

Leshner Named to Lead AAAS

1. David Malakoff

A Maryland psychologist and neuroscientist will become the next head of the world's largest general science society.
Last week the American Association for the Advancement of Science (AAAS, publisher of Science) announced that Alan Leshner, currently head of the National Institute on Drug Abuse (NIDA) in Bethesda, Maryland, will become its new chief executive officer. Leshner, 57, will succeed Richard Nicholson, 63, who is retiring on 3 December. During Nicholson's 12-year tenure, AAAS solidified its finances, built state-of-the-art headquarters in downtown Washington, D.C., and entered the Internet age, putting Science and other information services online. The organization now has nearly 140,000 members, 400 staff, and an $80 million budget.

Leshner hopes to build on that growth but isn't ready to discuss specifics. “AAAS is in terrific shape, and that provides an incredible opportunity to expand its leadership role in American science,” he told Science. Leshner led NIDA for 7 years, during which he became known as a national spokesperson on drug abuse prevention and treatment.

After earning a doctorate in physiological psychology at Rutgers University in New Brunswick, New Jersey, in 1969, Leshner spent 10 years at Bucknell University in Lewisburg, Pennsylvania. His research focused on the biological bases of behavior, and he wrote a textbook on the link between hormones and behavior. In 1979 he became a staffer at the National Science Foundation and a decade later moved to the National Institutes of Health. In 1998 he was elected a member of the National Academies' Institute of Medicine.

“He's a perfect choice [for AAAS],” says Alan Kraut, director of the American Psychological Society in Washington, D.C. “He's very excited about putting science front and center in national policy debates.”

3. BIOTERRORISM

New Law May Force Labs To Screen Workers

1. David Malakoff,
2. Martin Enserink

Molecular biologist Julia Hilliard has spent the past 20 years studying deadly viruses, including the monkey-borne B virus that can destroy a person's brain. But if the Georgia State University academic wants to keep working with such potential bioweapons, she may soon need to prove that she's not crazy, a convict, or using illegal drugs.

Last week, President George W. Bush signed into law an antiterrorism measure that gives spy and police agencies broad new investigative powers. It also bars several classes of people—including felons, the mentally ill, and those from nations deemed “terrorist” by the U.S. government—from possessing certain viruses, toxins, and microorganisms that could be used as weapons. Lawbreakers would face up to 10 years in jail. The new rules may force universities to conduct criminal background checks and drug tests on thousands of scientists and students who, like Hilliard, study the B virus, anthrax, and about 40 other deadly agents (see table).


Many researchers say they welcome the added security if it keeps research materials from falling into the wrong hands. “It's overdue,” says Hilliard. But some scientists worry that the recent anthrax attacks may cause Congress to take additional steps—including barring non-U.S. citizens from handling certain materials—that could hinder academic research.

It is unclear how many researchers will be affected by the new law. Up to 300 universities—and several dozen more state and federal government labs—currently handle material classified as “select agents” by the federal Centers for Disease Control and Prevention (CDC) in Atlanta, according to Ron Atlas of the University of Louisville in Kentucky. He predicts that it will have “minimal” impact because many facilities already screen workers doing classified work for the military or conducting federally funded drug studies. But other university labs, including Hilliard's 16-person Biosafety Level 4 facility in Atlanta, currently don't require such measures. “It is probably the weakest link in our [security] program,” Hilliard says.

John Collier, who studies anthrax at Harvard University, says he “could live with” a background check, which security companies say can cost from $20 to thousands of dollars per person. But he fears that having to screen every worker in his lab “could create a huge bureaucracy” without significantly improving security. Anthrax and other potential bioweapons can be cultured from natural sources, he and others note, and don't necessarily have to be filched from a lab.

A proposal to bar nonresident aliens from possessing a select agent also troubles some researchers. “People we may need to work with—including Canadian and British researchers—could be affected by this,” says Atlas, who was expected to testify this week before a Senate committee on behalf of the American Society for Microbiology (ASM). He notes that the bill (H.R. 3160), which passed the House last week, allows the Secretary of Health and Human Services to issue waivers but worries that the process could be “cumbersome.”

One idea getting better reviews is to create a national registry to track select agents. Bioterrorism experts have long urged Congress to require researchers who possess deadly materials to register their collections with CDC, and the agency has been embarrassed by its inability to specify how many U.S. labs might have produced the anthrax that has contaminated U.S. mailrooms. A 1996 law requiring the CDC to license laboratories that ship or receive select agents didn't include an inventory reporting requirement; it also exempted researchers who had stockpiled strains in freezers but weren't planning to share them. The current attacks, says ASM's Janet Schumaker, make it prudent “to reexamine all the issues surrounding possession.”

4. U.S. SCIENCE POLICY

Marburger Shakes Up White House Office

1. Andrew Lawler

After winning unanimous Senate confirmation last week, presidential science advisor John Marburger has moved swiftly to make radical changes to his office.
Marburger has eliminated two of the four senior positions within the Office of Science and Technology Policy (OSTP) that he heads, subsuming environmental matters and national security under either science or technology. “I felt the office was too fragmented to be effective, and I wanted to have more direct control,” says Marburger.

The changes have unsettled some members of the science and technology community. Eliminating the national security position “is a big blow” to forging links to the powerful National Security Council, says one former OSTP official. The need to incorporate science into the burgeoning war on terrorism suggests that Marburger “is moving in the wrong direction,” says Al Teich, head of science and policy at the American Association for the Advancement of Science (which publishes Science). Dropping the environmental job, Teich adds, is a “surprising move” given the importance of global warming and related issues.

Several science policy analysts and former OSTP officials also expressed concern about the nomination of Richard Russell, now OSTP chief of staff, to serve as technology chief. Russell worked for nearly 7 years on the House Science Committee, but unlike most of his forerunners, he does not have an advanced scientific degree or extensive experience in industry. Russell declined comment, but Marburger acknowledges that researchers have questioned the choice. “This is not an academic appointment, and dealing with academic aspects of technology is only part of what we do,” says Marburger, the former director of Brookhaven National Laboratory in Upton, New York. The search for a science chief is still on, he says, adding that his goal is to build a team with complementary skills.

Marburger adamantly rejects speculation by some analysts that the White House dictated the changes at OSTP. “I was under no pressure to do this,” he says. “It was not suggested by anyone in the Administration.”

5. FORMER SOVIET UNION

Cautious Optimism, But Progress Is Slow

1. Richard Stone

LONDON—When he learned a few years ago that a molecular biology lab run by GlaxoSmithKline was due to close, Ivan Gout of the Ludwig Institute for Cancer Research in London made sure none of its equipment ended up on the scrap heap. Instead, with crucial support from Peter Campbell of the Federation of European Biomedical Societies, he shipped the surplus equipment—along with DNA and protein sequencers donated by the Ludwig—to the Institute of Molecular Biology and Genetics in Kyiv, Ukraine. This high-tech bonanza helped persuade a colleague in the United States, Valery Filonenko, to return to run the newly outfitted lab as a joint project with the Ludwig. “There are so many people who would go back to Ukraine if they could have even 10% of the capability of U.S. labs,” says Gout. “We have to build on the idea that scientists need to repatriate.”

A short time ago, that idea would have been unthinkable. The dissolution of the Soviet Union in 1991 spurred a mass exodus of scientists, some fleeing persecution, others fleeing subsistence salaries, aging equipment, and poorly stocked libraries. Russian science “suffered the most precipitous decline in financial support known in modern history,” according to Loren Graham, a historian of Russian science at the Massachusetts Institute of Technology. That prompted some observers to augur the death of Russian science, but “we now know that these predictions were false,” says Graham. Indeed, at a meeting here last week on international support for Russian and Ukrainian science, there was cautious optimism about the future. Graham and Irina Dezhina of the Institute for the Economy in Transition in Moscow pointed out that after years of short-changing researchers, the Russian government fully paid institute budgets and salaries in 2000 and 2001, while graduate student enrollment in the natural sciences rose during the turbulent '90s. And the ranks of scientific staff in Russia have stabilized at roughly 500,000, about a third of the total 20 years ago.

Some of the credit for saving science in Russia and other former Soviet states must go to “perhaps the largest program of scientific assistance the world has ever seen,” says Gloria Duffy, board chair for the U.S. Civilian Research and Development Foundation (CRDF). In the past decade, the International Science Foundation launched by financier George Soros, the CRDF, the European Union's International Association for Cooperation with Scientists from the former Soviet Union program, and scores of other players ploughed more than $3 billion into research in the region.

But recent developments could still undermine the optimism voiced in London. For example, at its annual meeting next month, the Russian Academy of Sciences (RAS) is expected to reelect Yuri Osipov to a third 5-year term as president, suggesting that there will be no change in its policy of keeping all its 325-odd institutes running, whatever the cost. “The hopes of the more radical reformers have turned out to be unrealistic,” says Graham. Putting the RAS's glacial pace of change in vivid relief are the rapid strides by the Chinese Academy of Sciences to cull deadwood and embrace peer review. “China is going ahead from a very low level at great speed,” says Sir Brian Heap, Foreign Secretary of the Royal Society, which hosted the London meeting with the Virginia-based CRDF. “Looking at Russia, there's just no comparison at the moment.”

And the attitude of the Russian government toward science continues to be capricious. Earlier this month, the government abruptly dissolved the post of science minister, leaving vice premier Ilya Klebanov in complete control of federal science policy. He's expected to continue a year-long tilt toward applied research. “The government wants science to provide not only new knowledge, but knowledge useful to industry,” explains Mikhail Alfimov, director of the Russian Foundation for Basic Research. The challenge is to build up high-tech industry on anemic government support for R&D (see chart). “The most important enemy of science in our country is the Ministry of Finance,” says physics Nobel Prize-winner Zhores Alferov, director of the Ioffe Physico-Technical Institute in St. Petersburg.

Lack of funding to replace and upgrade aging equipment continues to be a serious problem. “Everything we have—telescopes and other large equipment—was constructed during the Soviet period,” says Yaroslav Yatskiv, director of Ukraine's Main Astronomical Observatory. And the Western system of competitive grants and peer review has been slow to take root. Peer review “has been adopted to a very limited degree,” says Heap. There is, however, at least a glimmer of hope that the RAS may be warming to peer review: A new academy program allots a sliver of its budget to competitive projects in 11 priority areas.

All that suggests that reversing the brain drain of the past decade is still an unlikely prospect. Western agencies say they would rather build infrastructure than fund repatriation grants, banking on the hope that well-equipped labs will lure homesick talent. Sharing that sentiment is the Ukrainian government, which is considering setting up a fund for supporting top-gun expats. That's one indication, at least, that the worst is over. “The time for bailing out science in Russia and Ukraine has ended,” says CRDF president Gerson Sher. But the long rebuilding process has only just begun.

6. ASTROPHYSICS

Pulsar Pulls Mass From Distorted Companion

1. Robert Irion

The astrophysical zoo contains a dizzying variety of pulsars, spinning neutron stars that flash radio beams across the galaxy. Among the rarest of these dense stellar corpses are the “millisecond pulsars,” which can whirl hundreds of times per second. Now, astronomers may have spied one of these exotic beasts at a critical point in its development: It's locked in a dance with a bloated star that may have just finished revving up the pulsar to a breakneck pace.

Data from an Australian radio telescope and images from the Hubble Space Telescope (HST) suggest that the pulsar, called PSR J1740-5340, distorts its companion so severely that gas overflows the star's gravitational confines. Some of this gas may have swirled onto the neutron star in the recent cosmological past, accelerating its once-leisurely spin to 274 revolutions per second. “If that's true, it would be extraordinary,” says astrophysicist Deepto Chakrabarty of the Massachusetts Institute of Technology in Cambridge.

The millisecond pulsar is one of a dozen unveiled within the last year by a sensitive new survey at the Parkes radio telescope in New South Wales, Australia. Radio astronomers are hunting for the faint radio blips of millisecond pulsars from swarms of stars called globular clusters. The clusters are nurseries for exotic pulsar systems because the tightly packed stars interact, often forming close binary pairs. Such pairs give birth to all millisecond pulsars, astrophysicists believe, as gas from one star spirals into the deep gravitational pit of the neutron star and spins it up until some unknown trigger switches on the powerful, coherent radio beacons characteristic of a pulsar. Their spins are so rapid and stable that the pulsars can shine for billions of years. However, astrophysicists have never gotten close to seeing this theorized process happen.

In a pair of papers in the 1 November issue of Astrophysical Journal Letters, an international team reports that PSR J1740-5340 may offer that chance. Radio astronomer Nichi D'Amico of the Bologna Astronomical Observatory and his colleagues analyzed the pulsar's radio signature and found that it disappears nearly half the time, presumably eclipsed by a thick shroud of gas from its companion. Moreover, radio signals from the rest of the pulsar's 32-hour orbit around its neighbor are sketchy, as if the pulsar never fully exits the cloud of debris. Orbital calculations suggest that the companion has about 20% the mass of our sun—big enough to be a small star in its own right. In contrast, all other eclipsing millisecond pulsars found to date circle tiny objects, such as white dwarfs.

To learn more about the companion, astronomer Francesco Ferraro, also at Bologna, led an analysis of previously obtained HST images of the pulsar's host globular cluster, NGC 6397. The team found an unusually red star close to the pulsar's radio position. The star's brightness appears to flutter in synch with the pulsar's orbit, as if the star is so physically distorted that it appears first larger, then smaller, as it rotates.

The team concludes that the star has bloated into and beyond its entire “Roche lobe,” a teardrop-shaped region of space within which matter is bound to the star. “This is the first system in which both the companion overflows the Roche lobe and the millisecond pulsar is alive,” says team member Andrea Possenti of Bologna. The pulsar's powerful radio waves prevent more gas from accreting onto it, Possenti notes, but enough gas streams from the companion to blot the pulsar's signal.

Others find the logic compelling. “They have convincingly argued that the star is filling its Roche lobe, which hasn't been seen before,” says radio astronomer David Nice of Princeton University in New Jersey. However, Nice and Chakrabarty observe that more optical studies of the companion are essential to confirm its properties—and to verify that it is indeed bound to the pulsar.

Debate already is brewing about the system's origins. The Italians tilt slightly toward a scenario in which the pulsar is nearly newborn, having fed upon a whirlpool of gas from the companion that still orbits it. In that case, the gas started flowing onto the neutron star as the other star evolved and began to swell, sacrificing its outer layers to the neutron star's pull. Another idea, also put forward in the team's reports, holds that matter from a third star may have spun up the pulsar long ago in the globular cluster's crowded core. That donor star then became a dense white dwarf. Later, in a gravitational “exchange interaction,” the white dwarf was ejected and replaced by an ordinary star, which is now puffed up by the pulsar's intense radiation.

Astrophysicist Steinn Sigurdsson of Pennsylvania State University, University Park, views the second scenario as more likely, because computer simulations suggest that slingshot-like encounters deep within globular clusters churn out many such pairings between neutron stars and ordinary stars. Still, he won't disregard the fascinating possibility that PSR J1740-5340 is a cosmic newborn. “It may or may not be knowable,” he says. “But they've done very good detective work so far just to find it.”

7. ASTRONOMY

Japan and Korea To Link Networks

1. Dennis Normile*
1. With reporting by Mark Russell in Seoul.

TOKYO—Japan and Korea are teaming up to create an Asian network of radio telescopes that will match the capabilities of existing arrays in the United States and Europe. Last month scientists from both countries announced their first joint observations using two antennas, the forerunner of what they hope will be a string of 10 dishes operating in unison by 2005. The observations mark a scientific coming of age for Korea in very long baseline interferometry (VLBI), which combines signals from two or more radio antennas into an image equivalent to what would be captured by a single antenna spread over the entire area.

“This is really a big step forward for Korean astronomy,” says Se-Hyung Cho, director of the Taeduk Radio Astronomy Observatory, one of two telescopes making the initial observations. “We hope this leads to more opportunities for our community to make important contributions.”

In VLBI, the wider the spacing of the antennas, the better the resolution. The United States currently operates the 10-station U.S. Very Long Baseline Array that stretches nearly 13,000 kilometers from Hawaii to the Virgin Islands, and the 18-station European VLBI Network covers an even larger region.
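The spacing-versus-resolution rule can be sketched with a back-of-envelope calculation: an interferometer's angular resolution is roughly the observing wavelength divided by the baseline length. The wavelength below (the roughly 7-millimeter silicon monoxide maser band) and the round baseline figures are illustrative assumptions, not numbers from the article:

```python
import math

def fringe_resolution_mas(wavelength_m, baseline_m):
    """Approximate angular resolution (lambda / baseline) in milliarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return theta_rad * (180 / math.pi) * 3600 * 1000  # radians -> milliarcseconds

# Assumed observing band: ~43 GHz silicon monoxide maser line (~7 mm wavelength)
wavelength = 0.007  # meters

# Illustrative baselines: Taeduk-Nobeyama vs. the VLBA's longest baseline
for name, baseline_km in [("Taeduk-Nobeyama", 1_000), ("VLBA Hawaii-Virgin Islands", 13_000)]:
    print(f"{name}: ~{fringe_resolution_mas(wavelength, baseline_km * 1000):.2f} mas")
```

The factor-of-13 difference in baseline translates directly into a factor-of-13 difference in resolving power, which is why the shorter Asian baselines suit nearby, extended sources like galactic masers.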

The Asian array, although smaller, is expected to be ideal for investigating silicon monoxide masers, sources of coherent radiation produced when energy from an expanding star excites silicon monoxide molecules within a surrounding dust cloud. These excited molecules release powerful radio waves, just as excited molecules within a laser release coherent light waves. Silicon monoxide masers are believed to eject mass from very old stars.

But widely spaced arrays are too powerful to image entire masers, which are in our galaxy and, thus, relatively close. Katsunori Shibata, a radio astronomer at Japan's National Astronomical Observatory, Mitaka, compares it to training a very powerful telescope on a distant house and seeing only a section of wall instead of an outline of the entire building. “The 1000 kilometers separating Taeduk and Nobeyama is ideal for observing these masers,” explains his colleague, Hideyuki Kobayashi. Scientists hope that the observations will shed light on how the masers form and what drives them.

The larger network will incorporate a new array of four 20-meter antennas scattered throughout Japan, as well as three 20-meter-diameter radio antennas being built in Korea (see map). On its own, the $58 million Japanese array, called VERA (VLBI Exploration of Radio Astrometry) and expected to come online next year, will try to pinpoint the location of masers throughout the Milky Way. In doing so, it will also plot the movement of the galaxy's spiraling arms. The $16 million Korean VLBI Network, to be completed in 2005, will study active galactic nuclei and star-forming regions in addition to serving as part of the larger array.

The Korean and Japanese observatories will also give a boost to the Asia-Pacific Telescope, an informal framework for cooperation among radio observatories throughout the Pacific Rim. “It makes a lot of sense to build in collaborations among regional neighbors as early as possible,” says David Jauncey, a radio astronomer at the Australia Telescope National Facility, Canberra, one of 21 observatories in 10 countries that belong to the consortium.

8. BIOMEDICAL RESEARCH

Tritium Lab to Close After Loss of NIH Funds

1. Jay Withgott*
1. Jay Withgott writes from San Francisco.

BERKELEY, CALIFORNIA—Long a target of local activists, a government-funded tritium labeling facility here is shutting down next month. Federal officials say the 19-year-old facility has outlived its usefulness, but supporters see it as a victim of political pressure founded on scientific ignorance.

The National Tritium Labeling Facility (NTLF) at Lawrence Berkeley National Laboratory develops reagents for biomedical researchers to label molecules with tritium, a radioactive hydrogen isotope used to trace the movements, activities, and binding sites of existing and potential drugs. Local officials have twice passed a resolution urging the government to shut it down for fear that its emissions of tritium gas and tritiated water pose a health hazard, and local Representative Barbara Lee (D-CA) has raised the issue with officials at the National Institutes of Health (NIH). But NIH officials say the facility is safe and that fiscal and scientific shortcomings, not politics, led to its decision to end funding.

“I did not consider the NTLF among our highest priorities in view of … resources needed for genomics,” says Judith Vaitukaitis, director of NIH's National Center for Research Resources (NCRR), which has supported the facility since its inception. “It was never mentioned during our workshops to set priorities for biomedical technology.” The NTLF also had become “too much of a service facility for industry,” she adds. Figures show that it has provided a total subsidy to users of $97,000 over the last 2 years. Michael Marron, NCRR's director of biomedical technology, says that the primary reasons for closure were low publication rates, inadequate service to NIH grantees, and failure to fill a safety position.

Supporters of the center question that explanation and accuse NIH of caving in to outside pressure. They cite a 1999 NIH review laced with effusive praise that gave the center an exceptional score and say that the subsidy is a small part of a $1-million-a-year budget. “It's an extraordinary example of a bunch of extremely ill-informed and antiscience people destroying a precious scientific lab,” says Elmer Grossman, a professor emeritus at the University of California, San Francisco, and chair of Berkeley's Community Environmental Advisory Commission. “It's a real loss scientifically and an awful event politically.”

Marron and Vaitukaitis adamantly deny that they bowed to political pressure. But they admit that NCRR took notice of the political overtones of the conflict with residents during its review of the center's grant. On 30 August 1999, for example, Vaitukaitis wrote Lee that “the peer review process … was modified to address the concerns expressed by the Berkeley community about tritium emissions.” That modification consisted of a safety site visit by outside evaluators and a meeting with community activists. “We knew the [perceived emission dangers] weren't significant,” says Vaitukaitis, “but to them they were real, and you have to respect that.”

Although the safety panel concluded that radiation emission and risks were “extremely small,” it also concurred with activists that the lab could better monitor smokestack emissions of tritium, which have decreased 10-fold in the past decade. NIH told center officials to hire a Ph.D.-level health physicist to oversee the work, a condition that NTLF director David Wemmer says he accepted “because they said if you don't agree, you won't get the grant.” But the person hired was let go during his 6-month probationary period, and no replacement was found.

In addition, the center “has had a dwindling impact on the biomedical community,” says Marron. Wemmer admits that publication rates have fallen since 1999 but attributes it to time spent fighting community activism. “We've been busy going to city council meetings,” says facility manager Philip Williams, one of four staffers who will lose their jobs.

Not surprisingly, users praise its importance. “The NTLF has been invaluable in my research, and closing it down will destroy a decade's worth of investment and advancement,” wrote Jerome Parness of the University of Medicine and Dentistry of New Jersey in a letter of protest to NIH. Parness is using tritium to understand how the drug dantrolene treats a rare and potentially fatal muscle disorder.

The NTLF's doors will shut on 6 December, a deadline that supporters are desperately trying to forestall. Grossman, who has filed a Freedom of Information Act request to learn more about how the decision was made, says he hopes “we can get enough people enraged at high enough levels.” Although community activists believe that NIH's decision marks the end of a long fight to close the facility, they plan to monitor the shutdown and cleanup process.

9. PRION DISEASES

U.S. Gets Tough Against Chronic Wasting Disease

1. Martin Enserink

Veterinary officials in Colorado are anxiously trying to curtail an outbreak of chronic wasting disease (CWD), which affects deer and elk and is related to bovine spongiform encephalopathy (BSE), or “mad cow” disease. After the alarming finding that elk from an infected farm have been shipped to more than a dozen states, some fear that CWD may spread across the United States. In late September, U.S. Department of Agriculture (USDA) Secretary Ann Veneman declared the CWD situation an emergency, a measure that enabled her department to spend $2.6 million in federal funds to kick-start an aggressive eradication campaign.

CWD is one of the transmissible spongiform encephalopathies, a family that includes BSE and variant Creutzfeldt-Jakob disease—the human form of BSE that has now claimed more than 100 lives in the U.K. Currently, there's no evidence that CWD poses a similar threat. Experiments suggest that it doesn't spread to cattle under natural circumstances, and there's no evidence that humans can get sick from eating infected deer or elk meat (Science, 1 June, p. 1641). Nobody can rule out that possibility, however. And even if it affects only elk and deer, CWD could ruin the elk industry, which raises the animals for their meat and velvety antlers, a popular ingredient in dietary supplements. Already, Canada has closed its borders to U.S. deer and elk. “If this is not dealt with, the industry is doomed,” says Wayne Cunningham, the state veterinarian at the Colorado Department of Agriculture (CDA).

Believed to be caused by an aberrant protein called a prion, CWD causes listlessness, emaciation, and eventually death. It is thought to spread through direct contact between animals or through environmental contamination with the prion protein. The disease has been endemic for decades in wild deer and elk populations in northeastern Colorado, southeastern Wyoming, and a small part of neighboring Nebraska.
Now, there is concern that infections on elk farms could spread the disease to wild populations of deer and elk anywhere in the United States, dealing a blow to the hunting industry. In Eastern states especially, which have huge populations of white-tailed deer, the disease could take a big toll, says Michael Miller, a veterinarian with the Colorado Department of Natural Resources.

CWD has popped up at 15 different elk farms in Colorado, Montana, Nebraska, Oklahoma, and South Dakota since 1997. Since August, Colorado officials have found CWD in six elk, five of which originated at a single elk ranch in Stoneham. So far, CDA has quarantined nine ranches with ties to that ranch; the more than 1300 elk living there will be killed and tested. (The only way researchers can definitively diagnose the disease is by studying the animal's brain.) Over the past 5 years, the farm, one of the biggest in the country, has shipped some 160 animals to other parts of Colorado and more than 200 to elk farms in 15 different states as far east as Pennsylvania, says Cunningham. Because the animals can be infected for years without showing any symptoms, all of those are being tracked down to be tested. If any are found to have CWD, the herds they live in will be quarantined as well. Colorado also has imposed a moratorium on elk movements within the state.

The recent outbreaks underscore the need for an eradication program for elk farms, which the USDA has had in the works since 1999, says Lynn Creekmore, a veterinarian with the agency. Currently, farmers often don't get full reimbursement when their herds are confiscated, says Creekmore—which means they have little incentive to report sick animals. Under the new program, USDA would implement an active surveillance program, pay farmers a fair price for their animals, and also pay for destruction of the carcasses and decontamination of their farms. By acting rapidly, says Creekmore, USDA hopes to control the CWD outbreak.
It's not going to be cheap. For one thing, the soil on affected farms may have to be scraped off and decontaminated at high temperatures. But, she adds, “if we decide to wait, it will be a much more costly problem 10 years from now.”

10. PHILANTHROPY

Caltech Lands Record-Breaking $600 Million

1. Jocelyn Kaiser

Semiconductor pioneer Gordon Moore and his wife Betty set a new record in philanthropy last week by announcing a $600 million donation to Moore's alma mater, the California Institute of Technology (Caltech) in Pasadena. The largest gift ever to a university, the money may fund everything from items on Caltech's wish list to projects not yet determined. The gift easily tops two other record-breaking university donations this year: $400 million to Stanford University and $360 million to Rensselaer Polytechnic Institute.

Moore earned a chemistry Ph.D. in 1954 from Caltech, a science and engineering powerhouse with 900 undergraduates and 1000 graduate students. He and a colleague went on to design the first microprocessors and found Intel, based in Santa Clara, California.

Half of the Moores' $600 million gift will be disbursed over 10 years by the Gordon and Betty Moore Foundation, established a year ago to fund environmental, science, and education projects. The foundation will fund mutually agreed-upon programs with “measurable results,” Moore says. The other $300 million, spread over the next 5 years, will be unrestricted. Moore says he was motivated by his “long association with Caltech” and his belief that the school “fulfills a unique position in the country” that's an “expensive endeavor.”

Caltech president David Baltimore calls the donation “wonderful.” Baltimore says the money likely won't be used to expand the campus or “move in new directions.” Instead, he expects it will strengthen existing research, which ranges from plate tectonics to postgenomics biology. The funds may also be used to upgrade facilities and “help faculty realize their research dreams.” Caltech, he says, has a wish list that includes ideas such as a 30-meter optical telescope with the University of California. Money may also go to endowed professorships and the university's $1.5 billion endowment.

11. OBESITY RESEARCH

Fat Hormone Makes a Comeback

1. Trisha Gura

Like frustrated dieters, obesity researchers sank into bitter disappointment 2 years ago when the scales tipped against leptin as a potential weight-loss drug. The hormone, produced by fat cells, normally quells appetite and balances the body's supply of fat and energy. But when given to dieting, obese individuals in a clinical trial, leptin supplements had little effect except in a fraction of those people given the highest doses (Science, 29 October 1999, p. 881).

But a new study has renewed researchers' hopes for leptin's potential as a pound-shedding drug. Endocrinologist Stephen O'Rahilly of Addenbrooke's Hospital in Cambridge, U.K., and colleagues identified 13 people with defects in one copy of their leptin gene. These individuals make roughly half the normal levels of the hormone. Apparently as a result, they end up heavier and packed with a significantly higher percentage of body fat than family members with two normal copies of the leptin gene. The study suggests that at least in some people, low leptin levels—a treatable condition—can lead to obesity.

The results come as a surprise to many obesity researchers, who accepted the dogma that even a little bit of leptin was enough to regulate fat stores normally. “We now know that having a little less than the normal amount of leptin is enough to cause a problem with body fat and weight,” says obesity researcher Jeffrey Flier of Beth Israel Deaconess Medical Center in Boston.

To pinpoint genes that confer excessive body fat, O'Rahilly spent 10 years gathering a cohort of extremely obese individuals with body mass indices (BMIs, defined as weight/height²) at least four times higher than normal. The team first hit the jackpot in 1997 with a publication describing two Pakistani cousins who carried defects in both copies of the leptin gene. The cousins—and several other people subsequently identified—produced virtually no leptin and showed the hallmarks of leptin deficiency first discovered in 1994 in mice: excessive body fat, extreme hunger, and sterility. But the children's parents weren't grossly obese, even though each carried one defective and one normal copy of the leptin gene. The conclusion, recalls O'Rahilly, was that leptin must operate under a threshold: “If you go from zero leptin to a smidgen, that is all you need” for normal fat metabolism, he says.
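For readers unfamiliar with the measure, BMI simply divides weight in kilograms by the square of height in meters. The sketch below is purely illustrative; the numbers are invented for scale and are not data from O'Rahilly's study:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

# Hypothetical figures for illustration only.
print(round(bmi(70, 1.75), 1))   # a typical adult, within the usual 18.5-25 range
print(round(bmi(280, 1.75), 1))  # roughly four times that, the scale of this cohort
```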

On the clinical front, the threshold theory explained why leptin apparently didn't work when given to most obese individuals. Most carry normal leptin genes, and many, in an apparent paradox, actually make higher than normal amounts of leptin—they just don't respond to the hormone properly. “So taking someone with leptin amount x and making it x-plus-something doesn't seem to make much of a difference,” O'Rahilly explains—or at least, it didn't appear to make a difference until now.

The new study, based on three unrelated families, two in the U.K. and the third in Canada, calls this received wisdom into question. Some members of these families carry a mutation in at least one copy of the leptin gene, decreasing—to varying degrees in different family members—the amount of leptin their bodies produce. Most are heavy but not grossly obese. The team measured the volunteers' blood leptin levels, BMI, and percentage of body fat. The lower the leptin level, the higher the BMI and percentage body fat, the researchers report in the 1 November issue of Nature.

People with both leptin genes knocked out respond “extremely well” to therapy, says O'Rahilly, who is preparing results for publication. Leptin injections damped down appetite, caused people to lose weight, and apparently spurred the onset of puberty. No similar assessments could be made with the new cohort of individuals with a single-gene defect: All refused leptin treatment. “These people are from a culture that considers it a status symbol to be chubby,” he explains. But he suspects that some chubby people—with or without a leptin gene defect—who would prefer to slim down might benefit from the research. As O'Rahilly says, “There might be an obese subgroup with equivalently low leptin levels, which at least might be worthy of a clinical trial.”

1. Constance Holden

Aided by brain imaging advances, scientists are looking for evidence that compulsive nondrug behaviors lead to long-term changes in reward circuitry

People toss around the term “addiction” to describe someone's relationship to a job, a boyfriend, or a computer. But scientists have traditionally confined their use of the term to substances—namely alcohol and other drugs—that clearly foster physical dependence in the user.

That's changing, however. New knowledge about the brain's reward system, much gained by superrefined brain scan technology, suggests that as far as the brain is concerned, a reward's a reward, regardless of whether it comes from a chemical or an experience. And where there's a reward, there's the risk of the vulnerable brain getting trapped in a compulsion.

“Over the past 6 months, more and more people have been thinking that, contrary to earlier views, there is commonality between substance addictions and other compulsions,” says Alan Leshner, head of the National Institute on Drug Abuse (NIDA) and incoming executive officer of the American Association for the Advancement of Science, publisher of Science.

Just where to draw the line is not yet clear. The unsettled state of definitions is reflected in psychiatry's bible, the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). Addictions, obsessions, and compulsions—all related to loss of voluntary control and getting trapped in repetitious, self-defeating behavior—are scattered around under “substance-related disorders,” “eating disorders,” “sexual and gender identity disorders,” “anxiety disorders,” and “impulse-control disorders not elsewhere classified.” In that last grab-bag are compulsive gambling, kleptomania, fire-setting, hair-pulling, and “intermittent explosive disorder.”

Addiction used to be defined as dependence on a drug as evidenced by craving, increased tolerance, and withdrawal. But even some seemingly classical addictions don't follow that pattern. Cocaine, for example, is highly addictive but causes little withdrawal. And a person who gets hooked on morphine while in the hospital may stop taking the drug without developing an obsession with it.

Now many researchers are moving toward a definition of addiction based more on behavior, and they are starting to look at whether brain activity and biochemistry are affected the same way in “behavioral” addictions as they are by substance abuse. One who endorses this perspective is psychologist Howard Shaffer, who heads the Division on Addictions at Harvard. “I had great difficulty with my own colleagues when I suggested that a lot of addiction is the result of experience … repetitive, high-emotion, high-frequency experience,” he says. But it's become clear that neuroadaptation—that is, changes in neural circuitry that help perpetuate the behavior—occurs even in the absence of drug-taking, he says.

The experts are fond of saying that addiction occurs when a habit “hijacks” brain circuits that evolved to reward survival-enhancing behavior such as eating and sex. “It stands to reason if you can derange these circuits with pharmacology, you can do it with natural rewards too,” observes Stanford University psychologist Brian Knutson. Thus, drugs are no longer at the heart of the matter. “What is coming up fast as being the central core issue … is continued engagement in self-destructive behavior despite adverse consequences,” says Steven Grant of NIDA.

Not everybody is on board with this open-ended definition. For one thing, says longtime addiction researcher Roy Wise of NIDA, drugs are far more powerful than any “natural” pleasure when it comes to the amounts of dopamine released. Nonetheless, behavioral resemblances to addiction are getting increasing notice.

Gambling

In a class of its own as the disorder that most resembles drug addiction is pathological gambling. Compulsive gamblers live from fix to fix, throwing away the rest of their lives for another roll of the dice—and deluding themselves that luck will soon smile on them. Their subjective cravings can be as intense as those of drug abusers; they show tolerance through their need to increase betting; and they experience highs rivaling those produced by drugs. Up to half of pathological gamblers “show withdrawal symptoms looking like a mild form of drug withdrawal,” says Shaffer—including churning guts, sleep disturbance, sweating, irritability, and craving. And like drug addicts, they are at risk of sudden relapse even after many years of abstinence.

Furthermore, what's going on inside gamblers' heads looks like what goes on in addicts' heads. Yale psychiatrist Marc Potenza finds that when pathological gamblers are exposed to videos of people gambling and talking about gambling, they show activity changes in some of the same frontal and limbic brain regions as do cocaine addicts exposed to images that stir up drug craving, as assessed by functional magnetic resonance imaging (fMRI). And a positron emission tomography study of pathological gamblers playing blackjack, conducted by psychiatrist Eric Hollander of Mount Sinai School of Medicine in New York City, showed significant changes in cortical arousal depending on whether they were just playing cards or betting with a $100 stake. He says it resembles another study showing alcoholics' brain reactions to looking at a bottle of Coke versus a bottle of whiskey.

Like addicts, gamblers also respond to drugs that block drug highs. Suck Won Kim, a psychiatrist at the University of Minnesota Medical School in Minneapolis, has tried naltrexone, an opiate antagonist, on a variety of compulsive behaviors including gambling. In an 11-week trial on 45 gamblers, naltrexone inhibited both the urge to gamble and the high from it in 75% of the group—compared with 24% of a comparable group on placebo—suggesting that drugs and gambling stimulate some of the same biochemical pathways.

And finally, there's cognitive evidence: Gamblers, like drug addicts, do badly at a “gambling task,” success at which requires the ability to perceive that delayed gains will be larger than immediate ones.

Food

Can food be said to be an addiction? Overeaters Anonymous—which, like Gamblers Anonymous, is patterned on Alcoholics Anonymous—says yes. The experts, however, say it depends on the disorder.

Compulsive overeating certainly has the look of an addiction that can dominate a person's life. There's also biochemical evidence suggesting a kinship. Psychiatrist Nora Volkow of Brookhaven National Laboratory in Upton, New York, and colleagues found that in a group of compulsive overeaters, dopamine receptor availability was lower, an anomaly also seen in drug addicts. “Dopamine deficiency in obese individuals may perpetuate pathological eating as a means to compensate for decreased activation of these circuits,” Volkow's team suggests.

Bulimia, which is characterized by bingeing and vomiting, also looks a lot like an addiction, Hollander notes. Unlike anorexia, which involves rigidly controlled behavior and no high, “bulimia and binge eating have an impulsive component—pleasure and arousal followed by guilt and remorse.”

Patricia Faris, a gastrointestinal physiologist at the University of Minnesota, Minneapolis, believes that as with drug addictions, bulimic behavior is initially voluntary but is transformed into a compulsion because of changes that it wreaks on the nervous system. Bulimia clearly affects reward centers: Faris says patients become increasingly depressed and anxious before episodes; immediately following, they uniformly report a pleasant “afterglow.”

Faris has come up with a novel hypothesis: that bulimia dysregulates the vagal nerve, which regulates heart and lungs as well as the vomiting impulse. She suspects that a binge-purge episode then brings the vagal nerve back to its normal role. This retraining of the vagal nerve also has long-term effects on the brain's reward circuitry, she believes, as suggested by the fact that bulimics have a high relapse rate and are very hard to help once they've been at it for a few years. Kim says that although the theory is speculative, he believes Faris is on the right track in approaching the problem “from neural system concepts” as opposed to a more traditional emphasis on biochemistry.

Sex

There's not much research on sex as an addiction, and some researchers are dubious about whether such a basic function can have that distinction. Sex is really a distinct subject because it's “wired separately,” in the opinion of Kim of Minnesota. He notes, for example, that the opioid antagonist naltrexone “really doesn't affect sexual desire that much,” so it doesn't follow the same pathways as, say, gambling.

Yet so-called sex addicts do display behaviors characteristic of addiction: They obsess about whatever their favorite practice is, never get enough, feel out of control, and experience serious disruption of their lives because of it. That leads Shaffer to conclude that some behaviors qualify as sex addictions: “I think those things that are robust and reliable shifters of subjective experience all hold the potential for addiction.” To be sure, he adds, sex trails behind drugs or gambling, being “relatively robust but unreliable” in delivering satisfaction.

Anna Rose Childress, who does brain imaging studies at the University of Pennsylvania in Philadelphia, says sex addicts resemble cocaine addicts and probably share with them a defect in “inhibitory circuitry.” In both instances, “people say when they're in this big ‘go’ state they feel as though there is override [of inhibition] … a feeling of being unable to stop,” says Childress.

Scientists are just beginning to use imaging to try to determine whether there's a tangible basis to these feelings. Childress has been comparing the circuits activated by cocaine in addicts and sexual desire in normal subjects in hope of identifying the “stop!” circuitry. And psychiatrist Peter Martin at Vanderbilt University in Nashville, Tennessee, says a preliminary study with normal subjects indicates that brain activity associated with sexual arousal looks like that accompanying drug consumption. He plans to do further comparisons using self-described sex addicts.

Shopping, running, clicking…

Although there is no shortage of therapies for every imaginable addiction, there is little or no published research on other disorders. One problem that afflicts a great many women, in particular, is compulsive shopping, says Kim. Compulsive shoppers typically end up with huge debts and their houses stuffed with unused merchandise. Shopping binges are very often precipitated by feelings of depression and anxiety, Shaffer says; the shopping itself can generate temporary druglike highs before the shopper—like a cocaine addict—crashes into depression, guilt, anxiety, and fatigue.

Some have no doubt this is an addiction. “In my clinical experience, [compulsive shoppers] have a similar kind of withdrawal,” says Shaffer. Kim agrees: “These people can't control it. We think it's essentially the same thing as gambling.” Kim thinks compulsive shoplifting (kleptomania) is also closely related and, in fact, has published the first formal study trying doses of naltrexone with kleptomania; 9 of 10 patients, he says, were much improved after 11 weeks of treatment.

Then there's Internet abuse, the country's fastest growing “addiction.” But whether any such phenomenon exists is something about which scientists—if not therapists—are cautious. There are indeed people who neglect the rest of their lives as they spend every waking moment at the monitor. But is it the technology or the behavior that the technology enables that people are really hooked on? The things people are addicted to on the Net are the same things people get hooked on without it: gambling (including day trading), pornography, and shopping, notes Marc Pratarelli of the University of Southern Colorado in Pueblo. His group is doing factor analysis of questionnaire responses by computer users to get at the “core issues” and to determine “if it is in fact just one more fancy tool” to enable a primary habit.

And what about “positive addictions”? Some years ago jogging was touted as one that raised endorphin levels (which in turn stoke up the dopamine) and resulted in a “natural high.” Although human behavioral addictions are difficult if not impossible to model in animals, Stefan Brené of the Karolinska Institute in Stockholm, Sweden, thinks he has done it with running. He says rats that have been bred to be addiction-prone spend much more time on the running wheel than other rats do. Furthermore, biochemical tests indicate the impulses both to run and to consume cocaine are governed by “similar biochemical adaptations.” He also says the work—most of it as yet unpublished—shows that in an addiction-prone rat, running can increase preference for ethanol—“indicating that a natural, nontoxic … addiction can under some instances potentiate the preference for a drug.”

The above by no means exhausts the list of behaviors that some scientists see as addictive. And it seems to be true across the board that having one addiction lowers the threshold for developing another, says Walter Kaye, who does research on eating disorders at the University of Pittsburgh Medical Center. Just what form addictions take has a lot to do with one's sex, says Pratarelli. Men are overwhelmingly represented among sex “addicts” and outnumber women by about 2 to 1 in gambling and substance abuse; women are prone to what psychiatrist Susan McElroy of the University of Cincinnati College of Medicine calls the “mall disorders”—eating, shopping, and kleptomania. (Kim says the ratio of females to males in kleptomania is 2 or 3 to 1; perhaps 90% of compulsive shoppers are women.)

To cast more light on the mechanisms of addiction, scientists have taken a growing interest in the behavior of the brain's reward circuitry in normal subjects. In a much-cited paper in last May's issue of Neuron, Hans Breiter of Massachusetts General Hospital in Boston and his colleagues used fMRI to map the responses of normal males in a roulette-type game of chance. Blood flow in dopamine-rich areas, the scientists found, indicated that “the same neural circuitry is involved in the highs and lows of winning money, abusing drugs, or anticipating a gastronomical treat.” Other research has been showing that many types of rewards besides money—including chocolate, music, and beauty—affect those reward circuits.

Shaffer and others in his camp believe that if such a reward is powerful enough, it can retrain those circuits in a vulnerable person. Not everyone, however, buys the idea that nondrug stimuli really can be potent enough to generate what has been traditionally thought of as addiction. “Many people believe that [only] addictive drugs alter the circuitry in some critical way,” says Wise of NIDA. And, he says, drugs are far more powerful than “natural” rewards, increasing dopamine “two to five times more strongly.” Kaye also warns that the fact that certain disorders share the same pathways does not necessarily prove they're closely linked. After all, he notes, “stroke and Parkinson's also involve the same pathway.”

Despite the uncertainties, addiction research is “going beyond the earlier conceptual framework,” says neuroscientist Read Montague of Baylor College of Medicine in Houston. “Historically, these definitions have come out of animal behavior literature,” and addiction has been defined in terms of rats frenziedly pressing levers for cocaine. Now, he says, “we need a better theory of how the brain processes rewarding events,” one that involves discovering the “algorithms” people follow that lead them into and then keep them trapped in their disastrous behaviors.

Beyond the Pleasure Principle

1. Laura Helmuth

Sure, drugs feel good—but they're addicting because they co-opt memory and motivation systems, not just pleasure pathways

When it comes to kicking a drug habit, going through withdrawal is the easy part. The cold-turkey alcoholic shaking with delirium tremens might not agree, but only after the body detoxifies does the real challenge begin: staying clean. Ex-addicts with the strongest resolve—and plenty of external motivation in the form of frayed relationships, probationary jobs, or incipient lung cancer—struggle to resist cravings and are susceptible to relapse even years after their last dose.

Researchers have spent decades studying the immediate effects of drugs on the brain. Drugs cause short-term surges in dopamine and other brain messengers that signal pleasure or reward. But the brain quickly adapts to this deluge; pleasure circuits overwhelmed by drugs' signals desensitize—so much so that the brain can suffer withdrawal once the binge is over.

In the past decade or so, many researchers have started to focus on a more daunting problem: the long-term consequences of drug abuse. Many drugs don't induce much pleasure after prolonged use, in part because of desensitization, or tolerance. So why do addicts keep taking their drug of choice, even when they try to abstain? To find out, researchers are seeking clues in parts of the brain that help control motivation, looking for changes that happen after weeks, months, and years of exposure to drugs.

Some of the neural changes they've found look very familiar: Addiction seems to rely on some of the same neurobiological mechanisms that underlie learning and memory, and cravings are triggered by memories and situations associated with drug use. Recent studies have revealed a “convergence between changes caused by drugs of addiction in reward circuits and changes in other brain regions mediating memory,” says neuroscientist Eric Nestler of the University of Texas Southwestern Medical Center in Dallas. For instance, both learning and drug exposure resculpt synapses, initiate cascades of molecular signals that turn on genes, and change behavior in persistent ways. Understanding these processes could help addicts conquer relapse, “the core clinical problem” of addiction, says Steven Hyman, director of the National Institute of Mental Health in Bethesda, Maryland (soon to depart for Harvard University; see p. 970). “If we want to focus on the clinical issue that matters, we have to understand how associative memories are laid down that change the emotional value of drugs and create deeply ingrained behavioral responses to those cues [that trigger relapse].”

Memories you remember

Memory researchers divide memories into those you consciously remember and those you generally don't. Consciously, people may remember a past drug-induced burst of euphoria and seek out the drug again, or they may remember that drugs stop them from feeling crummy. This type of memory is “good at explaining why people take drugs, but it doesn't explain addiction,” says Terry Robinson of the University of Michigan, Ann Arbor. Plenty of people dabble in drug use for just such pleasure-related reasons, but addiction is different. Once addicted, people compulsively seek out and take drugs, even if they don't provide pleasure anymore and despite a strong will to quit—a defining feature of addiction that ties it to other compulsive behaviors (see p. 980).

Nonconscious memories are much more insidious, Robinson points out, and are more likely to underlie the compulsive aspect of addiction and the cravings that lead to relapse. For instance, the paraphernalia of drug use—crack pipes, syringes, the sound of ice tinkling in a glass full of scotch—can act as cues that induce craving much like the sound of a bell caused Pavlov's dogs to salivate. Even though addicts can become conscious of the relationship between some drug-related cues and their cravings, other cues might be less obvious; for instance, they might not recognize that a certain place or smell wakens a hunger for the drug. Cues “can goad an individual to drug seeking in the absence of conscious awareness,” says Robinson.

When former addicts see videos evocative of drug use, they report craving and show signs of stress, such as increased heart rate, says psychiatrist Charles O'Brien of the University of Pennsylvania in Philadelphia. Positron emission tomography (PET) shows that parts of the reward system are unusually active when people experience craving. Other researchers, particularly psychiatrist Nora Volkow of Brookhaven National Laboratory in Upton, New York, also see hyperactivation of the orbitofrontal cortex when recovered addicts see cues that induce craving for cocaine. This part of the brain is closely connected to reward pathways and is disrupted in people with obsessive-compulsive disorder. Volkow suggests that the orbitofrontal cortex is responsible for the craving and compulsion that make addicts so susceptible to relapse.

Another type of nonconscious memory, called sensitization, is less intuitive. If an animal receives a big dose of a drug—say, amphetamine, morphine, or cocaine—for several days in a row, each successive dose causes a stronger response. Behaviorally, the rat bobs its head and runs around the cage more, and inside the brain, more dopamine is released even though the drug dose is the same. The effect lasts a long time, says Robinson. His group sensitized rats to amphetamine, waited 1 year, and then gave the animals another dose. Even then, they responded more strongly than did animals without previous drug experience. Sensitization “alters neural circuitry involved in normal processes of incentive, motivation, and reward,” says Robinson. This neural circuitry isn't a simple hardwired response to the drug, however: It depends on context. If Robinson's team gives a sensitized rat another dose of the drug in a different cage from the one where it received the training doses, the animal responds normally, as if it had never experienced the drug before. Sensitization, in an environment where an animal (and presumably a person) has learned to expect a drug, “renders brain circuitry hypersensitive to drugs and drug-associated paraphernalia,” says Robinson.

Technicolor short-circuits

Although each drug of abuse has its idiosyncratic effects, all specialize in bombarding the brain's dopamine-mediated reward circuits. Long-term abuse can wear out these pathways, reducing the number of receptors that respond to dopamine. Some of Volkow's more chilling PET scan images show the brains of former methamphetamine users: Some have been drug free for months, but their dopamine systems are still not firing on all cylinders. Dopamine fuels motivation and pleasure, but it's also crucial for learning and movement. Volkow reported in the March issue of the American Journal of Psychiatry that the loss of dopamine transporters, a measure of how disrupted the dopamine system is, correlates with memory problems and lack of motor coordination.

Once the brain becomes less sensitive to dopamine, it “becomes less sensitive to natural reinforcers,” Volkow says, such as the “pleasure of seeing a friend, watching a movie, the curiosity that drives exploration.” The only stimuli still strong enough to activate the sputtering motivation circuit, she says, are drugs.

Understanding that drugs “rearrange someone's motivational priorities” can help explain some of the senseless behaviors addicts engage in, such as neglecting their families, jobs, and health, says Alan Leshner, director of the National Institute on Drug Abuse in Bethesda, Maryland. As Leshner explains, “it isn't the case that the crack-addicted mother does not love her children. She just loves drugs more.”

Microscopic memories

Many recovered drug users say they fight cravings for the rest of their lives. Addiction researchers aren't sure how drugs change the brain in ways that can last a lifetime, but they aren't alone. As Nestler points out, “our field [of addiction research] and the learning and memory field have not made a lot of progress … in identifying molecular changes underlying long-term changes in memory.”

They have some hunches, though, and some reasonable evidence for mechanisms that might pitch in to construct long-lasting memories or compulsions. Research on the sea snail Aplysia and the mouse hippocampus has uncovered a range of cellular signals that accompany learning; some of the same players are active in addiction. For example, the transcription factor CREB is necessary for learning in mice and Drosophila, and CREB is also boosted by drug use, suggesting that it contributes somehow to drug-mediated neural changes.

One of the best studied models of memory at the cellular level is called long-term potentiation (LTP). Memories are stored in the brain, researchers suggest, in part by changes in how neurons are interconnected. In LTP, hyperstimulation makes synapses, those points of near-contact where neurons communicate, more responsive to future stimulation—that is, it changes the connection between the two cells. In the 31 May issue of Nature, Mark Ungless of the University of California, San Francisco, and colleagues showed that a single dose of cocaine induced LTP in dopamine cells in a part of the brain called the ventral tegmental area that is critical for addiction (see figure on p. 982), suggesting that the same cellular mechanisms—albeit in different parts of the brain—are at work in memory and addiction.

Other changes in synaptic connections are more concrete: Memory researchers have found that a neuron's dendrites build more branching projections and have more synapses that connect to neurons with which the cell communicates regularly. Robinson and colleagues have found that drugs produce the same effect. When they sensitize animals to a drug, they see more dendritic branches and a greater density of dendritic spines on neurons in the nucleus accumbens and the prefrontal cortex. Both areas are key players in processing reward signals and making decisions. These morphological changes last at least a month, Robinson says, and probably longer. Addiction researchers and memory researchers alike suspect that such physical changes in neurons and their connections are central to both phenomena. “To me, that is the core,” says Hyman. “There is no more important question in the field of addiction than understanding the mechanisms that produce and maintain altered patterns of synaptic connections.”

Nestler says the basic molecular processes constructing these changes are probably the same in learning and memory and in addiction. After all, he says, “the brain is conservative” and probably has a “finite repertoire of molecular changes that it can mount in response to environmental perturbations.” Nestler and his colleagues have found at least one molecule that appears to be specific for addiction, however. The protein, called ΔFosB, builds up in the reward pathway after repeated exposure to drugs and sticks around longer than other proteins—for as long as 4 to 6 weeks after the last dose. The protein increases an animal's sensitivity to drugs and can also induce relapse if injected.

In some cases, the line between memory systems and addiction is hard to draw. For instance, Stanislav Vorel of Albert Einstein College of Medicine in New York City and colleagues reported in the 11 May issue of Science (p. 1175) that stimulating the hippocampus—the archetypal seat of memory in the brain—makes formerly exposed but now drug-free rats seek out cocaine. And other researchers have discovered that in some cases, dopamine in so-called pleasure circuits appears to be more important for learning—or what different labs call prediction or anticipation—than reward.

At the top of many addiction researchers' “to-do lists” is to untangle all the threads of learning and memory, motivation, and reward that make addiction addiction: Find the switch. As Leshner puts it, “we know a tremendous amount about the differences between the addicted and nonaddicted brain, both behaviorally and biologically. We know less about the transition process between the two.” The processes involved in learning and memory may eventually be the key to figuring out how an often pleasurable experience—taking a drug—can change from a somewhat self-destructive hobby to a life-threatening compulsion.

14. BIOTERRORISM

Smallpox Vaccinations: How Much Protection Remains?

1. Jon Cohen

Immunity from smallpox vaccination is widely assumed to decline rapidly, but studies suggest that some protection may persist for decades

If one of the worst nightmares of biowarfare experts is realized with the release of smallpox in a major city, the virus would have a field day. The disease would sweep through an entirely unprotected population: Anybody born after the early 1970s—when most countries stopped vaccinating against smallpox—would be completely vulnerable, and the vaccinations given to older people would by now offer no protection. That, at least, is the conventional wisdom. But some experts believe this picture is excessively bleak.

There's no question that unvaccinated people are especially vulnerable: The virus kills about 30% of those who have no immunity. But just how much protection decades-old vaccinations still provide has become a matter of debate. The U.S. Centers for Disease Control and Prevention (CDC) echoes a widely held assumption: “Most estimates suggest immunity from the vaccination lasts 3 to 5 years,” it says on a new public Web site devoted to smallpox. But a handful of researchers who have examined the scientific literature, including century-old studies as well as state-of-the-art explorations of immunologic memory, believe protection may be far more durable.

At first glance, the debate may seem irrelevant. A smallpox outbreak would still be horrific: In the United States alone, roughly 120 million of the 275 million people in the country were born after routine vaccination ended, and no public health official would risk not revaccinating those who received a smallpox shot 30 years or more ago. But if a substantial fraction of the population does have significant immunity, the disease might not spread as fast, and the epidemic would not be as deadly as many fear. And that knowledge could shape decisions about who should receive vaccines first if an outbreak occurs when supplies are still limited. “It should influence the discussion, and this should prompt doing some studies,” says Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases.

Frank Fenner, a leading smallpox authority based at the John Curtin School of Medical Research in Canberra, Australia, is among those who believe vaccine-induced immunity to variola, the smallpox-causing virus, is quite long-lasting. Fenner, who co-authored Smallpox and Its Eradication, the seminal book about the disease, points to a study first published in 1913, “Studies in small-pox and vaccination,” a monograph by William Hanna. In that work, researchers analyzed an outbreak of smallpox in Liverpool, England, in 1902–1903. At that time, notes Fenner, Great Britain immunized people only once, during infancy. The study looked at severity of disease in 1163 people, comparing the vaccinated to the unvaccinated in different age groups. The data show that immunity did wane over time, leading to the idea that booster shots were needed at least every 10 years. But 93% of the people 50 or older who had received the vaccine escaped severe disease and death; in contrast, six of the 12 unvaccinated people in that age bracket came down with a serious case of smallpox, and all six died (see table).
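As a rough illustration of what those Liverpool figures imply, the numbers quoted above can be turned into a crude relative-risk estimate. This is a back-of-the-envelope sketch using only the article's summary figures, not a reanalysis of Hanna's data, and it ignores confounding and the tiny unvaccinated sample:

```python
# Crude protection estimate from the Liverpool figures quoted above.
# Inputs are as reported in the article; this is an illustration only.
severe_rate_vaccinated = 1 - 0.93    # 7% of vaccinated people 50+ had severe disease or died
severe_rate_unvaccinated = 6 / 12    # 6 of 12 unvaccinated people 50+ had serious smallpox

# Relative risk of a severe outcome, vaccinated vs. unvaccinated
relative_risk = severe_rate_vaccinated / severe_rate_unvaccinated

# 1 - RR is a crude measure of residual protection against severe disease
protection = 1 - relative_risk

print(f"relative risk of severe disease: {relative_risk:.2f}")
print(f"implied protection decades after infant vaccination: {protection:.0%}")
```

Even on this rough reading, infant vaccination appears to have cut the risk of a severe outcome by roughly 85% some 50 years later, which is the point Fenner draws from the monograph.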

During the past 11 years, two studies of the immune systems of vaccinated people also found evidence of long-lasting immunity. One arm of the immune system produces antibodies in response to foreign “antigens,” preventing invaders from infecting cells, while another marshals T cells to find and destroy infected cells. A study conducted by Israeli researchers found that antibody levels declined in the first 3 years after vaccination, but after that initial drop, this type of immunity “remains stable for at least 30 years after the last revaccination,” Baruch El-Ad and his colleagues reported in the Journal of Infectious Diseases in 1990.

View this table:

In 1996, Francis Ennis and co-workers at the University of Massachusetts Medical School in Worcester reported in the Journal of Virology that T cell immunity in response to the smallpox vaccine also remains intact for decades. The researchers stimulated white blood cells from vaccinated people with vaccinia, the virus that serves as the smallpox vaccine, and found “striking” responses. “The data presented here are perhaps the first clear evidence that virus-specific T cell memory can persist for up to 50 years in humans in the presumed absence of antigen,” they concluded.

Rolf Zinkernagel, an immunologist at the University of Zürich in Switzerland who won the Nobel Prize for his discoveries about the role of T cells in immunity, cautions that these measures don't necessarily correlate with protection. Still, Zinkernagel believes that substantial immunity to smallpox does exist in the U.S. population. If a smallpox attack occurs, he says, “it will not be as bad as the epidemics in the Middle Ages.” Immunologist Rafi Ahmed of Emory University in Atlanta, Georgia, agrees. The U.S. population probably has much more immunity to smallpox than many people realize, he says. “We're probably not as badly off as we have thought.” But just how much protection remains is still a matter of conjecture; even Ahmed and Zinkernagel hold strikingly different views about what triggers immunologic memory and how it persists.

James LeDuc, the leading smallpox authority at the CDC, says, “we're now looking very closely at the historical data” on the durability of immunity. And LeDuc, who says the CDC hopes to update its Web site next week with scientifically based information about these questions, stresses that many uncertainties will remain: “We have a disease and a scientific database that really stopped progressing 20 years ago.”

15. APS DIVISION OF NUCLEAR PHYSICS

Elusive Particles Yield Long-Held Secrets

1. Charles Seife

WAILEA, HAWAII—At a lush resort on the west coast of the island of Maui, Japanese and American physicists discussed the latest news in nuclear physics. At the first joint meeting of the nuclear physics divisions of the Japanese Physical Society and the American Physical Society,* surf and waves took a back seat to neutrinos and nuclei, for a few days at least.

Neutrinos Show Their Stuff

Physicists from three groups announced at the meeting that their latest data about neutrinos—also known as ν's (rhymes with “news”)—all but confirm earlier indications that the elusive particles have mass. The data, from different neutrino-hunting groups, mark the beginning of a new phase in neutrino physics: Now that the big question is settled, scientists are finally beginning to understand the finer properties of the most mysterious members of the particle zoo. “It's a whole new frontier now,” says Kevin Lesko, a neutrino physicist at Lawrence Berkeley National Laboratory in Berkeley, California.

The frontier has remained untamed mainly because neutrinos seldom interact with matter. Most would pass through Earth unhindered, barely noticing the tons of rock and iron in their path. But once in a while, a neutrino does interact and signal its presence. For instance, if a type of neutrino known as an electron neutrino happens to strike a deuterium atom in the 1000-ton sphere of heavy water in a mine in Sudbury, Ontario, it can split the atom, yielding two protons and an electron. By measuring the signature of that electron, scientists at the Sudbury Neutrino Observatory (SNO) can figure out the incoming neutrino's path and energy (Science, 22 June, p. 2227).

Over the past several years, physicists at SNO and at Super-Kamiokande in Kamioka, Japan, have provided strong evidence that neutrinos oscillate among three “flavors,” changing from electron neutrinos to mu neutrinos to tau neutrinos and back again. That can happen only if the neutrinos have mass, a question that the standard model of particle physics leaves open. The latest evidence, presented here last week, supports the earlier results. In an experiment called K2K, physicists at the KEK neutrino laboratory in Tsukuba, Japan, have been shooting a beam of muon neutrinos at the Super-K detector 250 km away since June 1999 (Science, 12 February 1999, p. 928). According to Kenzo Nakamura of KEK, scientists have seen 44 of those neutrinos in the Super-K detector thus far. If neutrinos didn't oscillate, scientists should have seen 64. “The probability of no oscillations is less than 3%,” says Nakamura.
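The size of that deficit (44 events seen versus 64 expected) can be sanity-checked with simple counting statistics. This is a minimal sketch assuming pure Poisson fluctuations; the actual K2K analysis also folds in systematic uncertainties, which is why its quoted probability is less stringent than the raw statistical one:

```python
import math

# Back-of-the-envelope check of the K2K deficit, assuming the number of
# detected neutrinos follows a Poisson distribution with the no-oscillation
# expectation as its mean. (Illustrative only; the published analysis
# includes systematic uncertainties.)
expected = 64   # events predicted at Super-K if neutrinos don't oscillate
observed = 44   # events actually seen

# P(N <= observed) for a Poisson distribution with mean `expected`
p_deficit = sum(
    math.exp(-expected) * expected**k / math.factorial(k)
    for k in range(observed + 1)
)

print(f"Poisson probability of seeing {observed} or fewer events: {p_deficit:.4f}")
```

The purely statistical probability comes out below 1%, comfortably consistent with Nakamura's quoted bound of less than 3% once systematics are included.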

Now, with the big question all but settled, scientists are eager for more specific information. They still don't know how much mass the neutrinos have, nor do they know an important quantity called the “mixing angle,” which determines just how the neutrinos oscillate. But a new phase in neutrino research is finally yielding some answers.

According to theory, each of the three flavors of neutrinos, electron, tau, and muon, is a mixture of three “basis” elements, ν1, ν2, and ν3. The “mixing angle” gives a measure of how deeply blended these basis elements are. For example, with a low mixing angle, an electron neutrino might be composed almost entirely of ν1; with a high mixing angle, an electron neutrino might be, say, roughly equal parts of ν1 and ν2, with a little ν3 thrown in for good measure. The mixing angle has profound effects on neutrinos' behavior; for example, neutrinos with a low mixing angle are affected by their passage through matter much more than those with a large mixing angle. Most theorists preferred small mixing angles, largely because quarks have them. “[Small mixing angles] were everybody's favorite,” says Lesko. But no one knew for sure.
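The idea can be made concrete with the standard two-flavor textbook simplification (these are textbook relations, not formulas presented at the meeting): a flavor state is a rotation of the mass states by the mixing angle θ, and that angle sets the depth of the resulting oscillation over a baseline L at energy E, with mass-squared splitting Δm².

```latex
% Two-flavor mixing: the electron neutrino as a blend of mass states
\nu_e = \cos\theta\,\nu_1 + \sin\theta\,\nu_2

% Survival probability after traveling a distance L at energy E,
% with mass-squared splitting \Delta m^2 (natural units):
P(\nu_e \to \nu_e) = 1 - \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2\,L}{4E}\right)
```

The factor sin²(2θ) is why the mixing angle matters so much: a small θ caps the oscillation at a shallow depth, whereas θ near 45 degrees permits complete flavor conversion.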

To find out, scientists at Super-K checked whether neutrinos coming from the sun behaved differently by day and at night, when they have to pass through Earth on their way to the detector. If the mixing angle were small, the physicists expected to see a fairly distinct day-night variation. They didn't. “There is no indication of a strong day-night flux difference,” says Yoichiro Suzuki of the Kamioka Observatory. Combined with data from other neutrino experiments, such as SNO, the Super-K results imply with 95% confidence that the mixing angle is in fact large, says Suzuki. Although it's not an open-and-shut case, small mixing angles are clearly in trouble.

Furthermore, Super-K, K2K, SNO, and other efforts are eliminating some of the more complicated neutrino theories. One idea, that neutrinos oscillate into a fourth, noninteracting, “sterile” neutrino, has been ruled out at the 99% level. Other experiments are hoping to spot a rare nuclear decay that could prove that neutrinos are the same thing as antineutrinos—an assumption contrary to plain-vanilla neutrino theory.

The upshot, Lesko says, is that neutrino physics has moved beyond the basics—proving that neutrinos oscillate—to finding out the details about neutrinos. “That's what's left to us now,” he says. “We're getting a theoretical understanding of what must be, and we will have to expand the standard model to account for neutrinos.” For the next few years, it is clear that ν's will yield some of the hottest news in physics.

Doubly Strange Particle

Nuclear physics is twice as strange as ever before. For the first time, scientists are confident that they have created doubly strange nuclei: atoms that contain two particles bearing “strange” quarks. This feat has enabled them to make the first rough estimate of the attractive force between these strange particles, filling a gap in scientists' understanding of the basic properties of subatomic particles.

The key to the discovery is an exotic particle known as lambda (Λ). Whereas ordinary neutrons and protons are combinations of so-called up and down quarks—familiar varieties of quark whose interactions physicists understand well—a Λ particle is made up of an up quark, a down quark, and a more mysterious particle called a strange quark. Nuclear physicists don't have a complete grasp of the properties of strange matter, such as how strongly particles with strange quarks attract each other. For this reason, they have been scrambling to get two Λ particles together to measure how tightly they are bound. “It's the number that everyone's after,” says Robert Chrien of Brookhaven National Laboratory in Upton, New York.

Unfortunately, scientists can't produce beams of Λ particles needed to measure the attraction directly. “The only way to get the lambdas to interact is to put them in the same nucleus,” says Chrien. “You can't do the measurement any other way.” Though scientists have inserted a single Λ particle into nuclei such as beryllium-7 (Science, 9 March, p. 1877), creating “hypernuclei,” they have failed to get two Λ particles into the same nucleus—until now.

Last week, Ken'ichi Imai of Kyoto University in Japan presented a picture of an emulsion that provides good evidence of the creation and destruction of ΛΛ6He (helium-6-lambda-lambda): an ensemble of two protons, two neutrons, and two Λ particles. At the KEK high-energy accelerator in Tsukuba, Japan, Imai and his colleagues smashed kaons—two-quark particles with a strange component—into a diamond target, creating xi-minus (Ξ−) particles, which each contain two strange quarks. An emulsion, an expensive version of a photographic plate, captured an event that looks for all the world like a Ξ− particle being absorbed by a carbon atom, creating a ΛΛ6He along with less interesting byproducts. From this first glimpse of the ΛΛ6He, “the lambda-lambda interaction energy was determined for the first time,” says Imai. “It's pretty clean,” says Chrien, whose team at Brookhaven has come up with slightly weaker evidence of a different doubly strange hypernucleus, ΛΛ4H (hydrogen-4-lambda-lambda).

According to these experiments, the lambda-lambda attractive energy seems to be fairly weak—about 1 million electron volts (MeV), much less than the estimate of 4 or 5 MeV reported in an earlier, dubious claim of a doubly strange hypernucleus. The new number is more in line with expectations. “One MeV will make a lot of theorists happy,” says Chrien.

Both groups stress that these events are still early results; scientists will need to produce more doubly strange hypernuclei to be certain about the binding energy. But Ed Hungerford of the University of Houston says the recent advances in strange physics are extremely encouraging. “It's an extra degree of freedom for illuminating nuclear structure,” he says. So for once, physicists can be forgiven their extra dose of strangeness.

• * 17–20 October.

16. ARCHAEOLOGY

Spreading the Word, Scattering the Seeds

1. Ben Shouse

Did civilization follow the plow? An alluring model of the dispersal of language and agriculture meets resistance

CAMBRIDGE, U.K.—In 1987 Colin Renfrew's story sounded compelling, like a logical extension of Napoleon's observation that an army marches on its stomach. As the Cambridge University archaeologist first framed it then, throngs of farmers, grown strong on newly domesticated crops—wheat and barley in the west, rice in the east—swept across the land beginning 100 centuries ago. Armed with seeds, genes, and language, they pushed aside indigenous hunter-gatherers like a plow through virgin soil. Renfrew's “farming-language dispersal hypothesis” became a leading explanation for the present distributions of language and culture in Europe, Africa, and Polynesia.

But not everyone was eager to jump on the oxcart. Many scholars were suspicious of the grand aspirations of the farming-language hypothesis and of its archaeologist proponents, who they say tend to ignore unfavorable linguistic data. “The phenomenon of major expansion is very real,” says linguist Roger Blench of the Overseas Development Institute in London, but “it's inconceivable that there is just one explanation.”

Renfrew and Peter Bellwood of Australian National University in Canberra—the pioneer of the hypothesis in the Pacific islands—recently held a conference* here to confront the challenges and foster more genuine collaborations. But new studies presented from India and Southeast Asia further threaten the hypothesis, weakening the case for cereal crops as engines of linguistic dispersal. Along with ongoing controversy over Europe, this adds a heavy burden of complexity to the Renfrew-Bellwood model. “The initial hope of easy answers is being replaced by the realization that there's more to do,” Renfrew says.

His original hypothesis—framed in the 1987 book Archaeology and Language—pictured culture, biology, and language marching in triumphal lockstep. Testing and elaborating the theory required interdisciplinary input. Archaeologists map the movement of cultures by following a trail of pots, tools, and seeds. Geneticists map the movement of populations by comparing genetic markers of people in one region with those of people in their hypothesized homelands. And linguists map the movement of languages by reconstructing ancient tongues from the shared vocabularies of modern ones. When a family of languages has similar agricultural terms—“wheat,” say, or “harvest”—the linguists infer that before the languages branched apart, the ancestral speakers were farmers.

If Renfrew's hypothesis is right, then when these disparate scientists put their maps together, the arrows should point in the same direction, indicating a concerted agricultural dispersal of genes, crops, and words. A wealth of interdisciplinary evidence gathered since 1987 has led Renfrew to refine his hypothesis, allowing individual arrows to sometimes wander off on their own. New reports from the Cambridge conference suggest that wandering arrows may be the rule and lockstep spread the exception.

In India, for example, Renfrew and Bellwood have proposed migration pathways from the Fertile Crescent—where the Near Eastern agricultural “package” of wheat, barley, sheep, and cattle originated 10,000 years ago—along the Arabian coast, reaching India as early as 8000 years ago. The hypothetical Elamo-Dravidian language family—which includes the Dravidian languages Tamil in India and Brahui in Pakistan, and the extinct Elamite language in Iran—shows a nice, sweeping distribution in the same direction.

Dorian Fuller, an archaeobotanist at University College London, offers a different story. His excavations show that indigenous crops such as mung bean and foxtail millet appeared in southern India 4800 years ago, with wheat and barley arriving 600 years later. The Near Eastern crops apparently stalled for 3000 years in northwest India before farmers developed monsoon-tolerant wheat. Also undermining Renfrew's hypothesis is new work on Dravidian linguistics. Preliminary analyses suggest that the Dravidian words for native southern Indian crops are older than the words for the Near Eastern agricultural package. So Dravidian may be native to India and unrelated to Elamite. Finally, genetics does reveal a migration from the Near East to India, but the large margin of error means it could have happened 20,000 years before the birth of agriculture, says Toomas Kivisild, a geneticist from the Estonian Biocenter in Tartu.

The latest picture from India “snaps quite a sizable arrow,” says Cambridge archaeologist Martin Jones. And one of Jones's former students is bending an even bigger arrow: the “Austronesian” expansion into the Pacific, a centerpiece of the farming-language hypothesis.

Victor Paz, an archaeologist at the University of the Philippines in Quezon City, questions the significance of rice agriculture as a force behind the early Austronesian dispersal. Speakers of this language family—which includes more than 1000 languages spanning Madagascar to Easter Island—left Taiwan about 4500 years ago in outrigger canoes carrying distinctive red pottery and reached Polynesia about 1500 years later. Reconstruction of the Austronesian mother tongue reveals ancient words related to rice, suggesting that these mariners were initially rice farmers.

But when Paz inspected the relevant archaeological sites, he found almost no rice alongside Austronesian-style pottery between Taiwan and Borneo. Instead, residents were eating tubers such as yams and taro, according to recent scanning electron microscope images of unearthed plant remains. “Every time you see the pottery in an archaeological context, it's almost second nature to assume that you have rice agriculture. But now we know for a fact that that's not true,” Paz says.

Genetics further complicates the picture. Studying mitochondrial DNA sequences, Stephen Oppenheimer of Oxford University and his colleagues have shown that contemporary Polynesians—a small Austronesian offshoot—hail from eastern Indonesia, not Taiwan, and predate the Austronesian expansion (Science, 2 March, p. 1735). So crops, languages, and genes were often moving in quite different directions, ruling out the sweeping, unified expansion of the early farming-language model.

Bellwood disputes many of the dates offered by geneticists but has given some ground on the archaeology. “I am willing to agree that rice might not have been as important as we once thought,” he says. “But rice was still there at the beginning.” And, he agrees, the Austronesians abandoned it at some point. What's unclear is whether they retained their farming way of life or simply foraged, fished, and farmed in whatever combination suited their latest island home. Bellwood and Paz both plan new excavations to clarify what crops, if any, drove the expansion.

Europe, the continent for which Renfrew conceived the hypothesis, is under siege as well. Complicating the picture, European languages and crops come from different homelands, at least according to most linguists. And conference participants raised other reasons for skepticism over how tightly culture, genes, and language are bound together in Europe. For example, “language shift,” in which a group adopts an outside language but not outside genes, is more common than most archaeologists accept, linguists say. This is how Hungarian and Turkic languages spread in Europe, millennia after farming arrived. Furthermore, hunter-gatherers may often have held their ground against the advance of farmers, as evidenced by the strong contribution of preagricultural genetic markers to modern European genotypes.

Some experts see all this complexity as a crippling blow to the hypothesis. But Renfrew views complexity as the inevitable product of better data. “It's always the case with a simple explanation that you have to look at a more detailed level,” he says. “And things are more detailed at the detailed level.” Many of the objections are isolated examples that don't threaten the larger model, he argues. According to Renfrew, on the continental and global scales, language and agriculture move together in consistent if imperfect synchrony.

Researchers from both sides of the debate have now rejected what Jones calls “the amoebic view of culture,” in which civilizations spread without ever interacting with the people around them. But for the farming-language dispersal hypothesis to survive, it will have to accommodate such complications as local domestication and the discontinuity of crops, languages, and genes.

Renfrew thinks his hypothesis will survive these growing pains. “One can say that models are made to be used,” he says, “and we've gotten some good mileage out of this one.”

• * Examining the Farming/Language Dispersal Hypothesis, 24–27 August.