# News this Week

Science  28 Feb 1997:
Vol. 275, Issue 5304, pp. 1261
1. AIDS Research

# Exploiting the HIV-Chemokine Nexus

1. Jon Cohen

Researchers now have an understanding of the intricate mechanism by which the AIDS virus enters cells, and they are racing to turn this understanding into new therapies

When authors report basic biomedical research results, they traditionally end their papers with a few words about how their abstruse findings may one day benefit human health. This discussion often seems to be an afterthought, designed more to please funders than to offer real possibilities of new treatments. And there's usually good reason for reticence: The gap between basic and applied research is generally vast, and talk of bridging it often is highly premature. Yet, a basic research revelation can sometimes turn a field on its head and immediately open up new possibilities for important applications. Just such a development is now energizing the world of AIDS research.

It began just 14 months ago, with a paper that ended on a laconic note: “[These results] may open new perspectives for the development of effective therapeutic approaches to AIDS.” The paper uncovered a link between HIV and the then-obscure immune system messengers called chemokines. Since then, a surge of results has shown just how intimate this relationship is: HIV slips into cells by commandeering receptors on their surfaces that normally bind to chemokines. And these findings have answered one of the big mysteries of AIDS research: how HIV infects cells.

A pack of academic teams, biotechnology companies, and big pharmaceutical houses are now racing to develop treatments that exploit this HIV/chemokine nexus. Researchers are also aggressively investigating whether chemokines can help explain why some AIDS vaccines work in primates and others do not. And intense efforts are under way to use the chemokine discoveries to genetically engineer a small animal to make it susceptible to HIV infection—research that could lead to a long-sought model for studying the disease.

“The field is moving so rapidly it's painful to keep up,” says Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases (NIAID). But he notes that efforts to apply all this new knowledge are running into plenty of complications. “Every week, you see the complexity of the receptor [story] get more intense,” says Fauci. Adds the University of Pennsylvania's Robert Doms, whose lab has helped trace the connection between chemokine receptors and HIV, “The whole surface is bubbling here. We'll see what erupts.”

Entry criteria

Like most AIDS researchers, Robert Gallo knew next to nothing about chemokines in the fall of 1995. But he got a crash course in these molecules when Paolo Lusso, Fiorenza Cocchi, and other researchers in his lab, then at the National Cancer Institute, first discovered that certain chemokines powerfully suppressed the growth of HIV in lab cultures.

Chemokines, which are produced by a wide variety of cell types, are the paging system of the inflammatory process, recruiting white blood cells to injured or ailing tissues. As Gallo, Lusso, and their colleagues detailed in a seminal 15 December 1995 Science paper, three chemokines known as RANTES, MIP-1α, and MIP-1β have an uncanny knack for inhibiting strains of HIV recently isolated from patients. Oddly, though, they found that these chemokines had little effect on HIV that had been grown in immortalized lab cultures of white blood cells called T lymphocytes.

At the time, the Gallo group didn't know how the chemokines kept HIV in check or why they inhibited only “primary” HIV isolates. A huge piece of the puzzle fell into place the following spring, when Edward Berger and colleagues at NIAID answered the question of how HIV enters cells.

Researchers have long known that HIV uses a T-lymphocyte receptor called CD4 to infect cells, but it has also been clear for more than a decade that the virus needs another factor—possibly a second receptor—to do its dirty work. Berger and co-workers identified a receptor now known as CXCR4 as that missing factor. And they correctly surmised that it belonged to the chemokine-receptor family based on its amino acid sequence. Yet, Berger's results added a new twist: CXCR4 seemed to provide a point of entry for HIVs grown in cell lines, but not primary HIVs.

The direct tie-in to the Gallo lab's work came in late June 1996, when five labs, including Berger's, reported in back-to-back Science, Nature, and Cell papers that primary HIVs use a different chemokine receptor, now dubbed CCR5. This receptor normally binds RANTES, MIP-1α, and MIP-1β, suggesting that these chemokines inhibit HIV by blocking some of its entrances to the cell.

The crucial role that CCR5 plays in early infection was made crystal clear a couple of months later. That August, independent research teams reported in Nature and Cell that several people who had repeatedly had sex with infected partners but remained uninfected themselves had a mutation in the gene that codes for CCR5. Several studies since then have confirmed that mutant CCR5s make people highly resistant to HIV infection. “That's a staggering observation,” says primate researcher James Stott from the U.K.'s National Institute for Biological Standards & Control. (This defect may not completely protect people, though: The March issue of Nature Medicine has a letter from Robyn Biti and colleagues at Westmead Hospital in Australia about an infected man whose cells have the mutant CCR5s.)

A great deal of work now has connected the dots between different strains of HIV and the chemokine receptors they rely on. HIVs that cause the initial infection predominantly use CCR5, while—for reasons that are still being keenly debated—the HIVs that predominate in the final stages of disease resemble the viruses grown in T-cell lines and bind to CXCR4. Virologist Robin Weiss at the Chester Beatty Laboratories in London cautions, however, that this picture probably will prove simplistic. “It's not as though the work over the past 6 months is going to be overturned, but things are sure to get more complicated,” says Weiss. “Watch this space.”

Many inhibitions

Provisional as these basic research findings are, researchers are tripping over each other to translate them into practical applications, such as vaccines and drugs to treat people who are already infected. Many believe that treatments are the more promising avenue. “That's the one that's most likely to come to fruition the fastest,” says virologist Joseph Sodroski, a veteran HIV-entry investigator at Boston's Dana-Farber Cancer Institute. It's also a critical area: Despite recent progress with combinations of drugs that cripple HIV enzymes (Science, 20 December 1996, p. 1988), the treatments don't work for everyone, and, as time goes on, drug-resistant strains of the virus are sure to become an ever greater problem.

Big pharmaceutical companies are going after this challenge with great gusto (see table), largely because they're on familiar turf: They have already developed enormously profitable drugs—including leading ulcer medications—that target “7-transmembrane” receptors, the family to which chemokine receptors belong. “The biggest drugs in the world are inhibitors of 7-transmembrane spanners,” says Thomas Schall, a pioneering chemokine investigator who works at DNAX Research Institute in Palo Alto, California, a division of the drugmaker Schering-Plough.


“As soon as these [HIV] coreceptors were described in the literature, given our expertise in 7-transmembrane receptors, we put together a screening strategy [for drugs to block them],” says Lawrence Boone, a virologist at Glaxo Wellcome in Research Triangle Park, North Carolina. What's more, Glaxo and several of its competitors already had programs under way looking specifically for chemokine-receptor inhibitors to treat such inflammatory diseases as asthma, rheumatoid arthritis, and psoriasis. Harvard University inflammatory disease specialist Craig Gerard, who now collaborates with Sodroski on HIV, says that companies also are aware that one such drug could reap enormous profits if it proved effective against both an inflammatory disease and AIDS, which he says is a “distinct possibility.”

Drug developers are strongly encouraged by the fact that people with CCR5 mutants don't have obvious health problems, which suggests that blocking the receptor will not directly cause harm. “Drug companies would ordinarily spend a lot of money” addressing the very question that nature has already answered, says Gerard. Indeed, molecular virologist Richard Colonno of Bristol-Myers Squibb in Wallingford, Connecticut, says the finding that people with defective CCR5s appear to be both highly resistant to HIV and healthy has had a big impact on his company's decision to enter this race.

Like most of its competitors, Bristol-Myers is looking for a small molecule “antagonist” that blocks CCR5 and ideally can be given as a pill. The search typically begins with assays—which often owe much to other 7-transmembrane work—that can screen hundreds of thousands of compounds to see whether they can bind the receptor. Those that show promise are then put through a more complicated battery of tests to determine whether they can prevent HIV from infecting cells. Drugs that make it past that stage are tested in animals to analyze metabolism rates and toxicities. “Most folks are in the same phase: They've gone through the primary screen, and they're looking to see if they can inhibit HIV,” says Schall, who notes that his company is looking for drugs against CXCR4 as well.

Some companies are trying variations on this theme. LeukoSite, a Cambridge, Massachusetts, biotech, has teamed up with Warner-Lambert's Parke-Davis to look for a small-molecule CCR5 inhibitor, but it is also searching for monoclonal antibodies to CCR5. The company already has identified eight antibodies that bind to CCR5 and block it in test tube studies. LeukoSite immunologist Charles Mackay acknowledges that antibodies have several disadvantages compared to small molecules: They have to be injected, they are expensive, and they can only be used for a few months before the immune system mounts a response against them. Still, he says small molecules typically have more toxicities than natural molecules like antibodies.

Boone says his company is taking a different cue from nature: It is looking for an “agonist” that, by mimicking natural chemokines, would hit HIV with a double whammy. Not only would it block CCR5, but the binding process would trigger the receptor to send out a signal to tell the cell to hunker down and express fewer of its CCR5s—the same signal normally generated by a chemokine. Indeed, Gallo, who now heads the Institute of Human Virology at the University of Maryland, Baltimore, thinks that chemokines themselves may be promising drug candidates. Although he notes that many researchers have warned that giving chemokines could lead to serious toxicities, he says, “We don't have any toxicities yet, and we've gone up to pretty high doses [in animal tests].”

Two companies, wary that inappropriate signaling by natural chemokines could have dire consequences, are developing modified versions of chemokines that bind CCR5 but do not act as agonists. At Glaxo Wellcome in Geneva, Timothy Wells and co-workers are working on variants of RANTES that bind CCR5. “My best guess is the sheer amount of material you have to give is still an issue,” says Wells. The other company, British Biotech, already is doing human testing of a MIP-1α variant called BB-10010 in cancer and HIV studies. “People expected it would be inflammatory, but it's just not,” says Lloyd Czaplewski, who is heading the project.

Researchers caution, however, that even if some of these potential treatments lower HIV levels and are well tolerated, they could be tripped up by the same factor that has sent many anti-HIV drugs to an early grave: resistance. Indeed, in theory, HIV mutants might resist drugs that block, say, one part of CCR5 but not another. Even worse, a CCR5 drug could encourage the growth of a virus that prefers CXCR4; while it's far from clear-cut, HIV strains that use CXCR4 may cause disease more quickly.

Biochemist John Moore of the Aaron Diamond AIDS Research Center (ADARC) in New York City worries that companies are going to exaggerate their early findings in HIV trials with chemokine-receptor blockers. “I think there's going to be a lot of hot air and smoke,” says Moore. “Exploitation clinically? Come back in a couple of years.”

Vaccine dreams

The wait for a payoff likely will be even longer when it comes to vaccines. But some researchers believe the time line can be shortened if the new chemokine work helps answer a big mystery: Why do some AIDS vaccines protect animals from “challenges” with infectious doses of the AIDS virus, while others fail?

AIDS vaccines have been tested most extensively in monkeys, which develop an AIDS-like disease when they are infected by a close kin of HIV called SIV. Although several vaccines have protected monkeys from SIV infection, no one has yet convincingly elucidated the mechanism behind that protection. Some studies suggest that the protection correlates with vaccine-induced anti-SIV antibodies, which “neutralize” the virus before it infects cells. Other experiments point to cytotoxic T lymphocytes (CTLs), which selectively kill already-infected cells, as a key correlate of protection. But in yet other studies, neither CTLs nor antibodies explain much of anything. Now, primate researchers are looking for a correlation in chemokine levels—and they are finding potentially promising leads.

The first such study appeared in last July's Nature Medicine. Thomas Lehner of United Medical & Dental Schools of Guy's Hospital in London reported that high RANTES, MIP-1β, and possibly MIP-1α levels correlated with the complete or partial protection of seven monkeys. In 13 unvaccinated control animals that easily became infected by the challenge virus, chemokine levels were much lower. This suggests that the vaccine, by some unknown mechanism, stimulated the immune system to produce higher levels of these chemokines, which in turn blocked receptors needed by SIV and prevented infection. “I do not believe any single candidate is the correlate of protection,” says Lehner. “But I think [these chemokines] are at least as good a candidate as any of the others.”

Lehner currently is conducting experiments to follow up on this work but, for competitive reasons, declines to describe them publicly. “I'm amazed at the speed at which this is moving, and there's a total silence about what people are doing,” he says. Jonathan Heeney, an AIDS researcher at the Biomedical Primate Research Centre in Rijswijk, the Netherlands, is tight-lipped, too, but says he recently completed a study that looked at chemokines and the protection offered by an AIDS vaccine in monkeys. “We've got a hint that there's something interesting going on there,” says Heeney.

Marc Girard of the Pasteur Institute in Paris says he also has intriguing preliminary data from studies of chimpanzees given HIV vaccines. Girard challenged four vaccinated animals and one control chimp with HIV, which readily infects these primates but doesn't usually cause disease. Three vaccinated animals were protected, and all had higher levels of RANTES, MIP-1α, and MIP-1β than the one that became infected. (Unfortunately, the control animal did not become infected, confusing the results, but it, too, had elevated levels of these chemokines.) “We had very good correlation between high level of secretion of chemokines and protection,” says Girard. But he doubts that chemokines are the sole explanation and says he needs to repeat the experiment.

Gallo is convinced that chemokines play a large role in protection, which he is attempting to prove by directly injecting them into monkeys and then challenging them. “I think chemokines and CTLs are going to be the answer for vaccines,” says Gallo. If indeed his challenge experiments succeed, the next hurdle—and it too is a high one—will be to design a safe vaccine that can teach the immune system to boost production of these chemokines should it ever meet HIV.

New models

The third quest invigorated by the chemokine discoveries is the search for an animal that can develop AIDS. Experiments with monkeys and chimps have provided critical data for AIDS drug and vaccine developers, and for researchers studying disease progression. But these primates are expensive and, except for a few cases in chimps, they do not actually get sick from HIV. So, several groups now are trying to use the new chemokine-receptor advances to genetically engineer a small animal that would provide a more practical model. The aim is to create animals that sprout CD4s and the various HIV-related chemokine receptors on their cells. This effort, too, is far from a shoo-in.

Most researchers working in this area have focused on genetically engineering HIV-infectable mice. Although several groups are believed to have succeeded in getting these receptors expressed, that's just the first step. “For those who think it's just sticking these genes in and making a mouse that's infectable, I think they'll be disappointed,” says Dan Littman of New York University's Skirball Institute, whose lab is a leader in this field.

One major problem is that even if HIV can be induced to enter a mouse cell, it has great difficulty copying itself because some viral genes don't work well in murine cells. “Clearly, there are blocks [to viral replication] that are very important, and I think they'll prevent the mouse from being an excellent model for AIDS pathogenesis,” says Didier Trono of the Salk Institute for Biological Studies in La Jolla, California. Others are more hopeful. “With a bit of work, we may be able to overcome postentry replication restrictions,” says the University of Pennsylvania's Doms, who is working with Frank Jirik of the University of British Columbia to make HIV-receptive mice. “It's well worth trying.”

Mark Goldsmith of the Gladstone Institute of Virology and Immunology in San Francisco and colleagues hope to exploit the chemokine discoveries to create a different animal model for AIDS: a transgenic rabbit. Several years ago, NIAID's Thomas Kindt developed a transgenic New Zealand white rabbit that expressed human CD4 receptors. Although the animals did not develop disease, HIV could replicate more efficiently in their cells than in the mouse. Now, Goldsmith and others at the Gladstone have teamed up with Kindt to add chemokine receptors to these animals. “The challenge associated with rabbits is that transgenesis methodology is substantially less efficient [than in mice] for reasons that aren't clear,” says Goldsmith. Rabbits also have longer gestations, smaller litters, and nearly 20 times the housing costs of mice. Still, says Goldsmith, “we're optimistic.”

On every front, the revelation that HIV and chemokines have an intimate relationship holds an equal measure of promise and problems. But the gap between these basic studies and their application is narrowing fast.

2. AIDS Research

# HIV Experts vs. Sequencers in Patent Race

1. Eliot Marshall

HIV researchers have electrified the field for the past year with a string of discoveries that revealed in detail how the AIDS virus grapples onto and enters certain human cells. At least five scientific teams zeroed in on one molecule in particular—the CCR5 receptor on immune system cells—and found that it acts like a key, opening the cell to HIV infection. If the receptor is absent or altered, the invader has trouble getting in.

This is high-impact science, with high commercial stakes as well: Some observers predict that the CCR5 discovery will lead to new drugs designed to block HIV infections (see main text). It should come as no surprise, therefore, that half a dozen groups are vying for priority on CCR5, and many are filing patents. But these competitors may themselves be surprised to learn that a company that was not directly involved in these HIV studies—Human Genome Sciences (HGS) of Rockville, Maryland—appears to have beaten everyone to the patent office.

William Haseltine, HGS's chair, confirms that HGS applied for a patent on the DNA sequence coding for the CCR5 receptor back in June 1995, long before the recent scientific reports were published. HGS's early claim points up an issue that's likely to be more and more vexing to DNA patent seekers. Since the early 1990s, companies doing large-scale DNA sequencing have been filing claims on thousands of genes and gene fragments, often without knowing exactly what the DNA codes for. HGS has been among the most aggressive in this game, and CCR5 may be one of the big fish it has snagged.

HGS's chief patent counsel, Robert Benson, declines to talk about the company's pending application at the U.S. Patent and Trademark Office. (The U.S. review process is confidential.) But Benson did provide a copy of HGS's international patent filing (WO 96/39437). It was released in December, in compliance with an international treaty requiring that such applications be published 18 months after submission. An expert in this field, Edward Berger of the National Institutes of Health (NIH), after examining the sequence, confirmed that it is the same CCR5 sequence he and others have reported.

HGS said in its patent application that it had found a gene for something it “putatively had identified as a chemokine receptor.” HGS asked for rights to variations on the sequence and claimed a list of wide-ranging applications, from uses in gene therapy to drug manufacturing to disease monitoring. But HGS did not guess at CCR5's role in HIV infection. In fact, it didn't even mention HIV.

This omission, according to HIV experts like Robert Gallo—director of the Institute of Human Virology at the University of Maryland, Baltimore—ought to limit HGS's commercial rights to uses of CCR5 that do not involve HIV. But Gallo himself has a stake in this matter. He headed a team of NIH scientists that discovered in 1995 that chemokines play a key role in HIV infection. After the report was published, other researchers zeroed in on the chemokine receptors. In 1996, they identified two of them—CXCR4 and CCR5 and their variants—as key to HIV infection. Many of these teams have now filed for patents on these discoveries, including NIH, Gallo's new institute, and a group led by Marc Parmentier at the Free University of Brussels—the first to make the CCR5 sequence public last spring.

But the quality of this scientific research may have little bearing on the authors' commercial rights. As HGS's Benson says: “Scientific credit is one thing; patent law is another.” HGS's outside attorney, Jorge Goldstein of the Washington, D.C., firm of Sterne, Kessler, Goldstein & Fox, explains that whoever is first to patent a DNA sequence—for any use—can lock up subsequent uses. A patent of this type is called a “composition of matter patent,” and it prevents anyone from using the DNA sequence without the patentee's permission. If a later inventor patents a new use, Goldstein says, it may create a stalemate in which neither patent-holder prevails. The common solution is to negotiate a cross-licensing agreement and share royalties.

It remains to be seen whether HGS will actually win a patent on the CCR5 sequence. If it does, several other teams of biologists will be disappointed. But Goldstein says that “for 100 years, chemists have known that getting a [composition of matter] patent on a compound is the key.” And he adds that it's time for biologists to wake up and “discover the patent system in all its glory.”

3. Mathematics

# In Mao's China, Politically Correct Math

1. Barry Cipra

SAN DIEGO—Karl Marx may be best remembered for inspiring the 20th-century revolutions in Russia and China. But during another upheaval, the Cultural Revolution in China during the 1960s and 1970s, his little-known musings on calculus may have saved mathematics.

According to Joseph Dauben, a historian of mathematics at the City University of New York, mathematicians in China seized on Marx's comments about “dialectical” processes in mathematics, along with related passages in the writings of Chairman Mao, to justify research activity that might otherwise have been denounced as a decadent, imperialist abstraction. They did so, Dauben said here at the joint meetings of the American Mathematical Society and the Mathematical Association of America, with the help of a highly abstract theory imported from the capital of Western imperialism, the United States.

The starting point for Dauben's account, which Chinese mathematicians corroborate, is Marx's own fascination with the interplay of thesis and antithesis—the dominant process in history, as he saw it—in certain mathematical procedures. Taking a limit, for example, entails thinking of a variable as both zero (thesis) and nonzero (antithesis—or possibly the other way around). “Marx regarded the analysis of the derivative, for example, as the analysis of a dialectical process, as the negation of a negation,” notes Dauben. Mao, too, commented favorably on the ideological implications of mathematics, linking the “internal contradictoriness” of positive and negative numbers with the paramount importance of motion and incessant, revolutionary change.

So when revolutionary change threatened intellectual endeavors during the Cultural Revolution, Dauben says, mathematicians in China took refuge in research that arguably carried Marx and Mao's stamp of approval. Their safe haven, he says, lay in an approach to calculus known as nonstandard analysis, which epitomizes the mathematical qualities that appealed to Marx and Mao. The method, developed in the early 1960s by Abraham Robinson at the University of California, Los Angeles, uses sophisticated principles of mathematical logic to create a model of the real number system that includes infinitely large and infinitely small numbers along with such familiar values as 1, 2, √5, and π.

The model's infinitely small numbers provide a rigorous basis for the original concept of vanishingly small increments that Newton called “fluxions” and Leibniz labeled “infinitesimals”—they put flesh on what Bishop Berkeley, an early critic of calculus, derided as “the ghost of a departed quantity.” As such, Robinson's theory gives a precise meaning to some of the intuitions that mathematicians and physicists bring to the study of functions, and it has led to solutions to a handful of problems that had eluded standard approaches. But it hasn't caught on widely, at least in the United States.
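To see the flavor of the idea (a standard textbook illustration, not drawn from Dauben's talk), consider how Robinson's framework computes the derivative of f(x) = x². One divides by a nonzero infinitesimal ε and then takes the “standard part,” written st(·), which discards the infinitesimal remainder:

```latex
f'(x) = \operatorname{st}\!\left(\frac{(x+\varepsilon)^2 - x^2}{\varepsilon}\right)
      = \operatorname{st}\!\left(\frac{2x\varepsilon + \varepsilon^2}{\varepsilon}\right)
      = \operatorname{st}(2x + \varepsilon)
      = 2x.
```

The increment ε is treated as nonzero when one divides by it and as negligible when the standard part is taken—exactly the zero/nonzero interplay that, as Dauben recounts, Chinese mathematicians could present as a dialectical synthesis.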

For Chinese mathematicians, though, Robinson's extension of the standard real numbers could be viewed as the dialectical synthesis of zero and nonzero—a use of the theory that Dauben learned about from Chinese mathematicians while he was writing a biography of Robinson. Nonstandard analysis provided “a means of reinterpreting the infinitesimal calculus within a materialist framework, to justify and promote their own mathematical study,” Dauben says. “This saved many of them from being shipped out to the countryside.”

Marx's mention of infinitesimals “made nonstandard analysis seem more acceptable than other fields to the authorities at the time,” agrees Renling Jin, a nonstandard analyst at the University of Wisconsin, who grew up in China during the Cultural Revolution. Although he was too young then to witness what was going on in the Chinese universities, he says it is “very likely” that many mathematicians held on to their jobs by being politically correct à la Mao.

Interest in Robinson's theory led to an all-China symposium on nonstandard analysis in 1976. And although the revolutionary fervor of those days is long past, enthusiasm for nonstandard analysis remains high, as indicated by meetings in 1984, 1987, 1989, and 1996. Most of the research presented at those meetings has appeared only in Chinese, Dauben notes. But he thinks it's only a matter of time before its influence will be felt in the West. As Mao put it (albeit in a different context), “We must encourage our comrades to think, to study the method of analysis, and to cultivate the habit of analysis.” Even bad times, it seems, can give birth to good—if nonstandard—mathematics.

4. Extinctions

# Cores Document Ancient Catastrophe

1. Richard A. Kerr

Last week, cores of ancient sea-floor sediment made a splash in the media when a first look at deep-sea samples unloaded from the drill ship JOIDES Resolution revealed a layer of debris ejected from the great meteorite impact 65 million years ago. The cores, from off the U.S. southeast coast, were heralded by some as proof of the impact's potency. But researchers hardly needed proof beyond the 180-kilometer crater itself, identified nearly 5 years ago (Science, 14 August 1992, p. 878); for them, the real controversy is not whether the impact happened but whether it caused all or only a few of the extinctions that took place at the end of the Cretaceous period, 65 million years ago. And while public attention spotlighted the Resolution cores, another group of paleoceanographers has already retrieved—and analyzed—a similar core that they say convicts the impact of slaughtering most of the extinction's marine victims.

In a core drilled from a former seabed that now lies high and dry in southern New Jersey, Richard Olsson of Rutgers University and his colleagues in the New Jersey Coastal Plain Drilling Project found that many species of microfossils—remains of one-celled organisms such as foraminifera, nannoplankton, and dinoflagellates—flourished right up to the debris layer, then vanished. “It would be very hard to argue now that the impact did not occur precisely at” the time of the extinctions, says Olsson.

The chronology, Olsson says, should buttress earlier records, some of which had been disturbed, chemically altered, or partially eroded. Still, there's sure to be debate as other scientists get a look at both the New Jersey and the deep-sea cores. The New Jersey results were only just submitted to Geology, and the Resolution crew “just got off the boat,” notes paleoceanographer Gerta Keller of Princeton University. “Any scientist will have to be skeptical” until the data become public.

Already, the cores establish the U.S. East Coast as a rewarding place to study the effects of the impact, which struck several thousand kilometers to the southwest on the Yucatán coast. Closer to the crater, around the Gulf of Mexico and the Caribbean, the sea-floor record is a jumble of victims, survivors, and putative impact debris, perhaps because it was scrambled by giant tsunamis rushing out from the shallow-water impact (Science, 11 March 1994, p. 1372). “The effects [of the impact] were so large—boulders were moved around in some places—that there has been some uncertainty as to when the extinctions were in relation to the geologic effects,” notes Kenneth Miller of Rutgers, chief scientist of the New Jersey drilling team. The new cores, which are farther from the impact and so undisturbed, should help remedy the dearth of convincing records.

Miller and his colleagues retrieved their core last November when they used a modest truck-mounted rig to drill in Bass River State Park just north of Atlantic City, New Jersey. Like the deep-sea cores, this core has each layer of sediment in the expected order, as indicated by each interval's distinctive microfossils. Between the last denizens of the Cretaceous and the few survivors of the subsequent Tertiary period are 6 centimeters of sand-size spherules of now-solidified melt: debris that splashed out of the crater while white-hot from the impact. The geologic instant of the catastrophe is so well preserved, says Olsson, that each spherule at the base of the impact layer can be seen to have left its own depression in the soft Cretaceous mud it settled on. “You can't get much finer physical resolution than that,” says paleoceanographer Steven D'Hondt of the University of Rhode Island, who has seen the Geology manuscript. “It's impressive.”

Olsson and his colleagues find that the denizens of the latest Cretaceous disappear precisely at the debris layer, while in the Tertiary new species appear thousands of years after the impact layer. The record “establishes a unique tie between ballistic ejecta from the Chicxulub crater and the extinction of marine organisms,” says Miller, who concludes that the impact caused all the extinctions. That work will have to be confirmed by others, but between the New Jersey core and the three returned by Resolution, there should be plenty of slices of impact debris to go around.

5. Particle Physics

# Deep Within the Proton, a Flicker of New Physics?

1. James Glanz

For weeks, tantalizing rumors have been leaking out of the Deutsches Elektronen-Synchrotron in Hamburg, Germany. Now, DESY researchers have raised the curtain on what they have been seeing, and the speculation has shifted to a new plane. Last week, at a DESY seminar, researchers from the two particle detectors on DESY's HERA accelerator reported that 3 years of smashing positrons—the antimatter counterpart of electrons—and protons together at high energy have produced a handful of collisions too “hard,” or violent, to be easily explained within the current theory of the fundamental structure of matter, called the Standard Model. And now theorists, who have long speculated about particles and forces beyond the Standard Model, are discussing some tempting possibilities.

In the most dramatic one, the excess could be the first hint of a particle called a leptoquark—long a topic of speculation—that might combine the properties of quarks, the building blocks of protons, and leptons, the particle family that includes positrons and electrons. Or it might be the track of another beast called a stop, the hypothetical counterpart of the top quark in a family of theories called supersymmetry. Then again, it could be a sign that quarks, the presumably pointlike building blocks of protons and neutrons, are actually made up of smaller pieces. And there are a host of more mundane possibilities as well—including a 1% to 7% chance that the signal is due to statistical fluctuations, says Bruce Straub of Columbia University, coordinator of the “exotics” experimental group on HERA's ZEUS particle detector.

The excess of hard collisions, in which the positron rebounds nearly straight back from the collision, is far too small to let physicists distinguish among the many possible explanations. As Herbi Dreiner of the Rutherford-Appleton Laboratory in the United Kingdom puts it, “I'm excited, but the data don't tell you at all what it is.” The results “are not a clear indication that the Standard Model has been overturned,” agrees Harry Weerts of Michigan State University, a spokesperson for the D0 particle detector at the Fermi National Accelerator Laboratory in Illinois. “On the other hand,” says Weerts admiringly, “that's a pretty nice ‘fluctuation’ they've got there.”

In HERA's 6.3-kilometer ring, buried beneath the outskirts of Hamburg, bunches of protons and positrons whirl in opposite directions and collide with nearly a trillion electron volts of energy. Two detectors, ZEUS and H1, each staffed by a multinational collaboration of hundreds of physicists, detect and analyze the collisions. Most are glancing, says Lothar Bauerdick, an experimentalist in the ZEUS group, but sometimes the positron “is really violently scattered back.” Those deeply penetrating, hard collisions function like a fantastically powerful microscope, he says, probing the proton's structure down to a thousandth of its overall size and laying bare the interactions between positrons and quarks during their closest encounters.
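Bauerdick's "microscope" claim follows from the de Broglie relation: the length scale a collision can resolve is roughly ħc divided by the momentum transferred. A back-of-the-envelope sketch (the momentum-transfer value here is an assumed illustration, not a figure quoted by the experiments):

```python
# Rough resolution of a deep-inelastic collision: the probed length
# scale is about hbar*c / Q, where Q is the momentum transfer.
HBAR_C_GEV_FM = 0.1973  # hbar*c in GeV * femtometers (standard constant)

def probe_scale_fm(q_gev):
    """Length scale (in fm) resolved by a momentum transfer q (in GeV)."""
    return HBAR_C_GEV_FM / q_gev

PROTON_RADIUS_FM = 0.87           # proton charge radius, roughly
scale = probe_scale_fm(200.0)     # assume Q ~ 200 GeV for a hard HERA event
print(scale)                      # ~0.001 fm
print(scale / PROTON_RADIUS_FM)   # ~1/1000 of the proton's size
```

With a few hundred GeV of momentum transfer, the resolved scale comes out near a thousandth of the proton's radius, matching the figure Bauerdick cites.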

Because interactions between quarks and leptons are supposed to be relatively feeble in the Standard Model—involving only the weak nuclear force and electrostatic repulsion—such hard collisions should be rare. Indeed, the ZEUS group would have expected just one such event over twice as much running time as HERA has accumulated, says Bauerdick. “You must already be very lucky to get two” by statistical chance, says Bauerdick, “and we see five events.” And there is no mistaking the signature of these collisions in the ZEUS detector, he says: “They really look spectacular. The [positron] gets an enormous momentum from the quark.”
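The tension Bauerdick describes can be made concrete with Poisson statistics. If the Standard Model predicts a mean of about 0.5 such events in the data taken so far, the chance of seeing five or more by luck alone is tiny. This is a textbook illustration with round numbers, not the collaborations' full analysis, which must also account for the many places an excess could have shown up—hence Straub's quoted 1% to 7%:

```python
import math

def poisson_tail(mean, n):
    """Probability of observing n or more events when `mean` are expected."""
    below = sum(math.exp(-mean) * mean**k / math.factorial(k) for k in range(n))
    return 1.0 - below

# ZEUS expected ~1 event in twice the running time, i.e. ~0.5 events here.
p = poisson_tail(0.5, 5)
print(p)  # roughly 2e-4: five events from chance alone would be very unlucky
```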

The second detector, H1, sees a similar anomaly: seven events rather than the one the Standard Model predicts, says Yves Sirois of the École Polytechnique in Paris and the H1 collaboration. When the two independent data sets were merged for the first time last week, he says, “the combination seemed to give a signal which is at least comparable to or better than a single experiment”—a relief for both groups. Still, the events seemed to be showing up in slightly different energy ranges in the two detectors, a discrepancy that Straub calls “somewhat confusing.”

As to the reason for the excesses, says Sirois, “theorists will speculate, and this will be interesting.” The initial blue-skying centers on previously unseen particles like stops and leptoquarks. By wedding the two families of matter represented by the colliding proton and positron, such particles might explain the extra apparent collisions. If a leptoquark were spawned by a colliding positron and proton and then quickly decayed into the same particles, for example, the positron might be flung almost straight backward, as if the original particles had violently crashed together. Stop formation could also provide a temporary resting point for the energy of the quarks and positrons, making hard collisions look more frequent.

Detecting these particles “would be a revolution that would change everybody's way of thinking,” says Michael Barnett, a particle theorist at Lawrence Berkeley National Laboratory. By melding two disparate forms of matter, for example, leptoquarks “would give tremendous insight into what's missing in the Standard Model” and point to a “unified” theory that would transcend it, he says. But Barnett adds that, based on the few recorded events, the case for such an upheaval is “not statistically compelling.”

Other theorists note that the hard events might not be caused by a particle at all: They could conceivably point to unexpected structure—hard “pits,” for example—within quarks, which are taken to be pointlike in the Standard Model (Science, 9 February 1996, p. 758). Or they might call for a radically new picture of how many short-lived “virtual” quarks are rattling around inside a proton at a given instant. Most of the explanations being kicked around “would be a big surprise,” says Frank Wilczek of the Institute for Advanced Study in Princeton, New Jersey. “It's not what anybody predicted.”

There is only one way to resolve these mysteries, says John Dainton, of the University of Liverpool in the United Kingdom and the H1 team at DESY: “We just need very, very much more data.” These data should continue to pour in from HERA, and later from Fermilab, whose Tevatron accelerator is scheduled to stoke up again in 1999 after a major upgrade. Until the evidence one way or another becomes overwhelming, the speculation will continue to swirl.

6. Paleoceanography

# Did a Blast of Sea-Floor Gas Usher in a New Age?

1. Richard A. Kerr

About 55 million years ago, the environment went topsy-turvy and evolution took a leap. A host of modern mammals—from primates to rodents—abruptly appeared in the fossil record of North America. At the same geologic moment, near the end of the Paleocene epoch, tiny, shelled creatures called foraminifera suddenly went extinct at the bottom of the sea. And various temperature indicators record a sudden burst of warming on both sea and land, while isotopic signals in forams and mammal teeth suggest a sharp shift in the global carbon cycle.

Now, a new mathematical model points to a single explanation for all these events: a giant release of methane gas from the ocean. In the March Geology, paleoceanographer Gerald Dickens, water chemist Maria Castillo, and geochemist James Walker of the University of Michigan use a model of the global carbon cycle to show how a gradually warming ocean might have altered its circulation and triggered a 10,000-year-long burst of methane from the sea floor. Because methane and its oxidation product, carbon dioxide, are greenhouse gases, such a release would have turned the ocean warming into a pulse of greenhouse heating that helped alter the course of evolution on land.

Although the Michigan gas-blast calculations don't prove this scenario, they “confirm what a lot of us had been suspecting,” says paleoceanographer James Zachos of the University of California, Santa Cruz. “That's the first time someone has actually done a numerical analysis [of the methane hypothesis]. The results match what we see in the sedimentary record.” The new plausibility of the methane mechanism will bolster efforts to pin down what happened in the sea 55 million years ago and spur pursuit of other possible gas bursts, says Zachos.

For several years, paleoceanographers have suspected that a belch of climate-altering methane from the oceans could unite their pictures of what happened on land and in the sea. In the ocean, sea-floor forams suffered a mass dying near the end of the Paleocene, and researchers found a striking coincidence between the extinctions and a warming of bottom waters by several degrees, as indicated by a shift in the oxygen-isotopic composition of foram shells. They suggested that this abrupt shift was due to a change in ocean circulation in which warm equatorial waters grew salty and dense enough to sink to the bottom and displace the existing cold, polar bottom water. In response, many of the deep-dwelling forams just winked out, researchers concluded.

On land, temperature indicators such as oxygen isotopes in surface-dwelling forams in the sea and changing leaf shape on land implied a burst of warming. Paleontologists suspect that it was this warming that spurred the explosion of North American mammals, perhaps by opening up a high-latitude land route to allow immigration from another continent (Science, 18 September 1992, p. 1622).

The thread linking land and ocean was a spike in the relative abundance of the lighter carbon isotope, carbon-12. In the ocean, researchers have detected the spike in the skeletons of the surviving forams and, on land, in fossil mammal teeth dating from the precise geologic moment when the new mammals appeared. Nothing but a burst of methane, it seemed, was able to produce such a large and abrupt isotopic shift. “If you go through the usual suspects, none of them works,” says paleoceanographer Ellen Thomas of Wesleyan University in Middletown, Connecticut. Erosion of carbonate rocks on land would be too slow, for example, and the carbon dioxide of volcanoes is not light enough.

The lightest carbon on Earth is in methane produced by bacteria in low-oxygen environments, such as bogs and sea-floor muds. And some sea-floor methane forms a vast reservoir that—in theory—could be released catastrophically, because it is frozen into buried water ice that traps methane molecules within the cages of its crystalline structure. In this hydrate form, 15 trillion tons of methane are thought to lie buried beneath the sea floor today. Paleoceanographers realized that the sudden switch to warm bottom waters like those of the Paleocene might melt enough hydrate to release methane to the ocean and atmosphere. That, in turn, could create the carbon-isotopic spikes, as well as the burst of greenhouse warming.

To see if such arguments held up quantitatively, Dickens, Castillo, and Walker adopted a mathematical model developed by Walker and James Kasting of Pennsylvania State University to calculate the fate of anthropogenic carbon dioxide. Given the geologically instantaneous transformation of methane into carbon dioxide, the Michigan group introduced an extra 160 cubic kilometers of carbon dioxide per year into the model during a period of 10,000 years, which was the rise time of the isotopic spike. They then watched for a million years as light carbon built up in the model's atmosphere, mixed into the oceans, and reacted with sediments and with rock on land.

In the end, the model behaved much as the world did near the end of the Paleocene. Assuming that Paleocene hydrates were as voluminous as today's, release of just 8% of the total—less than the bottom-water warming is estimated to have caused—was enough to lighten the ratio of carbon isotopes in the ocean and atmosphere by 2.3 parts per thousand, compared with the 2.5-parts-per-thousand change recorded by forams. And the isotope signal in the model slowly faded over 200,000 years, just as observed in the rock record, presumably as dissolution of sea-floor carbonate and weathering of rocks on land removed the light carbon dioxide. The model's atmospheric carbon dioxide peaked at a concentration that would have warmed the surface by 2°C, a good part of the observed 4°C warming.
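The isotopic side of the argument reduces to simple mass balance: mixing a slug of very light carbon into the ocean-atmosphere system dilutes its carbon-13 content in proportion to the relative masses. A minimal sketch, using an assumed reservoir size and canonical isotopic values (the Michigan model itself tracks reservoirs and fluxes in far more detail):

```python
def delta13c_shift(reservoir_gt, release_gt, delta_reservoir, delta_release):
    """Change in a reservoir's delta-13C after mixing in released carbon."""
    total = reservoir_gt + release_gt
    mixed = (reservoir_gt * delta_reservoir + release_gt * delta_release) / total
    return mixed - delta_reservoir

# Assume ~30,000 gigatons of exchangeable ocean-atmosphere carbon at 0 per mil,
# and release 8% of a 15,000-gigaton hydrate reservoir at -60 per mil
# (microbial methane is isotopically very light).
shift = delta13c_shift(30_000, 0.08 * 15_000, 0.0, -60.0)
print(shift)  # about -2.3 per mil, the size of the spike the forams record
```

The point of the exercise is that a release of roughly a thousand gigatons of hydrate methane is the right order of magnitude to produce the observed spike; the assumed reservoir size sets the exact percentage.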

Given the model's performance, “I really like the idea,” says Thomas, which leads her to wonder if there were other gas bursts. If ocean circulation were poised delicately enough to switch once, says Thomas, it might have switched many times. The subsequent Eocene epoch was exceptionally warm and supposedly climatically tranquil, but the ocean record of it is sketchy. The primates and sea-floor forams, having already gone through a sudden warming, might not have reacted, but smaller gas bursts might have played other mischief. Thomas and others will be seeking the signs.

7. Archaeology

# Tooling Around—Dates Show Early Siberian Settlement

1. Constance Holden

Human evolution is usually considered a tropical affair, a story that unfolded in the mild African climate starting perhaps 2.5 million years ago. Most anthropologists have thought that humans didn't venture into bitter subarctic regions until at most 30,000 years ago. But a Report on page 1281 would stretch human history at the edge of the Arctic close to a mind-boggling 300,000 years ago. This “puts humans in the far north way earlier than anyone had anticipated,” and suggests that these ancient people were surprisingly resourceful, says archaeologist Rob Bonnichsen of Oregon State University in Corvallis.

The evidence comes from a site on the Lena River called Diring Yuriakh, which lies at 61 degrees north latitude—the same as Anchorage, Alaska. Geoarchaeologist Michael Waters of Texas A&M University and geologists Steven L. Forman and James M. Pierson of the University of Illinois, Chicago, came up with the startlingly early dates using a technique called thermoluminescence (TL) on sediments surrounding the site's thousands of stone fragments and artifacts. The new dates, if they hold up, could trigger fresh speculation about the settlement of northern regions and even of North America. But not everyone is convinced. Although dating experts say the work looks sound, anthropologists such as Stanford University's Richard Klein are reluctant to rewrite human history on the basis of a single data point. That any of the fragments are humanmade is “unlikely,” says Klein.

Diring Yuriakh was discovered in 1982 when Russian archaeologist Yuri Mochanov, director of the Lena River Archaeological Expedition, came upon a 10,000-year-old burial site. Mochanov dug deeper and found an occupation surface containing chipped pebbles that looked far more primitive. Using bulldozers to unearth the stratum, he created a dig about 2 kilometers square—what Waters calls “the biggest excavation I've ever seen.” About 4000 artifacts have been uncovered, including many simple choppers and scrapers.

While similar finds have been dismissed by some as nonhumanmade “geofacts,” some archaeologists who have been to Diring say the site is genuine. Bonnichsen, for one, believes the stones are indeed homo-made. Archaeologist Robert Ackerman of Washington State University in Pullman, another visitor, says he had “no difficulty recognizing this material as artifacts.”

But there has been no consensus about the age of the site because there are no materials, such as bones or volcanic matter, suitable for other dating methods such as uranium-thorium dating. Mochanov has argued that the tools may hark back to 2 million years ago, in part because they resemble the 2.5-million- to 1.6-million-year-old Oldowan stone tools found in East Africa. But many anthropologists dismissed the idea because all other high-latitude sites are much younger. For example, the stone tools and animal bones found in the Mousterian cave sites in southern Siberia are perhaps 60,000 years old, says Ackerman. Farther north, the oldest sites are those Mochanov has found near the Lena, dated by radiocarbon methods to 10,000 to 30,000 years ago.

To settle the controversy, Mochanov invited other scientists to test the Diring site for themselves, and Waters took him up on it. Because the only raw material for dating was rock and sediment, dating expert Forman used TL, which gauges how long quartz-containing rock has been buried from the number of electrons trapped in the defects of the quartz crystals. These electrons build up at a regular rate over time but are wiped out by sunlight. Samples of windblown sand covering the artifacts yielded conclusive dates ranging from 240,000 to 366,000 years, says Waters.
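The arithmetic behind a TL date is straightforward: the trapped-electron population records the total radiation dose absorbed since burial, and dividing that "paleodose" by the local annual dose rate gives an age. A sketch with hypothetical but plausible numbers—the actual Diring doses and dose rates are not given in this article:

```python
def tl_age_years(paleodose_gy, dose_rate_gy_per_kyr):
    """Burial age implied by a measured paleodose and environmental dose rate."""
    return paleodose_gy / dose_rate_gy_per_kyr * 1000.0

# Hypothetical values: an 800-gray paleodose in sand absorbing ~3 Gy per
# thousand years implies burial about 267,000 years ago -- squarely within
# the 240,000- to 366,000-year range Waters reports.
age = tl_age_years(800.0, 3.0)
print(age)
```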

TL dating has pitfalls, however. One is that the sediment's clock must be thoroughly zeroed by exposure to sunlight before burial. That is usually not seen as a problem for wind-deposited sediments, which get plenty of light before burial. But dating expert Jack Rink, a geologist at McMaster University in Ontario, Canada, says he would like to see the dates verified with yet another method—infrared stimulated luminescence dating of the feldspar contained in quartz grains—to be sure the TL clock was properly set.

Other experts are persuaded that the work is reliable. “I have zero doubts,” says Bonnichsen. That leaves him and others wondering who these ancient settlers could have been. With no human bones to go by, it's anybody's guess. “At 200,000 years [or beyond], you can put whoever you want there. It depends on which theory of human origins you believe in,” says Klein. Ackerman says it could have been Homo erectus. These early humans are known to have existed from about 1.5 million to perhaps as recently as 30,000 years ago. They had fireplaces and built huts—and were probably intelligent enough to survive in a cold climate, he says, noting that the nearest known human remains from this time are 400,000-year-old H. erectus fossils from the Zhoukoudian site in northern China. Other archaeologists, such as Alan Bryan of the University of Alberta, Edmonton, think the Siberian toolmakers were more likely a transitional post-erectus form known as archaic Homo sapiens.

But whoever the toolmakers were, they raise the fascinating idea that premodern humans may have been more intelligent and resourceful than scientists had thought. That notion gains support from a report in this week's issue of Nature, describing sophisticated 400,000-year-old wooden spears from Schöningen, Germany.

To some researchers, having even a limited population of this antiquity in Siberia raises the question of when people migrated from Asia to the Americas. Bonnichsen goes so far as to say that the new date “sets the stage” for the migration to have occurred as long ago as 300,000 years. But most others are doubtful, because there's no undisputed evidence of humans in the Americas before perhaps 11,000 or 12,500 years ago (see p. 1256).

Indeed, the gap between the Diring date and those of other sites is simply too wide for many archaeologists to accept. Ted Goebel of Southern Oregon State College in Ashland says that the hominids of 250,000 years ago were not capable of surviving on the limited resources of the subarctic. It “doesn't fit the patterns that I see elsewhere in Northern Eurasia, where the first time you see humans above 60 degrees” is about 25,000 years ago, he says. Waters himself acknowledges that “basically Diring stands out there as an enigma.” But whatever the final verdict on the Diring tools, says Bonnichsen, “it's neat stuff.”

8. Cognition

# Scientists Probe Feelings Behind Decision-Making

1. Gretchen Vogel

Intuition may deserve more respect than it gets these days. Although it's often dismissed along with emotion as obscuring clear, rational thought, a new study suggests that it plays a crucial role in humans' ability to make smart decisions.

Neuroscientists Antoine Bechara, Hanna Damasio, Daniel Tranel, and Antonio Damasio of the University of Iowa College of Medicine in Iowa City set out to shed light on the role of intuition and emotion in normal decision-making by studying a group of brain-damaged individuals who seem unable to make good decisions. Some drift in and out of marriages; others squander money or inadvertently offend co-workers. On page 1293, the researchers unveil what seems to be the missing element in their decision-making. The patients lack intuition—that ability to know something without conscious reasoning—which many cognitive psychologists think may be based on memories of past emotions. “These findings are really exciting,” says psychologist Stephen Kosslyn of Harvard University. “Emotion apparently is not something that necessarily clouds reasoning, but rather seems to provide an essential foundation for at least some kinds of reasoning.”

Psychologists have long known that when people make decisions, whether it's choosing whom to marry or which breakfast cereal to buy, they draw on more than just rational thought. Indeed, says Harvard psychologist and author Howard Gardner, the new work “fits in with an impressive heap of individual studies” showing that people rely on a variety of emotional cues—ranging from a general sense of déjà vu to specific feelings like fear—when making decisions.

The Damasios are well known for their registry of more than 2000 brain-damaged patients who participate in experiments designed to unravel how the brain works by determining what goes wrong when parts are missing (Science, 18 May 1990, p. 821). For several years, they have been trying to discover why patients with lesions of the ventromedial prefrontal cortex—the area of the brain right above the eyes—can perform well on intelligence-quotient and memory tests, but when faced with real-life decisions, they at first waffle, then make unwise choices. The same patients also display little emotion, and the team wondered if emotional—rather than factual—memories might be missing.

To figure out what is going wrong with these patients, and, by extension, what goes right in uninjured brains, the researchers asked patients and a group of normal controls to perform a gambling task. Each subject was given $2000 and four decks of cards. They were told to turn over cards from any deck and to try to win as much money as possible. Although the subjects didn't know it, there were two types of decks. Most cards in the two “bad” decks gave the subjects a reward of $100, although a few told subjects to hand over large sums of money. Most cards in the two “good” decks, by contrast, carried rewards of only $50, but the penalty cards were less severe, too. In the long run, choosing cards from the bad decks resulted in an overall loss, while the good decks gave an overall gain. The task was “designed to resemble life,” in its uncertainty, risks, and rewards, says Antonio Damasio. The players did not know when a money-losing card would arise in a deck and had no way to know when the task would end.
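The payoff structure can be sketched in a few lines. The exact dollar schedule isn't spelled out in this article, so the penalty amounts below are assumptions for illustration; what matters is that the bad decks pair large per-card rewards with occasional penalties big enough to produce a net loss, while the good decks do the reverse:

```python
def net_per_ten_cards(reward, penalties):
    """Net winnings over a run of ten cards from one deck."""
    return 10 * reward - sum(penalties)

# Assumed schedules: each ten-card run of a "bad" deck pays $100 per card
# but hides $1250 in penalties; a "good" deck pays $50 per card with only
# $250 in penalties.
bad = net_per_ten_cards(100, [1250])
good = net_per_ten_cards(50, [250])
print(bad, good)  # bad decks lose $250 per ten cards; good decks gain $250
```

Under this kind of schedule, a player chasing the biggest per-card rewards steadily loses money—which is exactly the trap the brain-damaged patients kept falling into.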

Previous work had shown that the brain-damaged patients were just as bad at choosing between good and bad decks as they were at life decisions. While normal subjects tended to pick from the good decks as soon as they had turned over a large penalty card, the patients kept opting for cards from the bad decks. The earlier work further hinted that emotion played a role. During the task, the patients didn't exhibit much stress or nervousness, as measured by skin conductance response (SCR)—a sort of microsweating that accompanies changes in emotion—even after they'd turned over several big penalty cards. By contrast, once normal players had encountered penalties, they began showing large SCRs just before choosing from a bad deck.

In the current study, the team tried to determine whether the emotional response and the card choices were based on conscious reasoning by introducing a new element into the task: They interrupted the game periodically to ask players what they thought was going on. Interestingly, the normal players began picking more often from the good decks and showing high SCRs well before they could articulate to the researchers that picking from the good decks seemed to be a better long-term strategy. And although three of the 10 normal subjects never had more than a hunch that some decks were good and some bad, they still picked more cards from the good decks and showed high SCRs before turning over bad-deck cards.

The brain-damaged patients, on the other hand, never expressed a hunch that some decks seemed to be riskier. Further, even after they had a theory as to which decks were bad, they continued to choose from them part of the time. (When asked to explain their choices, Damasio says, the patients said they thought it was more exciting to play from the risky decks, or that one could never tell when the rules might change.)

Although not all the results were statistically significant, the authors say the overall findings suggest that in normal people, nonconscious emotional signals may well factor into decision-making before conscious processes do. Antonio Damasio believes the ventromedial prefrontal cortex is part of a system that stores information about past rewards and punishments, and triggers the nonconscious emotional responses that normal people may register as intuition or a “hunch.” Read Montague, a neuroscientist at Baylor College of Medicine in Houston, agrees: “Something has collected the statistics … and starts nudging behavior all before [the subjects] know what is happening.” But when that ability is gone, says Gardner, the person has no “early-warning system” to guide their reasoning and, in the face of uncertainty, has difficulty making any choice at all.

Damasio stresses that the early-warning system does not act alone. Humans, after all, are set apart from animals by their ability to reason, he says. Still, “human beings are also the sum of all their previous emotional experiences of rewards and punishments”—experiences from which we learn, it seems, whether we know it or not.