News this Week

Science  07 Feb 1997:
Vol. 275, Issue 5301, p. 751

  1. Molecular Biology

    Obesity Sheds Its Secrets

    1. Trisha Gura
    1. Trisha Gura is a free-lance writer based in Cleveland.

    While the hormone leptin has dominated obesity research, new findings suggest it has help in regulating body weight—results that may also provide potential avenues for obesity therapies

    Every New Year, people make the same resolution. Glutted by holiday food and drink, they vow to shed their extra pounds by embarking on some sort of monastic weight-loss venture. And every year, most dieters fail. Figures from the National Institutes of Health show that half of the people living in the United States are overweight and one-third are clinically obese, weighing at least 20% more than they should—a problem that is both financially and medically costly. Not only do dieters spend $30 billion annually on their weight-loss endeavors, but those who remain obese are at high risk for diabetes and cardiovascular diseases, including hypertension, all of which can lead to premature death.

    But in universities and biotech labs, researchers are on the trail of a different and perhaps more effective kind of weight-loss remedy—one aimed directly at the mechanisms by which the body controls its weight. In a flood of recent reports, investigators have announced telltale clues to how the body regulates appetite, metabolism, and activity. The first in the chain of findings—the discovery of the fat-busting hormone leptin 2 years ago—raised extravagant hopes that have yet to be realized. “Unfortunately, news reports let people believe that they could sit on the couch, eat whatever they want, inject themselves with leptin, and look like Cindy Crawford,” recalls Art Campfield, who heads up the leptin project at drug industry giant Hoffmann-La Roche in Nutley, New Jersey.

    A shade of difference.

    The yellow agouti mouse, while fatter than a normal mouse (bottom photo, right), is not as fat as the ob/ob mutant, shown at top with its normal counterpart.

    While findings since then suggest that human obesity might not be treated so simply with leptin itself, in the past year, researchers have learned much more about how and where the hormone works in the body—information that may help them design drugs that mimic leptin's action. And the field continues to expand faster than a frustrated dieter's waistline as researchers have identified several additional molecules involved in weight control, which appear to link up with leptin in a complicated web of biochemical and brain-signaling circuits. Among these are neuropeptide Y—which jacks up appetite and slows metabolism when a person's fat stores seem to be decreasing—and a family of peptides called the melanocortins, whose receptors appear to have the opposite effect—damping down appetite in response to increased fat stores.

    Defects in some of these molecules or their targets might help explain human obesity, because most overweight people have high levels of leptin itself. But even if they don't, their discovery might aid in the design of new drugs that can trim body fat. Indeed, researchers are already working on drugs that block a brain receptor of the melanocortins. “Each experiment seems to open up more and more opportunities,” says Campfield. “There is so much work to do, and it will be going on like this for a long time.”

    A hormonal “lipostat”

    Leptin still remains at the center of it all, however. Originally identified late in 1994 after an 8-year search by Jeffrey Friedman's team at Rockefeller University in New York City, leptin is a 16-kilodalton protein produced by the obesity (ob) gene of mice. Animals with defects in both copies of this gene act as if they are in a state of perpetual starvation, unable to reproduce, stay warm, grow normally, and, of course, restrain their appetites. Thus, they get grossly fat, weighing as much as three times more than normal animals.

    Friedman's group and others traced the source of leptin to the body fat itself. The investigators found that fat cells called adipocytes manufacture leptin and spew it into the bloodstream, so that leptin levels in the bloodstream normally increase as the fat deposits grow. “When you get fat, mainly what happens is the adipocytes enlarge and store more lipids,” says Jeffrey Flier, a leptin researcher at Beth Israel Deaconess Medical Center and Harvard Medical School in Boston. “The amount of leptin is usually in proportion to how big the adipocytes get.”

    These findings led to the idea that leptin functions as a lipostat: When fat stores rise, adipocytes produce leptin, and the hormone in turn tells the brain that it's time to stop eating and increase activity levels. Conversely, when fat stores decline, leptin concentrations go down, too, and that signals the brain to try to counteract the weight loss with increased feeding and lowered activity.
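
    The lipostat described above is, in essence, a negative feedback loop, and a toy simulation can make that logic concrete. The sketch below is illustrative only: the units, rate constants, and linear responses are invented for the example, not drawn from the physiology reported here.

```python
# Toy "lipostat" feedback loop: adipocytes secrete leptin in proportion
# to fat mass; the brain responds by damping appetite and raising
# energy expenditure. All constants and units are invented for
# illustration; real leptin physiology is far more complex.

def simulate_lipostat(fat, days=400):
    """Step a crude daily energy balance; fat stores drift toward
    the level where intake and expenditure cancel out."""
    history = []
    for _ in range(days):
        leptin = 0.05 * fat                  # secretion scales with fat mass
        intake = max(0.0, 6.0 - leptin)      # more leptin -> less eating
        expenditure = 2.0 + 0.5 * leptin     # more leptin -> more burned
        fat += 0.5 * (intake - expenditure)  # daily energy balance
        history.append(fat)
    return history

# Starting overweight or underweight, the loop settles at the same
# equilibrium, where 6 - 0.05*fat equals 2 + 0.025*fat.
trajectory = simulate_lipostat(fat=150.0)
print(round(trajectory[-1], 1))
```

    In this toy model, a leptin-deficient animal like the ob/ob mouse corresponds to leptin being zero no matter how large the fat stores grow: the brake never engages, intake stays maximal, and fat climbs without limit.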

    Subsequent work on the receptor that receives leptin's signal in the brain supports this view of how the molecule works. A year after leptin's discovery, Louis Tartaglia's team at Millennium Pharmaceuticals in Cambridge, Massachusetts, and their colleagues at Hoffmann-La Roche found the receptor and cloned its gene in mice and humans. Shortly thereafter, three groups of investigators—Tartaglia's, Friedman's, and a third headed by Rudolph Leibel, also at Rockefeller—independently traced the obesity seen in another strain of mice, the so-called db (for diabetic) mouse, to mutations in the leptin-receptor gene (Science, 16 February 1996, pp. 913 and 994). Investigators have since mapped leptin-receptor mutations in two other grossly fat rodents—the corpulent rat and the fatty Zucker rat. All three types of animals presumably become fat because they can't respond to the hormone.

    In addition, the leptin receptor's location in the brain is consistent with the idea that the hormone plays a role in weight control. Denis Baskin at the Veterans Administration Puget Sound Health Care System in Seattle and others have found that the leptin-receptor gene is expressed in four main regions of the hypothalamus, including two—the arcuate nucleus and the paraventricular nucleus—that had previously been implicated as hot spots for the regulation of feeding and metabolism.

    But despite all the evidence implicating leptin and its receptor in weight control, mutations in their genes do not seem to be at fault in human obesity. “So far, no one has identified a human being with a mutated Ob receptor or mutated ob gene,” Campfield says. And while a study of Pima Indians reported in the February issue of Nature Medicine by a team led by Eric Ravussin of the National Institute of Diabetes and Digestive and Kidney Diseases suggests that low leptin levels may have contributed to weight gains by some members of this population, other investigators, including Jose Caro, then at Thomas Jefferson University in Philadelphia, have generally found that overweight people have high leptin levels in their bloodstream, not the low levels seen in the ob mice. The large fat stores in the people seem to be pumping out the hormone normally, which implies that leptin itself won't work as an obesity cure. Instead, something must be going wrong in obese people's response to the hormone, although the difficulty does not seem to be in their leptin receptors.

    One possibility is that leptin has problems making its way from the bloodstream into the brain. Michael Schwartz at the University of Washington Medical Center in Seattle and Caro found, for example, that the spinal fluid of obese people contains only slightly more leptin than that of their thinner counterparts, despite the fact that overweight individuals had, on average, a fivefold higher concentration of the hormone in their bloodstreams.

    Alternatively, the brain may lose its ability to respond to whatever leptin does get in. “Somewhere between the circulation and the brain, there is decreased sensitivity,” says Campfield. “We're trying to identify where the weak places are.” One place to look is among the other molecules that interact with leptin to control weight, such as the potent appetite-stimulating neurotransmitter called neuropeptide Y (NPY).

    Possible leptin partners

    Normally, NPY levels are held in check by leptin. Production of the appetite stimulator skyrockets, for example, in leptin-lacking ob/ob mice. And in October 1995, Thomas Stephens and his colleagues at Eli Lilly Co. in Indianapolis reported that giving the animals leptin suppresses NPY production, a finding confirmed a few months later by Schwartz's group. These results suggest, Schwartz says, “that if [the animals] don't have any leptin, the NPY system is completely unrestrained,” and that in turn leads to their obesity. Baskin's mapping of the brain locations of the leptin receptors fits neatly with this idea, because one of those locations, the arcuate nucleus, is a major site of NPY production.

    More recently, Richard Palmiter's group at the University of Washington provided more direct evidence that leptin works through NPY to regulate appetite. Early in 1996, the researchers had made a puzzling observation when they created a strain of mice in which the NPY gene was inactivated. Rather than being thin, as expected, these animals appeared normal in body weight, fat levels, and food intake.

    But results reported later in the year by Palmiter and his colleagues help resolve that puzzle (Science, 6 December 1996, p. 1704). When the investigators mated the normal-weight NPY knockout mice with the obese, leptin-lacking ob/ob mutants, they found, says Schwartz, that “a deficiency in NPY attenuated the defect in the ob/ob mouse.” The appetites, general metabolism, and fat levels of the double-mutant offspring were midway between those of normal and ob/ob mice. This result indicates that part—but not all—of the weight gain of the ob mouse results from the increased NPY production induced by the animal's low leptin levels.

    “I think it makes a strong argument that excessive NPY is at least a component of the obesity syndrome that results when you don't have leptin,” says Schwartz. And because the NPY defect didn't completely counteract the leptin defect in the double-mutant offspring, the finding also suggests that NPY isn't the only obesity regulator that responds to leptin, a circumstance that might explain why normal mice do just fine without NPY. “Obviously,” says Schwartz, “other things in the brain are also sensitive to leptin.”

    Two papers published just a month ago provide some indication of what those other things might be. Roger Cone and his colleagues at Oregon Health Sciences University in Portland and Dennis Huszar's team at Millennium and Hoffmann-La Roche simultaneously unearthed another obesity signal. This one comes from a group of molecules that normally regulate hair color—the melanocortins and their receptors.

    The agouti connection

    The story of the melanocortins' link to obesity actually began with mutant mice known as agouti, which carry a genetic defect that endows them with bright yellow fur. (The name agouti comes from the banded fur pattern of the South American rodent of the same name.) These mice also have an obesity problem that resembles that of many humans, as it doesn't show up until late in life and is often accompanied by diabetes, particularly in males.

    Researchers at Glaxo Wellcome in Research Triangle Park, North Carolina, cloned the agouti gene in 1992 and determined that its product is a 131-amino-acid protein produced by hair follicles. At the time, however, the Glaxo team, led by biochemist Bill Wilkison, couldn't figure out what a protein normally made by hair follicles had to do with obesity, although they did note that in the agouti mutants the gene is “on” all the time in every tissue in the body. It wasn't until after Cone's group cloned the first melanocortin receptor later that year that researchers had even an inkling of how agouti could affect body weight.

    Cone's group found that the agouti protein binds to a melanocortin receptor (MCR-1) located on skin cells called melanocytes. By doing so, it gets in the way of a melanocortin known as α melanocyte-stimulating hormone (α-MSH), which binds to the receptor and signals the melanocytes to produce black pigment. Normally, that inhibition comes on briefly to cause yellow banding in the mouse's fur. But because agouti is constantly turned on in the mutant mice, the MCR-1 receptor is always blocked and the animals are yellow instead of black. And Cone speculated that additional MCRs may be located in the brain, where they would have a different function: regulating weight gain and metabolism. “It's typical that hormone systems serve multiple physiological roles,” says Cone. “For example, NPY is also in the [brain] cortex affecting learning and other functions clearly unrelated to feeding.”

    Cone's team went on to confirm his speculation by showing that two MCRs—MCR-3 and MCR-4—are produced in the arcuate nucleus of the hypothalamus, a prime target of leptin action as well as a seat of NPY production. And in 1994, he and his graduate student Dongsi Lu reported that agouti can work on those brain receptors.

    That work led to Cone's most recent paper, published in Nature on 9 January. When Cone and his colleagues injected synthetic peptides mimicking melanocortins into the brains of both normal and ob mutant mice, they found that the molecules bind to MCR-4 and suppress feeding, even when the mice are given extra NPY to boost their appetites. At the other end of the spectrum, molecules that mimic the agouti protein and block the receptor can bolster the animals' appetites during the night or after a fast.

    Further evidence that the MCR-4 receptor is important in regulating weight comes from Huszar's team, working in collaboration with Campfield and others. In results that appeared in the 10 January issue of Cell, they found that mice in which MCR-4 had been knocked out mimicked the weight gain and other symptoms of the mutant agouti mice. “What is remarkable about this is that we got a phenotype that virtually mirrors the agouti effect,” says Huszar. “The agouti mouse itself is one of those happy accidents that points to an interesting pathway.” Cone agrees: “This finding all of a sudden puts melanocortin-producing neurons in the center of weight homeostasis and metabolism. That was never known before.”

    Still to be established is what ties, if any, MSH and its receptor have to leptin, although Cone's team and others have preliminary evidence suggesting that MCR-4 knockouts and ob/ob mice both have high NPY expression in a region of the hypothalamus called the dorsal medial nucleus, indicating that all three are connected. Human obesity might then be due to either overactivity of the NPY pathway or too little activity in the melanocortin branch, perhaps caused by as yet unidentified mutations—or to problems in other hormonal-system pathways to which leptin is now being linked (see sidebar).

    But even if all these systems are working properly in most overweight people, they might still provide a target for a weight-loss drug that exploits the pathways to lower appetite and raise metabolism. “I think this is a big deal,” Campfield says of the melanocortin work. “Every pharmaceutical company in the world is probably now getting the [MCR] receptors and screening them” for drugs that mimic melanocortins' effects for possible use in weight-reducing therapies.

    Part of the appeal of the MCRs lies in the kind of receptors they are. Unlike leptin receptors, MCRs are so-called G protein-coupled receptors, which traditionally are the easiest targets to block with small-molecule drugs. “If you had your choice, G-coupled [receptors] are the best possible target,” Campfield says. And drugs that bind and stimulate MCR-4 could be developed even before the details of the receptor's mechanism are fully worked out.

    There are likely to be plenty of other opportunities for biotech and basic science lurking in the tangle of pathways investigators have uncovered. “There may be multiple limbs, and the limbs themselves are likely to branch and interconnect,” says leptin discoverer Friedman. Since his discovery set it all in motion, he says, “it's been a great deal of fun to see it happen.”

  2. Molecular Biology

    Leptin's Other Hormonal Links

    1. Trisha Gura

    During the 2 years since its discovery, the protein hormone leptin has paraded through headlines because of its possible role in keeping fat levels in check (see main text). But that may not be its main function, says endocrinologist Jeffrey Flier of Harvard Medical School: “Throughout evolution, starvation—not obesity—has always been a problem.” And recent work by Flier's team and others indicates that leptin, working through glands around the body, including the adrenals, the thyroid, and even the ovaries and testes, helps the body deal with the stresses of starvation.

    Starving animals commonly experience a host of endocrine changes. To help conserve energy, for example, they lose their sex drives as production of the sex hormones by the ovaries and testes plummets. At the same time, starving animals turn down production of the thyroid hormones that stimulate cells' metabolic activities and turn up the production of adrenal stress hormones, such as cortisol, which perform a number of functions, such as maintaining blood pressure, that help the body adapt to starvation.

    These changes, Flier and his colleagues found, may be triggered by a drop in leptin, which normally occurs when fat stores decline during starvation. By giving leptin to starving animals, the researchers were able to reverse the reproductive, metabolic, and stress responses. “The three classical endocrine changes that you see with starvation are apparently caused to a very large degree, if not entirely, by the fact that with starvation, leptin levels go down,” Flier concludes.

    In addition to helping explain the loss of sex drive in starving animals, the leptin work is also shedding light on normal sexual development. Flier's team and that of Farid Chehab at the University of California, San Francisco, have evidence that the hormone appears to signal, and perhaps orchestrate, the onset of puberty (Science, 29 November 1996, p. 1466; and 3 January, p. 88). Both teams found, for example, that giving the hormone to normal female mice could speed up the attainment of sexual maturity.

    The researchers haven't yet worked out exactly how low leptin levels bring about starvation-induced hormonal changes. Flier's work suggests that they may lead to increased activity of an important regulator of stress-hormone production. He has found that normal leptin levels damp down production of a brain peptide called corticotropin-releasing hormone that works through the pituitary gland to boost production of the adrenal steroids. Stephen Woods and Michael Schwartz at the University of Washington, Seattle, have contrary results, however, although both Flier and Schwartz agree that more time and research will probably reconcile the disparity. “Either one of us might be wrong, and I don't feel dogmatic on this,” Schwartz says.

    Even if these connections between leptin and other hormonal systems evolved to cope with starvation rather than obesity, they may help guide obesity therapies. For example, Woods's team has evidence from studies on rats that adrenal steroids blunt leptin's activity in starving animals. Because, for reasons not yet understood, “obese people often have high levels of steroids,” Woods says, this may help explain why they are resistant to their high blood leptin levels. It also suggests that drugs that block the steroids' effects might help in weight control.

    This tangle of effects on hormones and neuropeptides hints that leptin's true role may be less of an obesity meter and more of a starvation shield, says Flier. “What we are saying is that there is this huge biology of leptin that may not intrinsically have anything to do with obesity or the prevention of obesity,” says Flier. “Even apart from starving or overfeeding, leptin is an important way for there to be communication between the periphery and the brain.”

  3. Climate

    A New Driver for the Atlantic's Moods and Europe's Weather?

    1. Richard A. Kerr

    Last November, oceanographer Michael McCartney was enjoying a strangely peaceful voyage across placid waters off southern Greenland. Placid is not the usual word for these latitudes in winter; most of McCartney's previous winter voyages here had met with stormy seas for weeks on end. McCartney already knew part of the reason for this trip's fine weather: Only 1 year before, the atmosphere over the North Atlantic had jumped from the circulation pattern it had favored for 20 years to another mode of operation, which steers storms away from the seas off Greenland. This unheralded atmospheric seesawing—one swing of the so-called North Atlantic Oscillation, or NAO—wasn't entirely benign: While McCartney was savoring calm seas, Europeans were entering their second brutal winter in a row, and skaters in the Netherlands raced long-distance along canals frozen for the first time in a decade.

    What drives these decade-long mood swings in North Atlantic wind and weather? McCartney, of the Woods Hole Oceanographic Institution (WHOI) in Massachusetts, and some other oceanographers suspected it wasn't a random whim of the atmosphere but a complex interaction between the ocean and the atmosphere, something like the shorter oscillations in the tropical Pacific Ocean, dubbed El Niño. In higher latitudes, the existence of such an ocean-atmosphere link has never been clear. Now, new data from ocean voyages and new modeling efforts suggest just such a connection. Deep within the North Atlantic, McCartney and others are finding parts of a fluid clockwork that may pace the decades-long swings of the NAO of the past century.

    Current conditions.

    Temperature in the Labrador Sea Water (circles) has seesawed in time with the North Atlantic Oscillation index, now running low (blue) and bringing cold winters to Europe; a high index (red) means warmer winters.

    If researchers can understand what triggers the NAO switch, they may one day be able to predict it. And they may gain insights into how it may be modulating the warming of the Northern Hemisphere and perhaps even confusing the current search for signals of increasing greenhouse warming (see sidebar). Still, there's a long way to go before oceanographers' studies of the Atlantic catch up with their understanding of the tropical Pacific. McCartney's “suggestions are provocative,” says Hugo Bezdek of the Atlantic Oceanographic and Meteorological Laboratory in Miami, one of the oceanographers picking up signs of a link. “There's some evidence to support his ideas, but it will be some time before we can make an ocean-atmosphere connection in the midlatitudes that is robust.”

    Although the North Atlantic's part of the story is only just coming into view, the atmospheric part of the NAO has long been on display. Typically, low pressure and a counterclockwise wind circulation are centered over Iceland, in contrast to higher pressure and clockwise circulation near the Azores off Portugal. As air swirls around each pressure center, strong winds blow west to east across the latitudes in between. The NAO, as meteorologists see it, is a seesaw that modulates this pressure gradient. To start, a bit of extra air mass might pile up over the Azores. Because contrasts in pressure drive winds, this “high-index” extreme of the NAO drives stronger than normal westerly winds, especially during winter when the NAO is most clearly expressed.
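
    The link from pressure contrast to wind strength that the paragraph describes is the standard geostrophic balance of meteorology. The back-of-the-envelope numbers below (a 10 to 20 hPa Azores-Iceland contrast over roughly 2000 kilometers) are illustrative assumptions, not figures taken from the article.

```python
import math

# Back-of-the-envelope geostrophic wind estimate. Geostrophic balance:
# wind speed u = (1 / (rho * f)) * (pressure gradient), where f is the
# Coriolis parameter. The pressure and distance values are illustrative.

OMEGA = 7.292e-5   # Earth's rotation rate (rad/s)
RHO = 1.25         # near-surface air density (kg/m^3)

def geostrophic_wind(delta_p_hpa, distance_km, lat_deg):
    """Wind speed (m/s) driven by a pressure difference of delta_p_hpa
    measured across distance_km at latitude lat_deg."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))        # Coriolis parameter
    gradient = delta_p_hpa * 100.0 / (distance_km * 1000.0)  # Pa/m
    return gradient / (RHO * f)

# High-index NAO: ~20 hPa between the Azores high and the Icelandic low
strong = geostrophic_wind(20.0, 2000.0, 55.0)
# Low-index NAO: the contrast falls to ~10 hPa, halving the westerlies
weak = geostrophic_wind(10.0, 2000.0, 55.0)
print(f"high index: {strong:.1f} m/s, low index: {weak:.1f} m/s")
```

    The relation is linear: doubling the Azores-Iceland pressure contrast doubles the geostrophic westerlies, which is why a bit of extra air mass piling up over the Azores strengthens the winds that carry Gulf Stream heat toward Europe.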

    These westerly winds blow over warm Gulf Stream waters as they meander across the northern North Atlantic. The stronger the wind, the more of the Gulf Stream's heat is delivered to Eurasia, so the NAO's high-index years of 1980 to 1995 created unusually mild winters there. But when the NAO swings to the other extreme, as it did with a vengeance late in 1995, air pressure builds up over Iceland, filling in the low and weakening both the pressure gradient and the warming westerly winds. The effects of a record-low NAO are now on display: a bitter winter in Europe and unusual calm near Greenland.

    Although meteorologists could describe the shifts in the NAO over the past century and point to their effects, the origins of the oscillation have largely eluded them. The NAO seesaws on time scales from months to decades, but the atmosphere, with its relatively small mass and rapid response time, “can't ‘remember’ things year to year or decade to decade, only month to month,” says McCartney. So, oceanographers say that the source of long-term oscillations must be in the ocean. Ponderous flows and a huge capacity for storing heat require the ocean to react at a stately pace, providing long-term “memory” as the system operates in the same mode year after year.

    McCartney's proposed oceanic oscillator is a vast “pipeline” of warm water fed by the Gulf Stream: the so-called subpolar gyre, a massive, counterclockwise system of currents spanning the ocean from off Ireland to the Labrador Sea (see map). All along its route across the Atlantic, past Britain, and back westward by Iceland and the southern tip of Greenland, the pipeline continually loses its heat to the atmosphere and so moderates Europe's climate. By the time it nears completion of the loop, pipeline water is so cold and therefore dense that a single winter's chill in the Labrador Sea is enough to send it plunging beneath the surface, sliding southward out the end of the pipeline.

    All told, this trip takes about 20 years, says McCartney, just the kind of timing needed for the NAO's long-term swings. To work as an NAO pacemaker, however, the pipeline would need two other working components: a means of driving the atmosphere into NAO-like modes and a way for the timer to switch between modes.

    Now, new oceanographic data suggest just such a potential driver, in the form of unusually warm or cold masses of water running through the pipeline. In work published last year in the Journal of Geophysical Research, Donald Hansen of the University of Miami and Bezdek compiled sea surface temperatures of the wintertime North Atlantic since 1949. In the records, they were able to trace a huge patch of anomalously warm water that appeared in the early 1950s. They followed it from the Gulf Stream to Newfoundland and across the Atlantic; by the late 1960s, the warm water had made a circuit of the subpolar gyre, following the pipeline around to the Labrador Sea. In the late 1960s, a cold patch of surface water appeared in the subtropics and followed a similar path. And in work they discussed at December's meeting of the American Geophysical Union, McCartney and Ruth Curry of WHOI took a deeper look at these anomalous masses. They used temperatures measured at a depth of 400 meters by instruments lowered from ships to show that Hansen and Bezdek's warm and cold patches are not just wimpy surface skins but are thick enough to deliver a thermal punch to the atmosphere.

    The timing suggests that's just what they did, says McCartney. Throughout recent decades, the pipeline and the oscillation have stayed in step, although the relation might seem counterintuitive at first. As the pipeline ran warmer in the 1950s and '60s, the NAO grew progressively more negative, with weaker westerlies across the Atlantic. As the westerlies weakened, so did their influence on Europe, until more northerly winds out of the Arctic replaced them, and colder winter weather settled into Europe; when the pipeline ran cold in the next 2 decades, the NAO switched and became more and more positive. That's not so surprising, says McCartney; the crucial factor for the strength of the NAO—and for Europe's weather—is the strength and the direction of the westerlies, not the warmth of the ocean water.

    Just how the pipeline running hot or cold could have tipped the oscillation isn't clear, McCartney admits: “There's a burden on the modelers to confirm or deny within the physics of the system whether [a connection] exists.” That may take a while, he adds, given the difficulty modelers have had in realistically linking ocean and atmosphere.

    If the pipeline's temperature does determine the NAO's state, what might trigger the temperature switch in the ocean? A deep-ocean mechanism proposed in the Journal of Physical Oceanography late last year by Michael Spall of WHOI might offer an answer. After the chilled water exits the pipeline in the Labrador Sea, it hugs the deep edge of North America, where it must eventually pass beneath the shallower Gulf Stream. But that's no small task, notes McCartney. In his model of western North Atlantic currents, Spall found that this Deep Western Boundary Current (DWBC) can either be caught up by the Gulf Stream and swept offshore, strengthening the Gulf Stream in the process, or continue southward unimpeded.

    This duel of currents creates a valve at the beginning of the pipeline whose position—wide open or throttled down—depends in part on the thickness of the DWBC. And that raises the possibility of a 20-year feedback loop, says McCartney. A warm signal sent down the pipeline as the Gulf Stream strengthens could 20 years later make the DWBC a bit thinner and so weaken the Gulf Stream. That cold signal might then come back another 20 years later to complete the cycle.
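
    Abstractly, the mechanism McCartney sketches is a delayed negative feedback: today's anomaly, after a roughly 20-year transit, pushes the system the opposite way. The toy difference equation below (all constants invented) illustrates only the one property that matters here: a sign-flipping loop with a delay of D years oscillates with a period near 2D, or about 40 years for a 20-year pipeline.

```python
# Toy delayed-negative-feedback loop: the anomaly appended each year is
# the (slightly damped) negative of the anomaly DELAY years earlier,
# mimicking warm pulse -> thinner DWBC -> weaker Gulf Stream -> cold
# pulse. All constants are invented for illustration.

DELAY = 20  # years for a signal to transit the pipeline

def simulate(years=200, damping=0.95):
    anomaly = [0.0] * DELAY + [1.0]   # seed one warm pulse at year 20
    for t in range(DELAY + 1, years):
        anomaly.append(-damping * anomaly[t - DELAY])  # delayed sign flip
    return anomaly

series = simulate()
# Warm at year 20, cold at year 40, warm again at year 60: a ~40-year cycle
print(series[20], round(series[40], 2), round(series[60], 2))
```

    With damping below 1, this toy cycle slowly dies away; in McCartney's proposal, it is the valve where the DWBC meets the Gulf Stream that would keep regenerating the signal in the real ocean.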

    If all this is not just plausible, but also true, then by carefully charting conditions in the Gulf Stream, DWBC, and deep North Atlantic, researchers could perhaps one day predict the North Atlantic's moods—and forecast severe winters for Europe years in advance. But at this early stage, most of McCartney's colleagues are quite cautious. Meteorologist Timothy Palmer of the European Center for Medium-Range Weather Forecasts in Reading, England, notes that it isn't clear yet that the ocean's behavior drags the atmosphere along with it. The atmosphere here may still be far less predictable than it is during El Niño, he warns. “I want to believe” that the North Atlantic strongly affects the circulation of the atmosphere, says Bezdek, “but I don't think a concrete case has been made yet. There's evidence for [an ocean-atmosphere] connection, but it's not overwhelming.” To win the day, oceanographers will need many more trips to sea, regardless of whether the NAO is calming the waters.

  4. Climate

    A Case of Mistaken Identity?

    1. Richard A. Kerr

    Two years ago, researchers investigating the suspicious worldwide warming of recent decades concluded that they had glimpsed the culprit's fingerprint. A distinctive geographical pattern of warming pointed to increasing amounts of atmospheric greenhouse gases as the guilty party. As a result, in late 1995, the international scientific community declared that “the balance of evidence suggests that there is a discernible human influence on global climate” (Science, 8 December 1995, p. 1565). But is the fingerprint really unique to the greenhouse? Natural agents of climate change, such as the atmosphere's decades-long oscillations in circulation (see main text and Science, 28 October 1994, p. 544), may be masquerading as a strengthening greenhouse, according to James Hurrell of the National Center for Atmospheric Research in Boulder, Colorado, and others.

    If so, says Hurrell, the recent indictment may have been premature. “It's really important work,” says Benjamin Santer of Lawrence Livermore National Laboratory, whose work was pivotal in last year's indictment. “I think it's an issue the [greenhouse] detection community is going to have to get more involved with.”

    In the 1970s, a pattern of temperature changes began to emerge that recently was shown to match computer predictions of greenhouse warming. As predicted, in winter the land was warming faster than the oceans, which react slowly to temperature change, and the warming was concentrated at high latitudes, where the retreat of ice and snow would reduce the amount of solar energy reflected back to space. But natural-looking circulation changes began appearing at about the same time, notes Hurrell. Low-pressure centers over the Aleutian Islands and Iceland intensified as the atmospheric oscillations there switched to new modes. The resulting changes in circulation pumped more warm, moist air over northern North America and Eurasia while sending more cold Arctic air over the North Pacific and North Atlantic oceans.

    The land warms quickly, but the ocean cools slowly. Accordingly, the new circulation produced a net warming centered over the northern continents—a pattern very similar to the greenhouse fingerprint. “The changes in the Aleutian low and the [Icelandic low] explain a lot of the warming in the Northern Hemisphere of the last 2 decades,” says Hurrell. He presented his case in Geophysical Research Letters last year, and John Wallace and Yuan Zhang of the University of Washington had reached similar conclusions in an earlier study.

    These changes can't explain the whole greenhouse fingerprint, such as changes in temperature higher in the atmosphere or in the tropical Pacific. But they do present a “sort of chicken-and-egg problem,” says Santer. Are the shifts in circulation entirely natural, or is greenhouse warming driving the changes in the oscillations? Computer modeling may address the issue, but as Hurrell notes, the North Atlantic may soon give its own answer. The atmospheric circulation there swung to the opposite mode about a year ago. If that shift holds, the Northern Hemisphere will cool, he says; if global warming continues apace anyway, the greenhouse will clearly be the prime suspect. Time may tell who left the fingerprint.

  5. Medicine

    Researchers Seek New Weapon Against the Flu

    1. Robert F. Service

    There may be a magic bullet after all, at least when it comes to the flu. Scientists at a California biotech company announced last week that they have developed a new compound that may ward off the flu by halting the spread of influenza viruses between cells in the mucous membranes of the nose and lungs. A report outlining the drug's design and synthesis appears in the 29 January issue of the Journal of the American Chemical Society.

    So far, the drug, which is designed to be taken as a pill, has been tested only in animals. But if it proves to be safe and effective in human clinical trials set to begin this spring, it could have a huge impact: Each year, 20,000 to 40,000 Americans die from the flu, making it the deadliest infectious disease. And the illness costs as much as $12 billion a year in health care and lost productivity, according to the National Science and Technology Council. The new compound “is very exciting,” says Robert Sidwell, a virologist with Utah State University in Logan who has tested it in his lab. Unlike vaccines, which only protect against certain strains of the virus, “it's effective against all types of influenza we've thrown at it.”

    Drug of choice.

    Compound (pink) blocks action of key viral enzyme (blue) by selectively binding with its active site.

    Gilead Sciences

    Public health experts caution, however, that even if the new compound proves to knock out the virus in humans, it and a similar compound also under development shouldn't eclipse current vaccination programs. By preventing infection in the first place, vaccines limit the spread of seasonal epidemics. Also, if the drugs were used widely, drug-resistant strains of the virus could arise and become prevalent.

    Developed by scientists at Gilead Sciences in Foster City, California, in conjunction with researchers at The Australian National University in Canberra and the University of California, Berkeley, the new compound blocks the activity of a viral enzyme called neuraminidase. The enzyme plays a vital role in the spread of infection between cells in the nasal and respiratory tracts by enabling newly formed viral particles to bud from an infected cell. The enzyme's task is to snip a chemical leash—a bond between a cell-bound sugar molecule and a surface protein on the virus called hemagglutinin—that tethers each particle to the cell. By mimicking the sugar, neuraminidase's normal target, the drug binds to the enzyme and diverts it from its task.

    Animal and cell culture studies reported at a conference last fall show that the new compound can shut down this process in many different influenza strains from both of the main types, A and B. That success stems from the fact that neuraminidase's active region “is almost identical for all the strains of the virus,” says Gilead medicinal chemist Choung Kim. The drug, provisionally christened GS 4104, also quickly eliminated flu symptoms, such as fever and coughing, in ferrets, the best animal model of the illness.

    Gilead's drug is actually the second of its kind. The first neuraminidase blocker, which has a similar chemical structure and binds to the same active site, was developed by researchers in Australia and the United Kingdom in 1993 and is now in clinical trials run by the British pharmaceutical giant Glaxo Wellcome. Called zanamivir, it must be given as a nasal spray or nose drops, or via an inhaler, largely because it cannot be absorbed by the gut. In early trials, zanamivir prevented the onslaught of flu symptoms when administered before infection, and it shortened their duration when given later. The compound also seems to be well tolerated and to have no adverse side effects, says Charles Penn, who heads medical strategy for Glaxo Wellcome in Uxbridge, United Kingdom. Glaxo researchers hope to begin final-stage clinical trials later this year.

    Because patients often prefer the convenience of popping a pill, in 1994, Gilead scientists launched an effort to design a neuraminidase blocker that could be taken orally. To do so, they had to fashion a molecule with hydrophobic, or fatty, chemical groups, which would ease its passage through the gut wall. But they faced a challenge: The structure of the influenza neuraminidase enzyme, known since 1983, suggested that its active site prefers to bind “hydrophilic” chemical groups that have just the opposite character. Zanamivir's ability to bind the enzyme and inhibit it, in fact, seemed to depend largely on four hydrophilic groups.

    In their search for a way around this roadblock, the researchers stripped the hydrophilic groups from a zanamivir-like starting molecule and then began adding back fat-friendly groups. After several failures, the team hit pay dirt. Adding a hydrocarbon-based alkyl group and a nitrogen-containing amine yielded a new molecule, GS 4104, that bound just as tightly to neuraminidase's active site as zanamivir but was hydrophobic enough to be absorbed by the gut.

    GS 4104's binding ability “was completely unpredictable,” says Kim's Gilead colleague Norbert Bischofberger. As it turns out, when GS 4104 binds to neuraminidase, the enzyme's conformation is changed slightly—forcing one amino acid near the active site to rotate. This shift opens up a small cleft in the enzyme, causing it to bind the drug's hydrophobic alkyl side chain.

    Researchers believe that the tight binding of both GS 4104 and zanamivir to influenza neuraminidase should prevent the compounds from interfering with human enzymes and creating side effects. Cells in everything from the liver to the immune system depend on related versions of the enzyme, says Gilead molecular biologist Dirk Mendel, but lab tests with both neuraminidase blockers have shown that they bind to the influenza enzyme 1 million times more readily than to human versions. “I've looked at a lot of different antiviral drugs,” says Utah's Sidwell of the Gilead compound. “This is one of the most selective drugs I've seen.”

    If the new drugs pass the rest of their tests, they would be a welcome addition to doctors' antiflu arsenal, says Frederick Ruben, who is the medical director of infection control at the University of Pittsburgh Medical Center. Two prescription drugs, amantadine and rimantadine, can shorten the duration of symptoms of type A strains of the flu. But they don't work against type B strains, which account for about 30% of infections. And flu strains that are resistant to both drugs already have cropped up, says Ruben.

    He and others caution that strains resistant to neuraminidase inhibitors also could arise if the drugs came into wide use. “If they were available over the counter year in and year out, that would [foster] the resistant strains,” says Ruben. Gilead's Kim counters that the resistant strains are less likely to appear with the new antivirals because their target, neuraminidase's docking site, is virtually identical among different flu strains. To become resistant to the drugs, the virus would have to acquire genetic mutations that affect the docking site—changes that would be more likely to cripple the mutated strain than to give it a selective advantage. But Ruben points out that Glaxo researchers have already shown in the lab that flu viruses can develop resistance to their compound. “Resistance will be an issue, but nobody knows yet how important an issue it will be,” says Alan Kendal, an epidemiologist and flu expert at the Emory School of Public Health in Atlanta.

    Even if resistance doesn't develop, Kendal worries that the new drugs could end up boosting the overall number of flu sufferers by discouraging people from getting vaccinated prior to the flu season. Many of those infected might be able to limit the severity of their illness by taking antiviral compounds, but most would pass along the virus to other people before they experienced symptoms and took an antiviral. For that reason, he predicts, even if a magic bullet for the flu does make it to market, public health officials are likely to keep their focus on prevention.

  6. Plant Research

    First Nematode-Resistance Gene Found

    1. Anne Simon Moffat

    Nematodes are hearty eaters. Although barely visible to the human eye, these voracious threadworms annually destroy about $100 billion in crops worldwide. Some have a particular taste for sugar beets, while others also enjoy munching on a wide variety of other crops, including oilseed rape and cauliflower. And because classical plant breeding has so far failed to produce commercial crop varieties that resist the onslaught, protection relies largely on toxic chemical pesticides or crop rotation. But that could soon change.

    On page 832 of this issue, plant breeder Christian Jung of the Institute of Crop Science and Plant Breeding at the University of Kiel, in Germany, and his colleagues at the University of Aarhus in Denmark and the Center for Plant Breeding and Reproductive Research in Wageningen, the Netherlands, report that they have cloned a nematode-resistance gene that originated in a wild beet plant. In the past 2 to 3 years, researchers have identified more than a dozen genes that make plants resistant to pathogens, including bacteria, viruses, and fungi (Science, 23 September 1994, p. 1804). But the new gene, called Hs1pro-1, is the first one for resistance to an animal pest. The work “rounds out description of the range of plant genes that confer resistance to disease,” says Purdue University plant biologist Gregory Martin.

    Heavy eaters.

    The center rows of beets (top) show damage caused by nematodes such as those pictured (bottom).

    Photos by Ed Caswell-Chen

    The sequence of the protein encoded by the new gene provides a few clues to its function: Like the proteins made by other resistance genes, it may detect chemical signals made by the pest and then trigger an as-yet-unknown defensive reaction. But even before the resistance mechanism is known, says Jung, the gene might be engineered into major commercial crops, such as oilseed rape, to create resistant varieties.

    In hunting down the gene over the last 8 years, the team transformed adversity into opportunity. Plant breeders have long tried to capitalize on the natural nematode resistance of plants such as wild beets by breeding them with susceptible crops such as the sugar beet, but the hybrids only had partial resistance and were unsuitable for commercial use. Out of that failure, however, came a clue to the location of the resistance gene. Jung's team and the Netherlands team found that some of the hybrids carried the nematode-resistance gene from wild beet on a tiny chromosomal translocation formed by two chromosomes breaking and joining abnormally. This was “a very, very fortunate find,” says nematologist Valerie Williamson of the University of California, Davis, as it allowed researchers to focus their gene hunt on a small bit of the wild-beet genome that carried the breakpoint.

    That lucky break, together with a particular marker sequence that was always inherited with nematode resistance, ultimately helped Jung and his colleagues home in on a candidate gene. They only found the suspect DNA in resistant plants, for example, and the predicted sequence of the protein it encoded revealed some motifs, such as repeated sequences rich in the amino acid leucine, previously found in other resistance genes. The team eventually confirmed its finding by transferring the gene into susceptible beet roots in culture and showing that the transformed roots resisted damage by nematodes.

    The researchers found that nematodes trying to feed on the resistant roots were thwarted when their feeding structures degenerated. What causes that degeneration is unclear, but the products of other resistance genes have turned out to be receptors located in the plant cell membrane that set off a defensive reaction when tweaked by some product made by the pathogen. Similarly, the researchers suggest, the Hs1pro-1 protein may serve as a membrane receptor for compounds injected into root cells by the nematode's piercing mouthpart. They note that the protein's overall structure suggests it resides in the membrane, and that in other resistance proteins, leucine-rich repeats like the ones it carries are thought to mediate protein-protein interactions between host and pathogen.

    The goal now, besides getting a better understanding of the resistance mechanism, is to use the gene to create new lines of resistant sugar beets and other crops. Jung cautions that that may not be easy, mainly because it may be difficult to regenerate whole plants from the cells used for the gene transfers. “Sugar beet is a notoriously recalcitrant variety,” he notes. Also, there are various strains of nematodes, and a gene that offers resistance to one may not work against others. Still, Jung has organized a large team of plant biologists to work on the problem and expects to tame nematodes' appetite: “We hope to have disease-resistant plants in the lab by the end of the year.”

  7. Astronomy

    Dusty Roads of the Cosmos Lure New Explorers

    1. James Glanz

    TORONTO—Householders have an aversion to dust, and astronomers have never much liked it either. Says Asoka Mendis of the University of California, San Diego (UCSD): “For some reason, [astronomers have] neglected it” in favor of the universe's more glamorous denizens. “Maybe it's psychological—dust is a nuisance to normal people,” he jokes. Yet, like the stuff that earthlings battle with feather dusters, cosmic dust is ubiquitous. It is left over from star birth, it blows from aging stars in sooty winds, and it is scattered by supernovas. And over the last few years, a growing band of researchers has started to find inspiration in the clouds of dust drifting between the stars. Their findings are winning new respect for cosmic dust.

    Hot lines.

    In a dense star cluster, stellar gravity focuses radiation from x-ray and ultraviolet sources (red and green), creating high-intensity beams that could destroy dust.

    Wang and Turner

    What captivates these researchers is the interplay of dust and starlight. At an American Astronomical Society (AAS) meeting here last month, that was the theme of presentations on topics ranging from how dust seems to line up with the weak magnetic field of the Milky Way, creating a giant polarizer for starlight, to the way gravitational “magnifying glasses” in clusters of stars could be burning away the clusters' dust. The new work shows, says Mendis with a touch of modesty bordering on defensiveness, that “in many places where interesting things are happening, you find dust.”

    Dusty Polaroids. For 50 years, says Bruce Draine of Princeton University, astronomers have known that they must be seeing starlight through a ubiquitous veil of dust, because the polarization of the light—the direction in which the electric field of its waves points—isn't random. Instead, it is oriented along the pervasive magnetic field that parallels the plane of our galaxy. Because the magnetic field alone can't polarize the light, astronomers believe that it does so with the help of interstellar dust.

    Somehow, the dust “acts like a Polaroid filter,” says Draine, snuffing out starlight polarizations that point across the galactic magnetic field. Because irregularly shaped dust grains, like tiny antennas, preferentially absorb light waves whose polarizations point in the longest dimensions of the grains, the dust itself must be oriented across the field lines. But that picture presents another puzzle: How could the weak galactic field align grains made of things like silicates, hydrocarbons, and ices, which make poor compass needles?

    At the AAS meeting, Draine and his Princeton colleague Joseph Weingartner offered an answer. Astronomers already suspected that, to make an effective Polaroid filter, the dust would have to be spinning rapidly. Like a tiny gyroscope, each spinning grain would tend to maintain the same orientation, with its long dimension perpendicular to the spin axis. Spin could also produce electronic effects that might make the grains into better compass needles, more likely to align with the galactic field lines. But none of the mechanisms astronomers explored for spinning the grains—bombardment with gas atoms, for example, or the ejection of molecules from the dust's surface—seemed quite up to the job, says Draine. So he and Weingartner tried something else: a subtle torque generated by starlight.

    If the grains were exactly spherical, or if starlight streamed with equal intensity in all directions, they realized, the grains would feel no torque. But space-dust grains are thought to be jagged and irregular, like their earthly counterparts, and the patchy distribution of stars on the sky ensures that there will always be asymmetries in the light that strikes them. The effect of starlight on the grains, says Draine, then becomes “like wind blowing on a pinwheel.”

    He and Weingartner found that after 100,000 years or so—a tick of the clock in galactic terms—the starlight could spin the grains up to millions of rotations a second. That's fast enough for substantial numbers of electrons in the grains to siphon off some of this kinetic energy and align their “magnetic moments,” turning each grain into a weak bar magnet. By calculating how the tugs of the starlight and the magnetic field would combine—an effort that took “several [computer workstations] running flat-out for months,” says Draine—they found that the grains' spin axes end up either parallel or antiparallel to the field, on average, with their long axes whirling across the field lines.

    “We think we have found a mechanism that is potent enough” to create a polarization filter for starlight, says Draine. Wayne Roberge of Rensselaer Polytechnic Institute in Troy, New York, agrees. He points out that because very small grains would feel less torque in this picture, they shouldn't align as easily, so the shorter wavelengths most affected by these grains should be less polarized—and that's just what is observed. Roberge thinks that other alignment mechanisms, such as the breezes of background gas in some parts of the galaxy, could also be acting on the dust. Still, he says, “it's a plausible solution to a problem that's been around for half a century.”

    Clean clusters. That scenario has dust leaving its mark on starlight. But in the dense knots of old stars in our galaxy known as globular clusters, starlight may have the upper hand, Princeton astronomers Yun Wang and Edwin Turner propose. Astronomers have long wondered why globular clusters are nearly dust-free, as the spectra of their starlight imply, in spite of the thousands of stars there that should be churning out dust. It's like finding a coal town where the inhabitants wear spotless white shirts. In search of an answer, Wang and Turner turned to the destructive effects of radiation pumped out by the cluster's brightest members.

    A typical globular cluster, Wang explained during her presentation here, has five to 10 especially powerful sources of x-rays or ultraviolet light, each powered by matter streaming into a superdense neutron star or perhaps a black hole. On their own, the sources would have little effect on the dust. But Wang noted that, according to Einstein's theory of general relativity, the gravity of each of the million-odd ordinary stars in the cluster should bend the rays of strong radiation streaming in their direction. The result is a million huge magnifying glasses. “If I'm sitting behind [one such] lens,” says Wang, “then the flux I get can be a million times larger than if there were no gravitational lensing.”

    The very gradual bending of the light by a star's gravity leads to a long, slender focus rather than a pointlike one. Because the stars in globular clusters are packed so densely, these hot, narrow beams of radiation, each a few light-years long and 40 or 50 kilometers across, form a close-stitched network extending throughout the cluster. The network “zaps the dust” as it flows through, says Wang, breaking it down and vaporizing it fast enough to eliminate most of it. “If there was dust there, it would probably be destroyed,” says Turner. “The concept is interesting,” says A. G. W. Cameron of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, although he says that details of how radiation heats and destroys the dust still need to be pinned down.

    Rosy lenses? Dust may also be affecting our view of the distant universe, well beyond our galaxy, as Turner, Sangeeta Malhotra of Caltech, and James Rhoads of Kitt Peak National Observatory suggested in a separate presentation. On its way to Earth, light from quasars—brilliant, galaxylike objects at the far edge of the universe—sometimes passes through vast lenses created by the gravity of entire galaxies along the line of sight. The multiple quasar images that result are often surprisingly red, Turner and his colleagues found. They suggest that dust in the lensing galaxies is responsible, reddening the light just as dust in the atmosphere tints sunsets on Earth.

    If they are right, then the redness of the quasar spectra could serve as a probe of distant dust, which seems to be far more abundant in the lensing galaxies than in nearby massive galaxies. That might imply more frequent galaxy collisions in the past, which would keep the large lensing galaxies well stocked with dust. What's more, Turner and colleagues reason that if dust reddens some of the images, it may completely block others. That effect would hamper efforts to use the frequency of optical lenses to gauge how the “shape” of space has changed over time because of changes in the universe's expansion rate—a technique akin to estimating the size of a flock of birds at night by how often one of them is silhouetted against the moon. (Lensed radio sources would not suffer from this limitation, because they are unaffected by dust.)

    So far, says Masataka Fukugita of Kyoto University in Japan, the evidence that dust is altering our view of the quasar images is “quite suggestive” but not yet convincing, as it's hard to be sure what the quasars behind the lenses really look like. Settling the question could require some detailed sleuthing. But the devotees of dust are likely to welcome the task, says UCSD's Mendis: “What is a nuisance to some people is a wonderful area of study for others.”

  8. Mathematics

    Homage to an Itinerant Master

    1. Dana Mackenzie
    1. Dana Mackenzie is a former mathematics professor at Duke University and Kenyon College and is now a free-lance writer in Santa Cruz, California.

    SAN DIEGO—The mathematics community paid tribute here last month to one of its legendary figures. Unique in the world of modern mathematics, Paul Erdös, who died on 20 September at the age of 83, is considered the most prolific mathematician ever. Yet, Erdös, who was born in Hungary and came to the United States in the 1930s, had no permanent home and no formal job after 1954. Ronald Graham of AT&T Research, the moderator of the special session in Erdös's honor at the joint meetings here of the American Mathematical Society and the Mathematical Association of America, described his working style as “one lifelong, continuous lecture tour. He traveled from one mathematical center to another, passing along news, squeezing out all the mathematical juice he could, and moving on.”

    A love for beautiful problems.

    Mathematician Paul Erdös.

    George Csicsery

    Along the way, Erdös found time to author or co-author more than 1400 research papers. His willingness to share credit led to the development of a new concept in the sociology of mathematics, the “Erdös number.” A person who wrote an article with Erdös had Erdös number 1, one who wrote an article with someone who had written an article with Erdös had Erdös number 2, and so on. So numerous were Erdös's collaborators that the list includes more than 4500 people with Erdös number 2, and it is hard to find a mathematician with an Erdös number greater than 3 or 4—unless that mathematician has written no joint papers of any kind. (This unfortunate group has an Erdös number of infinity.)
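    A person's Erdös number is simply their shortest-path distance to Erdös in the coauthorship graph, so it can be computed by breadth-first search. A minimal sketch in Python; every name in the toy graph other than Erdös is invented purely for illustration:

```python
from collections import deque

def erdos_numbers(coauthors, source="Erdos"):
    """Breadth-first search assigning each author the length of the
    shortest coauthorship chain linking them to the source. Authors
    unreachable from the source are absent from the result (their
    Erdos number is infinite)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        person = queue.popleft()
        for coauthor in coauthors.get(person, ()):
            if coauthor not in dist:
                dist[coauthor] = dist[person] + 1
                queue.append(coauthor)
    return dist

# A toy coauthorship graph (all names besides Erdos are hypothetical).
graph = {
    "Erdos": ["A", "B"],
    "A": ["Erdos", "C"],
    "B": ["Erdos"],
    "C": ["A", "D"],
    "D": ["C"],
    "E": [],  # no joint papers at all: Erdos number infinity
}
print(erdos_numbers(graph))  # {'Erdos': 0, 'A': 1, 'B': 1, 'C': 2, 'D': 3}
```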

    The speakers at the Erdös session represented the broad spectrum of fields he worked in: number theory, set theory, combinatorics, graph theory, and geometry, among others. In number theory, he was fascinated by the large-scale properties of the number system. In 1949, he and Atle Selberg of the Institute for Advanced Study in Princeton, New Jersey, published an “elementary” proof that the probability that a randomly selected large number is prime is roughly equal to 1 divided by the natural logarithm of the number. Until then, mathematicians doubted that an elementary proof, one involving only the arithmetic of the integers, was possible.
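    The density statement in the Erdös-Selberg result is easy to check empirically. A rough sketch in Python, comparing the fraction of primes in a window just above one million with 1/ln(10^6); the window size here is an arbitrary choice:

```python
from math import log

def is_prime(n):
    """Trial division; adequate for numbers of this size."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

# Observed density of primes near 10**6 versus the 1/ln(n) prediction.
n, width = 10**6, 10**4
observed = sum(is_prime(k) for k in range(n, n + width)) / width
predicted = 1 / log(n)
print(observed, predicted)  # both are roughly 0.07
```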

    Another characteristic and original theorem of Erdös's, published in 1940 with the late Mark Kac, says that a graph of the number of prime factors of very large numbers forms a bell curve, as if the numbers were choosing their factors at random. “God may not play dice with the universe,” Erdös is said to have remarked, alluding to Einstein's famous quotation, “but something strange is going on with the prime numbers.”
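    The Erdös-Kac bell curve can likewise be seen numerically: for numbers near N, the count of distinct prime factors is approximately normally distributed with mean and variance near ln ln N. A small Python sketch (the sample range is an arbitrary choice, and the convergence is famously slow, so the sample mean sits a small constant above ln ln N):

```python
from math import log
from collections import Counter

def omega(n):
    """Number of distinct prime factors of n, by trial division."""
    count, f = 0, 2
    while f * f <= n:
        if n % f == 0:
            count += 1
            while n % f == 0:
                n //= f
        f += 1
    return count + (1 if n > 1 else 0)

# Histogram of omega(n) for n near 10**6: a rough bell curve whose
# mean lies within a small constant of ln(ln(10**6)), about 2.6.
values = [omega(n) for n in range(10**6, 10**6 + 10000)]
mean = sum(values) / len(values)
print(sorted(Counter(values).items()))
print(mean, log(log(10**6)))
```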

    All six speakers described the love of specific problems that drove this work. “He succumbed to the seduction of every beautiful problem he encountered,” said Joel Spencer of the Courant Institute at New York University. Instead of building sweeping theories, Erdös's gift was to pose problems that pointed the way to productive new areas of research. His result on the number of factors of large numbers, for example, launched the field of probabilistic number theory.

    Carl Pomerance of the University of Georgia delighted the audience with the story of how an irresistible problem led to his first meeting with Erdös. In 1974, when the great baseball player Hank Aaron broke Babe Ruth's career home-run record, Pomerance noticed that the prime factors of both 714 (the number of home runs Ruth hit) and 715 (Aaron's new record) added to 29. Pomerance wrote a short note for the Journal of Recreational Mathematics about what he called “Ruth-Aaron numbers,” pairs of consecutive numbers whose prime factors have the same sum. He speculated that such pairs become less and less frequent as their size increases. He added, “I had no idea how to prove such a result.”
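    Pomerance's observation is easy to verify. A quick Python check; since 714 and 715 are both squarefree, summing each distinct prime factor once is all that's needed:

```python
def prime_factor_sum(n):
    """Sum of the distinct prime factors of n."""
    total, f = 0, 2
    while f * f <= n:
        if n % f == 0:
            total += f
            while n % f == 0:
                n //= f
        f += 1
    return total + (n if n > 1 else 0)

# 714 = 2*3*7*17 and 715 = 5*11*13: both sums come to 29.
print(prime_factor_sum(714), prime_factor_sum(715))  # 29 29
```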

    But within a week after his paper appeared, the young assistant professor received a phone call from the master. Erdös said he would show Pomerance how to prove his conjecture—provided that Pomerance invited him to Georgia. The meeting resulted in a joint paper—the first of over 40 papers Pomerance and Erdös would write together.

    Years later, Pomerance said, the University of Georgia awarded honorary degrees to both Erdös and Aaron. He asked both recipients to autograph a baseball for him. “And thus,” Pomerance concluded, “Henry Aaron also has Erdös number 1.”

  9. Biotechnology

    How Many Genes Are There?

    1. Jon Cohen

    When J. Craig Venter, head of The Institute for Genomic Research (TIGR) in Rockville, Maryland, co-authored a July 1994 paper in Nature Genetics postulating that there are 60,000 to 70,000 genes in the human genome, he says he received an irate phone call from one of his biggest supporters. “What the hell do you think you're doing, saying there are only 60,000 genes?” the caller yelled.

    The caller was the late entrepreneur Wallace Steinberg, who played an instrumental role in forming the nonprofit TIGR and its main corporate backer, Human Genome Sciences (HGS), also in Rockville. According to Venter, Steinberg hollered, “I just sold 100,000 genes to SmithKline Beecham!”—a reference to the landmark deal he cut that gave the large pharmaceutical company access to HGS's database of gene fragments, known as ESTs (see main text). Steinberg “was seriously concerned,” says Venter. “He was always trying to raise the number of genes, because he saw them as a commodity.”

    In addition to annoying his sponsor, Venter's lowball estimate helped stir up a debate among his colleagues. More than a century after Austrian monk Gregor Mendel first detailed how genetic material was passed from one generation to the next, researchers still don't know how many genes we're all walking around with. The essential problem is that our genes are hidden in a haystack of apparently meaningless genetic information. Only about 3% of the 3 billion individual units known as “bases” that make up DNA actually code for proteins, and a protein-coding sequence is the simplest definition of a gene. Until the international Human Genome Project is completed in 2003, scientists will not know for sure the number of genes in human chromosomes.

    But that doesn't lessen the certainty of the many scientists today who say they already know the answer. At the low end is Sydney Brenner, who launched the field of nematode genetics. Brenner is unequivocal: “We know almost exactly: 60,000 genes,” he says. “There isn't much room for much more. In fact, it may be less than that.” Brenner, who heads the Molecular Sciences Institute in La Jolla, California, has been studying the genome of the fugu, or Japanese puffer fish. It has 360 million bases, and his lab finds a gene every 6000 bases, yielding 60,000 total genes. Brenner assumes that this number is true across species. “If our genes turn out to be bigger, there will just turn out to be fewer of them,” he insists.
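    Brenner's figure is just the fugu genome size divided by the gene spacing his lab observes, extrapolated to other species. As a sanity check on the arithmetic:

```python
genome_bases = 360_000_000  # fugu genome size cited by Brenner
bases_per_gene = 6_000      # "a gene every 6000 bases"
print(genome_bases // bases_per_gene)  # 60000
```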

    Fishy estimate.

    Sydney Brenner contends that humans and puffer fish have the same number of genes—about 60,000.


    Venter's estimate is based on an analysis of the ESTs that have been sequenced. But one problem with calculations based on gene fragments is that several ESTs may come from the same gene, a redundancy that would not be detected unless their sequences overlapped—a shortcoming Venter tried to correct for in his Nature Genetics paper by estimating various redundancy frequencies.
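    The redundancy problem is easy to see with a toy example. The EST and gene labels below are invented purely for illustration; this is not Venter's actual correction method:

```python
# Each EST is a gene fragment; several fragments can come from one gene,
# so a raw EST count overstates the number of genes unless redundancy is
# detected and removed.
est_to_gene = {
    "est1": "gene_A", "est2": "gene_A",                    # same gene, twice
    "est3": "gene_B",
    "est4": "gene_C", "est5": "gene_C", "est6": "gene_C",  # same gene, thrice
}

raw_count = len(est_to_gene)                   # 6 fragments
unique_genes = len(set(est_to_gene.values()))  # 3 genes
print(raw_count, unique_genes)  # 6 3
```

    In practice, of course, the gene of origin is unknown, which is why redundancy can only be detected when the fragments' sequences overlap.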

    C. Thomas Caskey, head of the genome project at Merck & Co., guesses 80,000 to 100,000. He's working with an EST database put together by Robert Waterston and colleagues at Washington University in St. Louis. Caskey says they have identified 42,000 to 43,000 unique genes and are still counting. “The curve certainly hasn't flattened,” he says. “If it was 60,000, we'd see more of a flattening of the curve.”

    The co-founder of the genomics company Darwin Molecular in Seattle, Leroy Hood of the University of Washington, ups the ante a bit, but with a dose of humility. “I suspect it will be a lot more than 100,000 in humans. But any number anyone gives you is just a wild guess,” he says. Joining Hood on the high end, but without the caveat, is William Haseltine, head of HGS, who says the company's EST database already has identified 90% of human genes. “There are 120,000 to 150,000 genes,” Haseltine declares, adding that only HGS and its main competitor, Incyte Pharmaceuticals, would really know.

    But Randy Scott, scientific director of Incyte—which has a massive EST database similar to HGS's—doesn't claim to know. “We're pretty comfortable that we're in the range of 80,000 to 100,000, but we're still discovering,” says Scott. “I think people are fairly naïve about the complexity of the data, and that's why you get these rip-roaring debates.”

    Brenner takes a humanistic view of the debate. “People like to have a lot of genes,” says Brenner. “It makes them feel more comfortable. To be only eight times more complicated than E[scherichia] coli is an insult.” And having more genes in the database makes investors feel more comfortable, too.

  10. Biotechnology: Hype Surrounds Genomics Inc.

    1. Jon Cohen

    Boosters of the burgeoning genomics industry often give the impression that discovering a new disease gene is tantamount to discovering a new treatment. But history shows that identifying the enemy isn't the same as stopping it. For instance, researchers discovered the mutated gene that causes sickle cell anemia 40 years ago, but there still is no cure—and that's not surprising, says Robert Weinberg, a specialist in cancer-causing genes at the Massachusetts Institute of Technology. “For a number of genetic diseases, knowing the genes might not help the patient one whit,” he says.

    Weinberg is not alone in expressing concern about hype in genomics. Other academics, researchers at older biotech companies and big pharmaceuticals, and even some stock analysts and scientists who have started genomics companies worry that the promise of genomics is being oversold. “Hype and speculation in some cases have gone to absurd levels,” says J. Craig Venter, head of The Institute for Genomic Research, a Rockville, Maryland, nonprofit.

    The main concern is not that genomics is a sham, but that it could take a long time to produce substantial financial returns. “We're talking about understanding the computer of life—the most difficult thing mankind has ever understood on this planet,” says Thierry Soursac, the general manager of RPR Gencell in Santa Clara, California—a division of the drug company Rhône-Poulenc Rorer that does some genomics work, but primarily focuses on cell and gene therapy. Soursac says he has high hopes for genomics but low regard for the way Wall Street investors talk about its potential: “If it takes 20 years, it will be an extraordinary accomplishment. But for Wall Street, if it takes 2 years, it will be too long.” Michael Steinmetz, vice president of preclinical R&D at the pharmaceutical company Hoffmann-La Roche, agrees. “The timeline has been completely underestimated,” he says.

    If Soursac and Steinmetz are correct, it wouldn't be the first time biotechnology has gone down this road. Earlier waves of biotech companies were heavily criticized for overselling technologies like monoclonal antibodies and gene therapy. “Biotech doesn't exactly have a clean record in this regard,” acknowledges David Galas, head of Seattle's Darwin Molecular, who says the process of starting up a biotech makes it difficult to avoid hype. “Once you have an idea for a company, you want to sell it to investors. It's just sell, sell, sell all the way along the line.”

    Indeed, in part because of the hype, some older biotechs, such as Chiron of Emeryville, California—which is part of a “functional genomics initiative” with Genetics Institute and Genentech—have pointedly resisted transforming themselves into genomics companies. “Obviously, genomics information will provide, over some time frame, tremendous insight into humans and other organisms,” says William Rutter, head of Chiron. “But in the time frame of our lives, I'm not at all sure there will be a lot of direct impact.”

    The brass at Genetics Institute, a Cambridge, Massachusetts, company that makes products based on proteins secreted from cells, also blanches at the idea of being labeled a genomics company. Steven Clark, the company's vice president of research, is confident that genomics will eventually lead to new treatments. But “is this really going to revolutionize drug discovery?” asks Clark. “I don't know.”

    Daniel Kisner, president of Isis Pharmaceuticals, a start-up biotech in Carlsbad, California, asserts that many of the current crop of genomics biotechs will not survive. “[The industry is] highly competitive and already overbuilt,” says Kisner, who previously was an executive at Abbott Laboratories and, before that, a top official at the U.S. National Cancer Institute. “I don't know what these people are going to be doing in 5 years unless they begin to use their chemistry programs to make more clear drug discoveries.”

    The Massachusetts Institute of Technology's Eric Lander, who co-founded the genomics company Millennium Pharmaceuticals in Cambridge, Massachusetts, concedes that genomics is hyped, but questions whether it's overvalued. If you compare the market valuation—the number of shares of stock multiplied by the selling price—of the public genomics companies with that of the big pharmaceuticals, genomics accounts for only about 0.5% of the total, he says. “Do we believe genomics has increased the value of pharmaceutical development by 0.5%?” asks Lander. “I'm willing to say yes. Maybe it's undervalued.” William Haseltine, head of the genomics company with the highest valuation, Human Genome Sciences, has this to say to the skeptics: “They won't be skeptical very long.” The clock's ticking.
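    Lander's comparison is simple arithmetic. A sketch with hypothetical figures (none of the numbers below come from the article):

```python
# Market valuation = shares outstanding x share price. Lander's point:
# summed over the public genomics companies, the result is only ~0.5% of
# a total that includes the big pharmaceuticals. All figures hypothetical.
def market_cap(shares: float, price: float) -> float:
    return shares * price

genomics_total = market_cap(20e6, 10.0) + market_cap(25e6, 8.0)
pharma_total = market_cap(1.2e9, 40.0) + market_cap(0.9e9, 36.0)

share_of_total = genomics_total / (genomics_total + pharma_total)
print(f"{share_of_total:.1%}")  # 0.5%
```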

  11. Key Player--Kevin Kinsella: The Industry's Top Showman …

    1. Jon Cohen

    There may be a genetic explanation for why businessman Kevin Kinsella has earned a reputation as the “P. T. Barnum of biotech”: His father was a Broadway actor, and his mother was a model with a top New York agency.

    Kinsella, the founder of the La Jolla, California, genomics start-up Sequana Therapeutics and a raft of other biotechs, is one of the foremost popularizers of the genomics revolution. While many scientists seem compelled to show off the results of their hard labor with the most cluttered, confusing, and downright ugly slides imaginable, the 51-year-old Kinsella gleefully exploits the pop and sizzle of high-tech to make sure his take-home message is taken home. Consider his latest show for Sequana: With the help of a Power Macintosh computer hooked to speakers and a projector, he presents “From Gene to Screen,” which includes excerpts from the movie The Nutty Professor and a photo of an island that morphs into a graph. During the presentation, he refers to the similarity between man and mouse as “the incredible likeness of beings” and the progress being made in genomics as leading to what he calls “the incredible shrinking genetic universe.”

    Whether Kinsella himself strains credibility depends, of course, on whom you ask. Tim Harris, Sequana's chief scientist and formerly director of biotechnology at the drug company Glaxo Wellcome, enjoys working with Kinsella. “He brings a lot of entrepreneurial flair to the operation,” says Harris. “He said he'd raise the money if I put the science together. It's obviously more complicated, but it's a good paradigm: He doesn't interfere with what I do, and I don't interfere with what he does.” One industry analyst who asks not to be quoted by name is less generous. “Sequana got put together by a marketing guy,” says this critic, who calls Kinsella a “superhyper” and says his salesmanship has its limits. This analyst asserts, for example, that Sequana was too eager to make deals with big drug companies and, in some cases, sold itself short.

    Kinsella acknowledges that Sequana could have been more aggressive in some of the early collaborative deals it made, but the company—in part because of what The New York Times last June referred to as Kinsella's “great showmanship”—has fared much better both with pharmaceutical companies and on Wall Street than have several similar start-ups.

    After earning an undergraduate degree in management from the Massachusetts Institute of Technology in 1967 and a master's from Johns Hopkins University in international relations, Kinsella raised millions as a fund-raiser for Tufts University and then MIT, which introduced him to the world of venture capitalism. In 1978, he moved from Boston to San Diego with, he says, “$250 to my name” and the dream of raising money to launch new high-tech companies. Kinsella finally became a venture capitalist in 1983, starting Avalon Ventures with a $400,000 loan from an old MIT friend. “In the space of 4 years, I had turned that into $4 million,” boasts Kinsella in a two-page autobiography.

    Avalon has started 16 biotechs, including Vertex Pharmaceuticals, Neurocrine Biosciences, and Pharmacopeia. When Avalon formed Sequana in 1993, Kinsella decided to run it himself, in part because he was tired of starting one company after another from scratch. Kinsella also was awed by the power of genomics. “Genomics,” he declares with the heartfelt conviction of an impresario, “really is the foundation of all modern biology.”

  12. Key Player--Daniel Cohen: … And a Recent Recruit

    1. Michael Balter

    Genome researcher Daniel Cohen has lived his life straddling different worlds. He is an accomplished pianist who has made his mark studying genes. He was born in Tunisia, but has spent the last 37 years in France. He built his career in the public sector, but has been criticized for trying to cut deals with industry. But last year, Cohen cast his lot with the private sector, joining the Paris-based genomics company Genset as head of genome research. “It was a problem to be on both sides,” says Cohen. “You want at the same time to get the Nobel Prize and to make money. In the private sector, I'm much more relaxed and it's easier to do good science.”

    Cohen has been doing cutting-edge science since the early 1980s when, after completing his medical studies, he met the French physician and Nobel laureate Jean Dausset, and the two co-founded the Center for the Study of Human Polymorphism (CEPH) in Paris. As head of CEPH and, later, scientific director of the associated gene research lab Généthon, Cohen pioneered the use of automated techniques for mapping the location of genes on chromosomes at a time when many researchers were skeptical that this high-speed, “industrial” approach constituted real science.

    The result of their labors—the first rough physical map of the human genome in 1993—sealed Cohen's reputation as a world player in this fast-breaking field. “Cohen is not a classic scientist,” says a genome researcher who knows him well but asked not to be identified. “But most people would give him a lot of credit for the revolution in genome work.”

    Last year, Cohen again made waves when he quit CEPH, which is supported largely by the French government, for Genset, an internationally prominent French biotech company that specializes in complex genetic diseases. Cohen says he decided to leave after budget cuts at CEPH: “I would have died scientifically if I had stayed.” Indeed, his decision came after a number of unsuccessful attempts to keep the lab competitive through controversial collaborations with private industry (Science, 18 March 1994, p. 1552).

    Cohen took most of CEPH's gene-mapping team with him to Genset, a move that involved delicate financial negotiations and considerable friction with his old mentor, Dausset. At Genset, Cohen says, he “will be able to really compete on a global level.” The company has already raised some $100 million for genetic research through public offerings, and, with Cohen at the genetics research helm, it expects to raise more in the coming months.

    Although Cohen finally has jumped with both feet into the private sector, his life still has its underlying tensions. No matter what he accomplishes scientifically, his real love remains the piano, he says. “In genetics there is no mystery, but music is all mystery.” Still, he may be finding a way to resolve this conundrum too. Says Cohen, “I am the best geneticist of all musicians and the best musician of all geneticists.”

  13. Biotechnology: Genomics's Wheelers and Dealers



    Affymetrix

    Genset

    Genome Therapeutics Corp.

    Human Genome Sciences

    Incyte Pharmaceuticals

    Millennium Pharmaceuticals

    Myriad Genetics

    Sequana Therapeutics

    Unless otherwise noted, all stock and compensation information comes from company prospectuses filed for initial public offerings (IPOs), and the value of all collaborative deals listed is the total potential revenue, excluding milestone and royalty payments if products are developed. Note that the quoted stock prices likely have changed significantly as the market fluctuates daily, and that stock holdings may include options that have yet to become vested.


    Affymetrix

    Santa Clara, California

    Key feature: Designing DNA probe arrays using GeneChip system

    Key technologies: Semiconductor fabrication techniques that put DNA sequences on glass “chips”

    “Our chips allow people much easier access to genetic information. There's a lot we can do by looking at patterns.”

    Stephen Fodor, President




    Genset

    Paris, France

    Key feature: Identifying genes and regulatory regions of genes

    Key technologies: Large-scale, high-throughput sequencing; technique for isolating “5 prime” sequences at beginning of genes (including regulatory regions); “functional polymorphism scanning” for analyzing gene variations in a population; bioinformatics

    “We discover pathways with minimal benchwork.”

    Daniel Cohen, Chief Genomics Officer



    Genome Therapeutics Corp.

    Waltham, Massachusetts

    Key feature: Sequences pathogen genomes

    Key technologies: Proprietary high-throughput “multiplex” DNA sequencing; positional cloning; bioinformatics

    “In bacteria, the gap between gene discovery and drug discovery is so much shorter than in humans.”

    Bernd Seizinger, Chief Scientific Officer


    Human Genome Sciences

    Rockville, Maryland

    Key feature: Sells subscriptions to the Human Gene Anatomy Project, putatively the world's largest collection of cDNAs from normal and abnormal human tissues

    Key technologies: Large-scale sequencing; bioinformatics

    “Our database is a tool of invaluable power for asking medical questions.”

    William Haseltine, CEO


    Incyte Pharmaceuticals

    Palo Alto, California

    Key feature: Sells subscriptions to Internet-accessible LIFESEQ database of cDNA sequences and expression patterns

    Key technologies: Large-scale sequencing; expression analysis; bioinformatics

    “In biotech before, people were staking out claims and trying to mine. [Now we're] making a business out of selling tools to miners.”

    Roy Whitfield, CEO


    Millennium Pharmaceuticals

    Cambridge, Massachusetts

    Key feature: Identifying disease genes with synergistic approaches

    Key technologies: Large-scale DNA sequencing; rapid analysis of differential gene expression; positional cloning; high-throughput expression cloning; bioassays for high-throughput drug screens; bioinformatics; comparative genetics

    “Genomics is a whole set of technologies, and it's going to dramatically change the pharmaceutical industry.”

    Mark Levin, CEO


    Myriad Genetics

    Salt Lake City

    Key feature: Developing predisposition tests using more than 25,000 clinical samples from large, multigenerational Utah families

    Key technologies: High-speed gene sequencing; positional cloning; bioinformatics

    “There's a significant opportunity in genetics predisposition testing rather than in developing therapeutics on our own, which is a much riskier proposition.”

    Peter Meldrum, CEO


    Sequana Therapeutics

    La Jolla, California

    Key feature: Has access to more than 30,000 DNA samples from individuals and families affected by specific diseases

    Key technologies: Positional cloning; high-throughput DNA sequencers and analyzers; high-throughput cell-based drug screens; bioinformatics; comparative genetics

    “All biotechnology companies, by dint of the power of genomics, will have to tailor their research programs toward genomics.”

    Kevin Kinsella, CEO

  14. Medicine: Developing Prescriptions With a Personal Touch

    1. Jon Cohen

    Richard Sykes, who heads one of the world's largest pharmaceutical companies, Glaxo Wellcome, made a bold assertion last fall about the future of clinical testing for new drugs. “It's going to change very, very dramatically. … The way studies are carried out is going to be driven by genetic technology,” said Sykes at an international summit on health research funding (Science, 25 October 1996, p. 491). And if that prediction seems daring, consider this: Many researchers now believe the way drugs are prescribed in the future is going to be driven by genetic technology, too.

    The impetus for these extraordinary shifts stems from a dusty, old field called “pharmacogenetics,” which attempts to understand the genetic roots of diseases so as to unravel why drugs affect different people differently. Now the boom in genomics is elevating the status of pharmacogenetics from novelty to necessity. “The field has been sort of slow for a number of years, but now, suddenly, people are seeing this [connection between genes and drugs],” says Arno Motulsky, a medical geneticist at the University of Washington, Seattle, who first published on the topic in 1957. Jay Lichter, a geneticist at Sequana Therapeutics, a genomics biotech in La Jolla, California, agrees. “Five years ago, when I was doing this at DuPont Merck, they didn't get it,” says Lichter. “I was the guy with cockamamie ideas.”

    Grouping people according to their genetic makeup, or genotype, in clinical trials could make it easier to prove that a drug works, say Lichter and others. Rather than selecting patients at random, as is typically done, researchers theoretically could use genetic differences, such as a mutation in a gene linked to an enzyme that helps metabolize drugs, to select people who are most likely to respond. Genotyping can also help elucidate why some drugs make some people sicker, and this “ability will allow us at one point to sell a drug even though it's toxic to a subpopulation,” says Roy Whitfield, chief executive officer of Incyte Pharmaceuticals in Palo Alto, California, which sells drug companies subscriptions to its genetic database. Companies additionally might use genotyping to try to rescue “dead drugs” that have failed clinical trials because of efficacy or toxicity problems.
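    A toy sketch of that stratification idea follows; the genotype labels and response data are hypothetical, purely to show the bookkeeping:

```python
# Group trial subjects by a hypothetical metabolizing-enzyme genotype and
# compare response rates within each group, rather than pooling everyone.
from collections import defaultdict

subjects = [
    ("normal metabolizer", True), ("normal metabolizer", True),
    ("normal metabolizer", False),
    ("poor metabolizer", False), ("poor metabolizer", False),
    ("poor metabolizer", True),
]

outcomes = defaultdict(list)
for genotype, responded in subjects:
    outcomes[genotype].append(responded)

for genotype, results in sorted(outcomes.items()):
    rate = sum(results) / len(results)
    print(f"{genotype}: {rate:.0%} responded")
```

    A pooled analysis of these subjects would report a 50% response rate and might sink the drug; stratified, one group responds twice as often as the other.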

    Some researchers caution that, at this point, the impact of genotyping patients on drug testing and prescribing is still speculative. “There's a lot of hyperbole by certain individuals in the field,” says U.S. National Cancer Institute chemist Frank Gonzalez, editor of the journal Pharmacogenetics. “That's how they get their fees,” says Gonzalez, who has done pathbreaking work on how genetic differences in the liver enzyme cytochrome P-450 (CYP) can alter the way individuals metabolize drugs. Pharmacogenetics also raises thorny marketing, safety, and regulatory issues.


    Still, intriguing efforts already are under way. At Georgetown University in Washington, D.C., Raymond Woosley, chair of the pharmacology department, says he and his colleagues have been working with Affymetrix of Santa Clara, California, to develop a rapid screening procedure for CYP mutant genes—some of which occur in up to 7% of the population. And Woosley's staff now routinely screens patients in clinical drug trials for these mutations. “We find it very helpful in analyzing [unusual] responses,” says Woosley. In the future, says Affymetrix President Stephen Fodor, “people are going to get diagnosed and treated at the same time.”

    Myriad Genetics, a genomics company in Salt Lake City, is developing a diagnostic that may let physicians better customize how they treat patients for hypertension, which can be caused by many factors, including high salt intake. “Right now, it's a hit-or-miss situation to get you on the appropriate therapy,” says Myriad CEO Peter Meldrum. “Frequently, physicians put all patients on a low-salt diet, which won't lower blood pressure in many.” Myriad has developed a test for mutants of the angiotensinogen (AGT) gene, which codes for a protein that regulates salt retention. The test now is being evaluated in a large clinical trial at the U.S. National Institutes of Health. If patients with AGT mutants are most helped by a low-salt diet, Myriad hopes to bring the AGT test to market this year.

    At the University of Toronto, psychiatric geneticist James Kennedy is aiming at an entirely different target: mutant receptors for brain chemicals. For the past year, Kennedy has been studying 180 patients with schizophrenia to see whether individuals who have a mutant gene for a dopamine receptor are less likely to be helped by the antipsychotic drug clozapine.

    While these lines of research offer patients and physicians the possibility of precisely tailored treatments, they could be a mixed blessing for pharmaceutical companies. One fear is that pharmacogenetics will shrink the market for a particular drug by limiting who can take it. “They're very worried about labeling on a bottle that says, ‘This [drug] only for person with genotype 87,’” says Eric Lander, co-founder of Millennium Pharmaceuticals in Cambridge, Massachusetts, and head of a genome center at the Massachusetts Institute of Technology. Geneticist J. Craig Venter, who runs The Institute for Genomic Research in Rockville, Maryland, points out that such targeting could cause harm, too. Venter is concerned about marketing drugs that are toxic to a specific genotype. “If you don't [genotype] those people before giving them the drug, you'll kill them,” says Venter. “That's a real downside.” Drug companies also worry that drug development might be slowed if the agencies charged with regulating new drugs begin requiring pharmacogenetic data.

    Despite such concerns, several large drug companies are ramping up their pharmacogenetics programs. C. Thomas Caskey says Merck & Co., where he leads the genomics program, is genotyping patients in clinical trials—and animals in preclinical tests—but he declines to give details. George Poste, head of R&D at SmithKline Beecham, is also reluctant to provide specifics but confirms that the company is doing the same. Lee Babiss, who helps run Glaxo Wellcome's genomics program, is more forthcoming. Babiss says Glaxo now is testing patients for mutants of the apolipoprotein E gene, which is implicated in Alzheimer's, in a study of a drug to combat that disease. “Ultimately, I believe our drugs and our competitors' are going to be administered with a diagnostic in mind,” says Babiss. “There's a real threat to any pharmaceutical that doesn't [incorporate pharmacogenetics]. Your market could be taken away from you.”

  15. Ethics in Science: Is Data-Hoarding Slowing the Assault on Pathogens?

    1. Eliot Marshall

    In May 1995, a new and virulent strain of tuberculosis appeared in rural Byrdstown, Tennessee, near the Kentucky border. It infected a 21-year-old worker at the Oshkosh B'Gosh clothing factory there and, in short order, every member of his family. By the time health experts had moved in to test friends and neighbors, TB had infected 75% of the man's co-workers and 80% of the people he had met in routine social encounters—more than 220 people in all. Fortunately, the organism—now known informally as the Oshkosh strain—was susceptible to antibiotics, so with standard therapy, the patients all recovered.

    The Byrdstown outbreak didn't make the local news or even the federal government's Morbidity and Mortality Weekly Report. But it may make history: The Oshkosh bug may be the first virulent microbe to have its genome sequenced and made available over the Internet. Robert Fleischmann of The Institute for Genomic Research (TIGR), a nonprofit genetics research group in Rockville, Maryland, has a $3.2 million grant from the U.S. National Institute of Allergy and Infectious Diseases (NIAID) to do the job. He plans to complete it in 1998 and, after a 6-month delay, publish the data so that other scientists can see what makes this aggressive organism tick.

    That 6-month pause has raised the ire of academic genome researchers, however. TIGR instituted the delay to check for errors and also to honor an agreement with Human Genome Sciences Inc. (HGS)—also in Rockville—TIGR's profit-making partner company. The agreement gives HGS scientists an early look at new discoveries—and time to patent them. This arrangement, and the far more secretive policies of many pharmaceutical firms, has rekindled a smoldering debate over who should control DNA sequence data, and how quickly it should be shared.

    One contingent of researchers who receive public funding for sequencing says that new DNA data should be released immediately, even daily. They argue that failure to share data quickly leads to duplication and waste. They also feel that sequencing teams supported by government funds should not be allowed to lock up the data or give a favored colleague prepublication access to genetic information. And they have persuaded some sponsors to endorse a rule requiring groups funded to do high-speed sequencing to share data as rapidly as possible, without filing patents. For their part, researchers at private companies oppose immediate data release, saying that it's like asking a pharmaceutical house to give away a formula for a new drug. And there's still a third group: researchers who receive money from both private and public sources, such as those at TIGR. Many—including TIGR's chief J. Craig Venter—say the demand for quick data release is based on the arrogant notion that sequencers are mere technicians. They claim that rapid data release could encourage second-rate research.

    Pathogenic debate

    The debate over access to DNA sequence data is raging among researchers studying species from viruses to Homo sapiens. But it is especially heated when it comes to the sequencing of pathogens, where holding back data, even for a year or less, arguably could cost human lives. Yet in this area, withholding sequence data is commonplace.

    New software, the potential for enormous drug profits, and the lure of scientific discovery have triggered intense corporate interest in microbe DNA. HGS has a claim on Haemophilus influenzae, several companies are going after the ulcer bug, and many are pursuing Streptococcus pneumoniae. Every big pharmaceutical company, it seems, wants a genome it can call its own. One reason for the strong interest is that companies expect that it will be easier to use microbial genes in drugs that attack microbes than to create new drugs based on human genes. Academic centers and independent outfits such as TIGR and the Sanger Centre near Cambridge, U.K., also are competing furiously, spurred, if not by potential profits, by national rivalry and the prestige of having helped vanquish a disease.

    The potential commercial market for pathogen sequence data was evident in August 1995, when Genome Therapeutics Corp. (GTC) in Waltham, Massachusetts, became the first private company to sell the genome of a bug it had sequenced: the bacterium that causes ulcers, Helicobacter pylori. GTC says it sold its data to the Swedish pharmaceutical company Astra AB for business deals worth $22 million. According to GTC's vice president for research, Gerald Vovis, “Astra purchased the exclusive right to use that sequence database” and is not planning on making it public. He adds that it “wouldn't be unreasonable” to assume that GTC and Astra are trying to patent the genome. Others, according to TIGR staffers, have paid for H. pylori data, including the British drug company Glaxo Wellcome and a group of European vaccinemakers. None is willing to disclose how much of the genome it has acquired.

    Rogues' gallery.



    Companies are pursuing Helicobacter pylori (top) and Staphylococcus aureus (bottom).

    By withholding DNA data, commercial drug companies may gain a slight advantage over competitors. But Jean-François Tomb, a researcher at TIGR, suggests that data-hoarding is slowing down the normal review process by which scientists check one another's results for variations and inaccuracies. He is heading up an H. pylori sequencing project that TIGR launched with its own funds in 1996. One aim, according to a TIGR staffer, was to share data with the research community “as a freedom-of-information kind of thing.” Tomb says he would like to check his version of the H. pylori genome against another, so he recently asked GTC's sequencing expert Douglas Smith whether GTC would share its H. pylori data. The response, Tomb says, was “clearly no.” Tomb says TIGR's data, which HGS has already reviewed and sought patents on, will soon be published. Tomb notes that, unlike TIGR, Astra will be able to make the comparison for “absolutely zero dollars.”

    Private sequencing projects have tackled many other organisms. One hotly pursued bacterium, for example, is Staphylococcus aureus, a bug that causes hospital infections and is increasingly resistant to antibiotics. GTC parlayed its Staph aureus and other genomic data into a $44 million commitment from the drug company Schering-Plough in December 1995. HGS also made a deal centered on Staph aureus data, receiving a commitment of at least $9 million and a promise of royalties from the drug company Pharmacia & Upjohn.

    While some duplication is normal in research, experts say it's getting out of hand in microbe sequencing. Tuberculosis, like Staph aureus and H. pylori, will be sequenced many times over in part because sequencers aren't sharing data, whether for business reasons or because of interlab rivalries. The first group to take a stab at sequencing the organism was GTC. The Sanger Centre also embarked on a project to sequence a docile lab strain of TB known as H37Rv, releasing data on a daily basis over the Internet. Meanwhile, quite independently, TIGR decided to get into the TB-sequencing act and applied for an NIAID grant. The grant had already been approved when TIGR found out about the Sanger project. But rather than cancel the grant, NIAID and TIGR chiefs carved out their own niche by announcing plans to sequence a more virulent strain of TB—the Oshkosh bug.

    Several other companies have sequenced—and kept secret—parts of the Streptococcus pneumoniae genome. This lethal pathogen attacks hundreds of thousands of people a year, causing an estimated $4 billion in health costs. Incyte Pharmaceuticals Inc. in Palo Alto, California, announced in December 1996 that it will be receiving “genomic sequences” of Strep pneumo from the pharmaceutical company Eli Lilly and Co. The data will be added to other information from “one to two dozen” organisms in Incyte's private data bank called PathoSeq. (In exchange, Lilly will get access to Incyte's collection of human genetic data and possibly share royalties on products that result from the collaboration.) A Lilly staffer who asked not to be named says, “We've heard that at least six groups have done partial sequences” of Strep pneumo. So far, though, no DNA data have been published. TIGR and HGS have nearly completed sequencing Strep pneumo as well, says TIGR project leader Brian Dougherty, who adds that the two organizations are discussing when and how the data will be released.

    Private pathogens. The potential for big drug profits has sparked corporate interest in Haemophilus influenzae (top), Streptococcus pneumoniae (middle), and Mycobacterium tuberculosis (bottom).

    A communitywide problem

    Some basic scientists who study the genes of small organisms say they are offended by the hoarding and duplication of sequence data on pathogens. “It's a terrible waste of effort and money,” says molecular biologist Julian Davies of the University of British Columbia in Vancouver, who works primarily on pathogens. “It bothers me,” he adds, to see so much money going into duplicative work when first-rate academic projects are begging for help. But similar problems are cropping up in many other areas of genome research, and secrecy intensifies as research gets closer to the market.

    Experts on the fruit fly (Drosophila melanogaster) and the nematode (Caenorhabditis elegans) say their fields, which until recently attracted little commercial interest, have a long tradition of sharing data. Gerald Rubin, a geneticist at the University of California, Berkeley, and a specialist on Drosophila, says his peers consider it “unethical not to give someone a published reagent … definitely wrong,” and they view sequence data as just the starting point for a project.

    But the tradition among scientists studying mammalian genomes, according to Rubin, is “completely different.” Sociologist Stephen Hilgartner of Cornell University in Ithaca, New York, agrees. In a study now in press, Hilgartner argues that some human gene hunters have been less generous with lab materials and more willing to use subtle tactics to limit access to data.* He describes how some gene hunters and mappers have practiced “data isolation,” for example, making it hard for others to obtain bacterial clones of genes cited in their research by delaying their release or failing to identify them clearly.

    Venter agrees that refusal to share clones has been a problem. He was shunned by a research consortium in 1994, he says, when he offered to use TIGR's sequencing equipment to help locate a specific human gene.

    But by many accounts, Venter was party to one of the biggest DNA-hoarding projects of all—a joint effort funded in 1993 by the pharmaceutical company SmithKline Beecham (SB) and managed by HGS, TIGR's partner company. Its goal was to build a vast, private database containing informational “tags” from the ends of human genes. Venter was working at the National Institutes of Health (NIH) when he first proposed developing the database. The idea was to collect bits of sequence data short enough to be easily generated by robots, but long enough to be unique. These “expressed sequence tags,” or ESTs, he said, were the shortest route to the rich core of the human genome—the genes that code for proteins. When peer reviewers dismissed the project as infeasible, Venter left NIH and signed a deal to do the work with SB money.

    The TIGR-HGS collection of ESTs, according to HGS chair and CEO William Haseltine, has now been used to identify about 100,000 genes. And over the past 3 years, HGS has unleashed a blizzard of patent applications at the Patent and Trademark Office. Not only is HGS seeking more than 200 patents on full-length genes—four of which have been granted—it has filed several massive patent applications containing tens of thousands of ESTs. The EST applications are still working their way through the patent office (see p. 780).

    A religious campaign?

    HGS's vast, proprietary collection of genetic sequences quickly became a lightning rod for criticism of any attempts to lock up sequence data. Haseltine created a furor in 1994, for example, when he announced that academic researchers could use the database only if they signed an agreement to share proprietary rights to their work with HGS (Science, 7 October 1994, p. 25). The debate that ensued spurred a series of efforts to push as much sequence data as possible into the public realm.

    The first move came in September 1994, when SB's competitor, Merck & Co., announced that it would finance a sequencing project run by geneticist Robert Waterston at Washington University in St. Louis that would duplicate some of the work already done by HGS and TIGR. Merck won't disclose the cost of its Merck Gene Index, but at the conservative estimate of 30 cents per base pair, the company has spent about $50 million to sequence over 156 million base pairs of DNA. The entire collection has been deposited in the National Library of Medicine's database, GenBank. Anyone can tap into the files over the Internet and pluck out sequences for study. A Department of Energy (DOE) group called IMAGE, run by Greg Lennon at the Lawrence Livermore National Laboratory in Livermore, California, distributes the related clones, which can be used to search for detailed biological information.
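    That spending estimate can be checked with back-of-the-envelope arithmetic; the sketch below uses only the figures quoted above (156 million base pairs at a conservative 30 cents per base pair) and is illustrative, not Merck's own accounting:

    ```python
    # Rough cost check for the Merck Gene Index, using the article's figures:
    # ~156 million base pairs sequenced at a conservative 30 cents per base pair.
    base_pairs = 156_000_000
    cost_per_base_pair = 0.30  # dollars

    total_cost = base_pairs * cost_per_base_pair
    print(f"Estimated spending: ${total_cost / 1e6:.1f} million")
    # prints: Estimated spending: $46.8 million
    ```

    The result, roughly $47 million, is consistent with the "about $50 million" figure cited above.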

    Like many other academics, Rubin views the Merck Gene Index as a godsend. Had the pharmaceutical firms kept all the human data to themselves, “it would have been a disaster,” Rubin asserts. “I am extremely grateful to companies like Merck that have made available their precompetitive information. … It furthers my research and that of many people.”

    In November 1995, the Wellcome Trust announced another gift to public databases: It pledged to give the Sanger Centre $75 million over 7 years to begin sequencing the complete human genome. In February 1996, the Howard Hughes Medical Institute in Bethesda, Maryland, awarded a $2.3 million grant to Waterston's group to create a complete gene index for the mouse, a valuable tool for gene-comparison studies. And the National Center for Human Genome Research (NCHGR), part of the NIH, followed Wellcome's move in April 1996, with $22 million in support for five U.S. pilot projects that have now begun sequencing the human genome at an accelerated pace. In another effort still awaiting final approval, Wellcome is expected to announce that it will offer $25 million in grants for the sequencing of microbes. In all cases, sponsors have insisted that the data be made public rapidly.

    To reinforce this ethic, several research sponsors have adopted a series of increasingly pointed guidelines for grantees. In 1992, NCHGR and DOE jointly announced a policy novel to biomedical research: It asked grant applicants who were likely to generate “significant amounts of genome data or materials” to specify exactly “how and when” they would make the results available to the public. The policy also says grantees should not retain work for more than 6 months “from the time the data or materials are generated,” whether or not they were part of a published study.

    The Wellcome Trust and the Sanger Centre, joined by NCHGR's director Francis Collins, built on these principles in February 1996. At a meeting in Bermuda of newly funded sequencing teams, Sanger Centre director John Sulston proposed that everyone agree to release raw data on a daily basis, or “as soon as possible,” without seeking patents on the raw data. There was no audible dissent, according to geneticist David Bentley of the Sanger Centre, who was present. Bentley has defended the policy, in a Policy Forum in Science (25 October 1996, p. 533), as a way to limit duplication, stifle “inappropriate” attempts to garner early patents, and avoid giving any group preferential access to data.

    Collins endorsed the policy again when he announced the NCHGR's grant awards in April. And he added a new touch, asking grantees not to seek patents on “raw genomic sequence” data, because this “could have a chilling effect” on future investments in gene technology.

    Still, it is not yet clear just how these principles will translate into action. First, the new rules have not met with universal praise. Venter and his TIGR colleague Mark Adams, for example, recently attacked their underlying assumptions in print, arguing that the rules would encourage sloppiness and discourage researchers from trying to publish journal articles (Science, 25 October 1996, p. 534). They argue instead for release “as soon as … data have passed a series of rigorous quality control checks and have been annotated.” Also, the antipatenting rule clashes with a 1980 federal law, called the Bayh-Dole Act, that encourages federal grantees to patent their discoveries.

    And the issue of when and how to share sequence data is especially complicated when it comes to labs that take both private and public funds. TIGR's allegiance to HGS already has caused many headaches over data release (see sidebar). GTC also exists uneasily in two worlds: In addition to its private income, it received at least $37 million in grants from the U.S. government between 1990 and 1995. Yet it has released only random genomic data from parts of the microbial genomes it set out to sequence. Vovis explains that the federal grant was mainly a “technology demonstration” project, one that was never meant to yield complete genomes. But as the company notes in its annual report, the grants helped defray the company's overhead research costs.

    The debate over who should control DNA data, which has been going strong for at least 5 years, could easily continue for as many more. It is hard to predict whether the campaign for daily release of genomic data will prevail or whether the patent seekers will come out ahead in the end. But one thing is clear: The amount of genomic sequence available in public databases is growing at a breathtaking pace. Venter, for one, fondly wishes that, as a result, the “whole argument” about who owns the genes “will just go away.” But nobody is betting yet that it will go quietly.

    • * Hilgartner's essay will appear in Private Science: Biotechnology and the Rise of the Molecular Sciences, edited by Arnold Thackray, University of Pennsylvania Press, 1997.

  16. Biotechnology: Genomics's Odd Couple

    1. Eliot Marshall

    It was an unlikely alliance from the start: When J. Craig Venter and William Haseltine hitched up in 1992, they differed both in goals and in style. Venter, an ex-government researcher, wanted to build a topnotch, nonprofit institute and publish articles. Haseltine, an Ivy League supercompetitor, hoped to create a new kind of pharmaceutical company based on genetics. Still, both men signed onto an ambitious joint project backed by $125 million from the pharmaceutical company SmithKline Beecham (SB) to hunt for human genes.

    But the alliance between the “gene kings,” as they were dubbed by Business Week in 1995, has worn very thin. Venter and Haseltine are clashing over how much control each can exercise over data that The Institute for Genomic Research (TIGR) has obtained through research funded by SB and other sponsors. Says Venter, “The relationship has grown pretty distant … there are no [longer] daily, weekly, or monthly interactions.”

    Three years ago, the gist of their legal collaboration seemed simple enough. Venter's nonprofit center—TIGR—would conduct basic genetic research. Haseltine's profit-making outfit—Human Genome Sciences Inc. (HGS)—would do some research, develop medical products, and manage finances, dispensing $85 million from SB in quarterly payments to TIGR, which also would apply for government grants. In exchange, TIGR would allow HGS to preview its scientific findings and give HGS the commercial rights to all its discoveries.

    As early as 1993, however, it was evident that Venter and Haseltine were growing apart. TIGR was supposed to give the collaboration a “quick start,” says Haseltine, spitting out DNA fragments for HGS to evaluate. But according to Venter, HGS grew nervous about TIGR's plans to publish data, and, by 1993, HGS had launched its own more aggressive assault on the human genome, setting up a duplicate intramural sequencing program. After that, TIGR had “no incentive” to sequence human DNA for HGS, says Venter, so he steered his institute in a new direction: sequencing microbial genomes. But the change created fresh trouble with HGS, this time over the control of microbial sequence data.

    Relations hit a low point in early 1995. TIGR had finished sequencing the genome of Haemophilus influenzae, a cause of serious bacterial infections in children. TIGR, which had financed the project with its endowment funds, was preparing to publish a scientific report. HGS objected, invoking clauses in its agreement with TIGR that entitle HGS to delay publication of TIGR's new scientific discoveries. (According to the agreement, HGS can delay data release in a series of steps for up to 18 months while HGS files for patents and develops medical products.)

    Venter and Haseltine recall subsequent events differently. According to Venter, TIGR had to call in the legal guns to prevent HGS from interfering with the publication of H. influenzae's sequence data. Venter says such clashes with HGS have “cost hundreds of thousands of dollars in legal fees that might have been spent on research.” Haseltine, on the other hand, says TIGR and HGS “worked out a friendly accommodation” that allowed HGS to file for a patent on the organism's genes while Venter prepared a report for publication. Adds HGS Senior Vice President Bradley Lorimier, “We moved heaven and Earth … to get that first bacterial genome patent” drawn up so that TIGR could publish the paper on time. It appeared in Science (28 July 1995, p. 496).

    The bruises from that battle had not fully healed when Venter took steps late last year to put even more distance between TIGR and HGS. When TIGR, HGS, and SB created a restricted database in 1994 to share research with academics, they agreed that SB could bow out on a year's notice. Last fall, SB gave notice that it wanted out. Venter then said that TIGR would end the agreement early, open the database, and add full-length genes to it (Science, 29 November 1996, p. 1456). The resulting scuffle over who controls the database sent Haseltine and Venter back to their lawyers.

    Haseltine immediately responded with a memo to HGS's partners, a copy of which he sent to Science, saying that HGS still would be entitled to preview and patent TIGR's discoveries. Venter and his attorney say they have taken the position in discussions with HGS that TIGR will allow HGS to preview discoveries for patenting, but not in a way that would clash with a research grant's terms. For example, the Department of Energy and the National Center for Human Genome Research ask that data be kept private for no more than 6 months. TIGR argues that if it accepts one of these grants, federal policy takes precedence. Haseltine disagrees; he says TIGR may not accept grants that are “not consistent with” HGS's proprietary rights in TIGR's work.

    According to Venter, discussions about what can and cannot be published now are conducted through third parties. The gene kings, who have long been aloof, are now barely maintaining diplomatic relations.

  17. Intellectual Property: Companies Rush to Patent DNA

    1. Eliot Marshall

    Getting rich on human genes has become a fantasy for many investors in the 1990s. Big, savvy pharmaceutical companies and brash biotech start-ups are spending huge sums of money in the hope of gaining exclusive property rights to uncharted areas of the human genome. But who ends up getting rich may have more to do with their skill at navigating patent law—and with the unpredictable decisions of federal judges—than with the importance of the biology they have discovered.

    Although agencies around the world have been awarding patents based on DNA for more than 15 years, it's still not entirely clear which discoveries are patentable and which are not. One major unresolved issue is just how much biological data on the function of a DNA sequence is needed to win a patent. Applications based on whole genes whose function is well known stand the best chance of being awarded patents. But some less-than-complete gene sequences also have been patented in the past, when their commercial use was well defined.

    This question has been brought to the fore by a mass of recent patent applications that try to lay claim to thousands of genes by patenting DNA fragments that can be used to reconstruct whole gene sequences. Even if these fragments, called “expressed sequence tags,” or ESTs, are ultimately deemed unpatentable—and many experts now believe they will be—the filings could still cloud the legal picture for many years. The reason: These applications will create a priority date for the discovery of many genes, making it hard for later gene hunters to argue that they have made a truly novel discovery. This uncertainty about who can claim priority has been deepened recently by moves to place vast amounts of sequence data in public databases (see p. 777).

    Opening the floodgates. While the policy on gene fragments may be in a muddle, the notion that a whole gene can be privately owned was firmly established in 1980, when the U.S. Supreme Court ruled that Ananda Chakrabarty, a molecular biologist then working for General Electric, could patent a genetically engineered organism. Chakrabarty had spliced a gene for an oil-dissolving enzyme into a microbe, creating a bug that could clean up oil spills. The U.S. Patent and Trademark Office (PTO) initially rejected the application on the grounds that life couldn't be patented. But the court ruled that what Chakrabarty had described—although living—was an artificial substance, and that Chakrabarty had a right to patent it.

    Later decisions made it clear that even “normal” DNA sequences are considered artificial products and therefore patentable. John Doll, the PTO's biotech section chief, explains: While “nobody ‘owns’ the gene in your body, inventors can own the right to exploit it commercially. … You can't turn over a rock and find a gene.”

    Doll says that since 1980 the PTO has received more than 5000 patent applications based on whole genes and has granted more than 1500 patents on them. This estimate generally tracks the results of a study published in Nature last year by a science policy group at the University of Sussex in Britain, led by S. M. Thomas. Between 1981 and 1995, the Thomas group found, the patent offices of the United States, Europe, and Japan issued 1175 patents on human DNA sequences.

    The genes covered by these claims range from DNA coding for human interleukin and interferon—immune-system regulating proteins—to genes for bone and brain tissue. Most inventions are aimed at treating medical problems, and in the United States and Europe, more than half of the patents are held by public sector institutions. The single most valuable human DNA patent, however, may be one covering the human erythropoietin gene, which is used to produce a hormone needed by kidney disease patients. In 1991, the U.S. Supreme Court affirmed the validity of this 1987 patent, which now earns its owner, Amgen Inc. of Thousand Oaks, California, more than $1 billion a year.

    A fragmented picture. Just as the legal picture seemed to be clearing, with patent offices and the courts upholding claims based on whole genes, it was thrown into turmoil again in 1991 by the U.S. National Institutes of Health. NIH filed a set of applications for patents on thousands of EST gene fragments. Private companies have since staked their own claims on DNA fragments covering most of the genes in the human body.

    Between 1992 and 1996, for example, Human Genome Sciences Inc. (HGS) of Rockville, Maryland, together with its nonprofit partner The Institute for Genomic Research (TIGR), also in Rockville, launched a factorylike effort to sequence gene fragments. HGS, which owns the commercial rights in this enterprise, applied for scores of patents on ESTs. Incyte Pharmaceuticals Inc., of Palo Alto, California, set up a similar project. Today, at least 350 patent applications, covering more than 500,000 gene tags, are pending at the PTO. The largest single application contains 18,500 sequences.

    The ultimate fate of these monster claims is far from certain. In 1993, the PTO rejected NIH's application in a preliminary ruling, largely because NIH had not explained how the gene fragments, whose biological function was unknown, would be used commercially. Harold Varmus, who became director of NIH in 1993—and who had come under pressure to abandon the claim—decided not to appeal. But companies with big investments in gene fragments—particularly HGS and its main corporate funder, SmithKline Beecham (SB)—are continuing to argue the case for EST patents.

    SB's chief of research George Poste asserted in Nature a year ago that patenting ESTs “is no different from the patenting of other biomarkers such as the BRCA1 breast cancer gene” whose functions are unknown. And both Poste and HGS CEO William Haseltine have tried to persuade skeptics that ESTs are patentable as research tools. But these arguments have drawn derisive responses from others in the biotech community hoping to make money from sequence data. Mark Hoffer, counsel to Genzyme Corp. of Cambridge, Massachusetts, a developer of genetic tests and medical products, for example, likens them to “filing a claim on miscellaneous bolts” and arguing “they could be used to make a car.” Even PTO Commissioner Bruce Lehman has said, “A lot of this stuff is just data.” And as he points out, data alone aren't patentable.


    PTO Commissioner Lehman, overwhelmed by DNA fragments.


    So far, the PTO hasn't taken legal action on EST filings other than those from NIH. Nor has the PTO appeals board or any court touched this issue, because NIH never appealed the rejection of its filings. But Lehman is trying a clever tactic to reduce the backlog of EST patent claims, which he calculates would take his entire biotech staff a year to sort through if it did nothing else. Last October, after consulting with industry, Lehman issued a ruling that no application may contain more than 10 DNA sequences. As a result, companies would have to file thousands of new applications—at $400 to $800 per shot plus legal fees—to maintain all their current claims. “I think our policy will cause the companies to focus on what is the real innovation they're coming up with here,” says Lehman dryly.
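    The scale of that refiling burden can be sketched from figures quoted earlier in this article (roughly 500,000 pending gene tags, fees of $400 to $800 per application); this is an illustrative calculation, not an official PTO estimate:

    ```python
    # Rough refiling burden under the 10-sequence-per-application limit,
    # using figures quoted in the article: ~500,000 pending gene tags and
    # filing fees of $400 to $800 per application (legal fees excluded).
    import math

    pending_tags = 500_000
    sequences_per_application = 10
    fee_low, fee_high = 400, 800  # dollars per application

    applications = math.ceil(pending_tags / sequences_per_application)
    fees_low = applications * fee_low
    fees_high = applications * fee_high
    print(f"{applications:,} applications, ${fees_low:,} to ${fees_high:,} in fees alone")
    # prints: 50,000 applications, $20,000,000 to $40,000,000 in fees alone
    ```

    Even before legal fees, maintaining every current claim would cost tens of millions of dollars, which is presumably the focusing effect Lehman has in mind.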

    But even if these applications do not become patents, they could still have a long-term impact on the biotech business. Companies such as HGS and Incyte, for example, hope to split off subsidiary claims covering specific genes after they have investigated them more thoroughly, using the initial EST filing date to establish that they were the first discoverer of the genes. Doll, for example, says there is some concern that a company with thousands of ESTs in its portfolio could create “submarine patents” that lurk unseen for years and surface later, when the company decides to take advantage of an early discovery date. But Doll also notes that a new international agreement limiting the life of a patent to 20 years from the date of filing will put a lid on such submarines.

    A public threat? Academic scientists, funding agencies, and at least one major pharmaceutical company have launched a counteroffensive to undermine large proprietary claims on the human genome by encouraging researchers to deposit sequence data in public databases. Some agencies are making quick release of data a condition of their grant awards. These moves may have undermined the value of private EST databases amassed by HGS, Incyte, and others. And some analysts worry that the rush to make genetic sequences public may also undermine the patentability of whole genes in the future.

    Poste argued in his article defending gene patents that by publishing DNA data before filing for patents, researchers may render genes “obvious” under patent law, making them unpatentable. And investment analyst Matthew Murray of Lehman Brothers in New York expressed a similar concern in a paper on genomics last September. “The opportunity for genome companies to capitalize on gene discoveries is somewhat limited,” Murray wrote, by the “rapid progress being made” by projects that release DNA data quickly. “Obtaining gene patents will become more problematic once the entire human genome sequence is in the public domain.”

    The impact of public databases and the fate of the EST applications are just two of the uncertainties that hang over the world of gene patents. “It makes a lot of people nervous,” says David Galas of Darwin Molecular Inc. in Seattle, but “there's no way of answering these questions until you see what happens” in court. Says attorney Reid Adler of the Morrison & Foerster law firm in Washington, D.C., who filed the first of these EST patent claims for NIH: “I'm sure most of the genes will be identified before these issues are resolved in the courts.”

  18. Federal Regulation: Gene Tests Get Tested

    1. Eliot Marshall

    If any company is the coal miners' canary of genomics, it is Myriad Genetics Inc. of Salt Lake City. Last fall, Myriad became one of the first companies to bring a genomics invention to market—a diagnostic called BRCAnalysis that spots mutations in the breast cancer susceptibility genes BRCA1 and BRCA2. It looked like a sure winner: a test that would give women from families in which breast cancer is rife a chance to know whether they carry the genetic defect. But instead of reeling in the cash, Myriad has run into a series of obstacles—including concern about the need for federal regulation of the field—that suggest genetic testing may not lead to the quick commercial payoff some had predicted.

    Myriad's apparent difficulties reflect uncertainties facing any company hoping to strike it rich in diagnostics. Many tests will involve genes like BRCA1 and BRCA2, which are associated with disease in certain families but whose function is poorly understood, so patients—and most physicians—will have trouble interpreting test results. Indeed, some professional groups, including the American College of Medical Genetics, have recommended that genetic tests be used only in research projects until their accuracy and validity are proved. For patients who test positive, these uncertainties may be compounded by fears of discrimination.

    When Myriad launched a national campaign to market BRCAnalysis with a price tag of $2400 last October, it was putting these issues to the test for the first time. Many stockbrokers were upbeat. Matthew Murray of Lehman Brothers in New York commented in September that the breast cancer test would be “the most immediate pathway to cash flow from genomics. … We believe that there will be strong demand for this test.” Myriad appeared to be brimming with optimism, too: As it began marketing its test, the company unveiled plans to raise cash by selling $43 million in new stock.

    Slow start.

    Myriad's $2400 diagnostic faces potential regulatory hurdles.


    On 25 November, however, Myriad pulled the stock offering off the market with a terse statement: “The company has decided to withdraw [the stock] … because it believes that [market] conditions are not favorable to going forward at this time.” Myriad has not released data on sales of BRCAnalysis, but company spokesperson William Hockett claims the turnabout on the stock was unrelated to any problems with the test; it was just a matter of waiting for a better time to sell stock.

    But some investment experts believe that Myriad's decision also reflects the inherent problem of trying to move rapidly when so many issues are unresolved. For instance, Reijer Lenstra of the Smith Barney firm in New York says, “There are concerns about what the test means and who should be getting it. … This is not an easy thing to market; forget a quick launch.”

    If Lenstra is right, Myriad's experience adds urgency to a major goal of the biotech industry: clearing away a thicket of social and regulatory issues that may undermine confidence in genetic testing. Just how—or who—should deal with these issues is, however, a matter of some debate.

    So far, the Food and Drug Administration (FDA) has indicated that it has authority only to regulate the safety and efficacy of tests that are sold as prepackaged “medical devices,” such as test kits for HIV. In-house laboratory testing of the kind offered by Myriad and OncorMed of Gaithersburg, Maryland, doesn't fall within its purview.

    And as Patricia Murphy, vice president of OncorMed, made clear last July when she testified before the Senate Labor and Human Resources Committee, industry leaders are happy to keep FDA out of the picture. Instead, Murphy recommended that testing be monitored by the states and by the federal agency that upholds the Clinical Laboratory Improvement Act (CLIA)—the Health Care Financing Administration of the Department of Health and Human Services.

    But some independent experts are not convinced that the CLIA system can do the job. Pediatrician N. Anthony Holtzman of Johns Hopkins University in Baltimore and geneticist Michael Watson of Washington University in St. Louis, who co-chair an independent government advisory group called the Task Force on Genetic Testing (TFGT), both point to the same weakness: CLIA checks only for lab quality; it does not address the important question of “validity”—whether a test result actually makes a valid prediction about what is likely to happen to the patient. Risk estimates derived from studies of large families with a high cancer incidence, for example, may not hold up in the general population. Watson argues that it would be best to restrict use of genetic tests until their validity is well established.

    The TFGT, which reports to Francis Collins, director of the National Center for Human Genome Research, has drawn up draft guidelines including a suggestion that the federal government create a committee to monitor the quality and validity of genetic tests. The group's final report, due in February, is likely to have a lasting impact.

    Even more important to the future of the genetic testing business is finding ways to ensure the privacy of people who test positive for a high-risk gene. Congress passed a law in 1996 that makes it illegal for companies to deny health-insurance coverage to workers just because they have medical risks—including genetic risks. But Collins says much broader protections are needed to ensure that people aren't discriminated against by employers and insurance companies.

    Collins is not alone: Industry insiders say that genetic testing will remain under a cloud until protections are firmly in place. They point to preliminary data from cancer researchers at Georgetown University in Washington, D.C., and the University of Pennsylvania, Philadelphia, showing that fewer than half the women who were offered a BRCA test last year accepted it.

    The biotech industry is now pushing for stronger privacy laws. In September, the board of directors of the Biotechnology Industry Organization (BIO), a Washington, D.C.-based lobby, adopted a statement saying that “Congress should enact a comprehensive bill” guarding the confidentiality of medical records. BIO argued that “privacy standards should be national in scope.”

    When the 104th Congress ended in 1996, nearly a dozen bills designed to prevent genetic discrimination were left on the agenda. Many of these proposals are now being reintroduced for what's shaping up to be a critical year for genomics companies and genetic testing.
