News this Week

Science  30 Apr 2004:
Vol. 304, Issue 5671, pp. 658



    Why a New Cancer Drug Works Well, In Some Patients

    Jean Marx

    The deadliest cancer in many developed countries is lung cancer. In the United States, it kills roughly 160,000 men and women every year, largely because it has spread by the time it's diagnosed. Recently, however, a new drug called gefitinib (Iressa) has given patients a glimmer of hope, while presenting a puzzle: Iressa causes significant tumor shrinkage in only about 10% of the patients who take it, but when it does work, clinicians say, it works amazingly well. “We see a complete Lazarus type of response that we've never seen before,” says lung cancer specialist David Carbone of Vanderbilt University School of Medicine. New results now provide an explanation for why Iressa is so much more effective in some patients than others—information that should lead to better treatments for those patients and perhaps for other cancers as well.

    In a paper published online by Science on 29 April, a team led by Matthew Meyerson, Bruce Johnson, and William Sellers of Harvard's Dana-Farber Cancer Institute in Boston reports that the tumors of patients who respond to Iressa carry mutations in one of their proteins, the receptor through which epidermal growth factor (EGF) sparks cell growth. In addition, Daniel Haber and his colleagues at Massachusetts General Hospital in Boston report very similar findings in a paper that will appear in the 20 May issue of the New England Journal of Medicine and is also being published online this week. The work suggests that the mutations drive lung cancer growth and that Iressa—which inhibits EGF receptor activity—can kill cancer cells bearing the mutations.

    Cancer experts are hailing the results. Clinicians should now be able to screen lung cancer patients to see if their tumors carry the EGF receptor mutations and are thus good candidates for Iressa therapy. Patients might then take the drug immediately, in hopes of warding off lethal tumor growth, instead of very late in the disease, as has usually been the case so far because most of the patients have been clinical trial participants. “If [the mutated EGF receptor] is a highly predictive marker [of Iressa responses], it would be the single most important finding in lung cancer genetics ever,” enthuses Carbone.

    Indeed, Harold Varmus, former director of the U.S. National Institutes of Health and currently president of Memorial Sloan-Kettering Cancer Center in New York City, describes the finding as a “victory for molecular medicine.” Iressa, a product of AstraZeneca, is one of a new generation of cancer chemotherapy drugs aimed at remedying the specific molecular defects that cause cancer (Science, 18 October 2002, p. 522). So far, results of clinical trials with such agents have been modest, except for a drug called Gleevec, which is used to treat a type of leukemia and certain rare cancers.

    Lazarus response.

    As shown by this series of CT scans, Iressa has been able to clear some patients' lung tumors, although it has proved effective only for a small minority. New results explain why some patients are more susceptible to the drug.


    But Brian Druker of Oregon Health & Science University in Portland, a pioneer in Gleevec development, says that the Iressa results show that defining a cancer's defects more precisely should improve the odds of finding a successful drug. “Get a molecular definition of a tumor and you'll know what it will respond to,” Druker predicts. “Even advanced cancers have Achilles' heels.”

    Iressa was originally tested on patients with a type of cancer called non-small cell lung cancer (NSCLC), because the EGF receptor is overly active in the tumors of about 70% of such patients. NSCLCs represent about 85% of all lung cancer cases. The fact that only 10% of the patients responded implied that their receptors might somehow be different from those of nonresponders. To find out what that difference might be, the Dana-Farber and Mass General teams began sequencing the receptor from the tumors of NSCLC patients. The two groups found similar results.

    In their first survey, Meyerson, Johnson, Sellers, and their colleagues found EGF receptor mutations in 15 of 58 tumors from Japanese NSCLC patients but in only one of 61 tumors from U.S. patients. That was interesting because clinical trials had shown that Japanese are three times more likely to respond to Iressa than Americans are. The mutations were also more common in women, nonsmokers, and patients with the adenocarcinoma form of NSCLC—all characteristics that had been linked to a higher likelihood of responding.

    In further experiments, the Dana-Farber group found mutations in all five responders they examined but in none of the four nonresponders. Similarly, in a group of U.S. patients, Haber and his colleagues found mutations in eight of nine responders but not in any of the seven nonresponders. “It's pretty clear the mutations are happening in the patients who respond to Iressa and not in those who don't,” Meyerson says. Other teams, including those of Carbone and Varmus, also have unpublished evidence for this conclusion.

    The EGF receptor is a kinase, an enzyme that regulates proteins by attaching phosphate groups to them. Both groups found that the mutations, either amino acid substitutions or small deletions, cluster near the catalytically active site of the receptor kinase, which is also where Iressa binds. Experiments on cultured cells by Haber and his colleagues showed, he says, that the mutant receptors “signal at a higher level in response to EGF and don't turn off.” Thus, the mutations may confer a growth advantage on tumor cells while making them much more susceptible to Iressa.

    Although the new findings are good news for some lung cancer patients, they are in some ways a mixed bag. Iressa may well have to be taken for life—at a cost of $2000 to $3000 per month. And the market for Iressa seems likely to narrow—not what manufacturers want to hear. Still, researchers say there may be counterbalancing effects.

    EGF receptors in other kinds of cancers might carry similar mutations, in which case Iressa might be used to treat them, too. Also, the ability to identify a susceptible subgroup might mean that clinical trials can be smaller—and thus much cheaper to conduct.


    White House Budget Blowout

    David Malakoff

    Federal spending on nondefense research could fall significantly by 2009 if Congress goes along with President George W. Bush's long-range budget plans, according to an analysis released last week by AAAS (publisher of Science). Most agency science budgets won't keep pace with predicted inflation if the White House prevails in its efforts to expand tax cuts, halve the half-trillion-dollar deficit, and increase defense spending. For the full forecast—which is based on a range of assumptions about the next 5 years—see


    Mounting Lab Accidents Raise SARS Fears

    Dennis Normile

    For the third time in less than a year, an outbreak of severe acute respiratory syndrome (SARS) seems to have originated from a failure in laboratory containment. This latest incident, revealed in China late last week, is the most serious. One death is tentatively attributed to it, eight people are confirmed or suspected of contracting the disease, and hundreds have been quarantined. The apparent lapse is especially troubling because it occurred in China's leading SARS research lab, at the Center for Disease Control and Prevention (CDC) in Beijing. Also disturbing, experts say, is that one lab worker traveled widely while she had symptoms of the disease.

    Despite that lapse, Chinese authorities now seem to have the situation under control, says Robert Dietz, spokesperson for the Beijing office of the World Health Organization (WHO). The two earlier lab accidents, in Singapore in September (Science, 19 September, p. 1642) and Taiwan in December (Science, 2 January, p. 26), did not result in spread beyond the affected workers.

    Two of the Chinese cases, a 26-year-old graduate student identified by her surname, Song, and a 31-year-old male postdoc named Yang, were both apparently exposed in the same lab at the Institute of Viral Disease Control at the Chinese CDC. But Dietz says that the onset of symptoms likely indicates two separate exposure events. Song developed SARS symptoms on 25 March; Yang, on 17 April. The prospect of two separate exposure incidents suggests “some sort of systemic or procedural failures” in the lab, Dietz says. But he warns against any conclusion until the origin of the infections is confirmed.


    Hundreds of people in China have been isolated after exposure to a SARS patient.


    Regardless of the source, the outbreak may show “a failure in applying guidelines to monitor the health of the people who work in these labs,” Dietz says. After working in the lab in Beijing, Song returned home to Anhui Province. When she developed a fever, she traveled to Beijing by train, where she was treated at a hospital and released. Accompanied by her mother, she returned to Anhui by train, entered a different hospital, and was later transferred to a third. On 19 April her mother died of pneumonia, presumably a SARS victim. A nurse who attended Song at the Beijing hospital contracted SARS and has apparently spread the infection to several members of her family.

    Despite the initial delay, says Dietz, “once SARS was recognized, [China] ramped up [its response] immediately.” Authorities have closed the virus lab, and more than 200 institute employees have been quarantined. Another 400 who had contact with suspected SARS cases have also been quarantined. Dietz says WHO expects to have a team of two or three biosafety experts in Beijing this week. They will work with Chinese colleagues from the Ministry of Health to try to pin down the source of infection. WHO is forming other teams to work on epidemiology and infection control. A fourth team will be sent to Anhui for investigations there.


    Summit Pledges Global Data Sharing

    Dennis Normile

    TOKYO—Representatives from 47 nations have endorsed a 10-year plan to share Earth-observation data, identify gaps in observational efforts, and come up with ways to fill them. The agreement, reached at the second Earth Observation Summit here on 25 April, could help improve forecasting of abnormal weather, understanding of climate change, and management of natural resources.

    The idea behind the effort, called the Global Earth Observation System of Systems, is straightforward: Dozens of observational systems are now generating reams of data that could be far more powerful if they were combined and widely disseminated. But achieving that goal means overcoming major technical and political hurdles. “We're just at the start of resolving a lot of issues,” says Akio Yuki, deputy minister of Japan's Ministry of Education, Culture, Sports, Science, and Technology.

    “One of the big barriers to combining these systems is to find common data formats,” says Paul Gilman, assistant administrator for research at the U.S. Environmental Protection Agency. That will be the first order of business under the new plan. Another hurdle will be agreeing on what data will be shared. Yuki says Japan, for example, won't disclose fisheries data that could help Chinese and Korean fleets plying their shared oceans. Some countries also may be reluctant to share information that has national security value.

    Four-square for cooperation.

    Earth Observation summiteers include (from left) American Conrad Lautenbacher, South Africa's Rob Adam, Japan's Akio Yuki, and the European Commission's Achilleas Mitsos.


    There are also fiscal challenges ahead. Maximizing the benefits from existing observational schemes “will require a significant effort in capacity building for developing countries,” says Patricio Bernal, executive secretary of the International Oceanographic Commission. A current global network to monitor sea-level conditions, called GLOSS, has fallen short of its goals, says Conrad Lautenbacher, head of the U.S. National Oceanic and Atmospheric Administration, in part because of a lack of local capacity. The costs of such efforts “will need to be calculated,” says Takeo Kawamura, Japan's minister of education. U.S. officials have said that the industrialized world needs to stand ready to foot the bill.

    The idea of a “system of systems” was first proposed by the United States and endorsed by the eight largest industrialized nations at a summit meeting last summer in Washington, D.C. The effort “is government-led but science-driven,” says Rob Adam, director-general of South Africa's Department of Science and Technology and co-chair of an ad hoc group set up after last year's summit.

    Despite the slew of unresolved issues, participants at the second summit are satisfied with the plan they have adopted. “It seems a simple paper,” says Achilleas Mitsos, director-general for research at the European Commission. But behind the parchment lies a significant amount of work, he says, and the hope that it “will lead to action and not just wishful thinking.”


    Reorganization Plans Roil Staff Scientists at CDC

    Jocelyn Kaiser

    Over the past month, anxiety mounted at the Centers for Disease Control and Prevention (CDC) in Atlanta as Director Julie Gerberding floated options for overhauling the agency. Staff were relieved last week when Gerberding chose a plan that largely preserves the $6.5 billion agency's current structure but brings disparate units under joint management. Still, some CDC scientists say Gerberding paid lip service to their suggestions and instead rolled out a corporate model for CDC that may not be appropriate for a public health agency.

    Gerberding dismisses the grumbling as part and parcel of a reorganization, the first at CDC in over 20 years. “I would be worried if people were not apprehensive,” she says. Some concerns—such as a rumored reallocation of research funds—are unfounded, she says: “Nothing is likely to change very much for the intramural scientists at CDC.”

    But scientists themselves aren't convinced. Their biggest gripe is that the plan, called the Futures Initiative, is couched in vague “business-speak” that nobody understands, several staffers say. They feel it was developed top-down, without the support of rank-and-file staff. “People really do think it's being rammed down their throat,” says Frank Bove, an epidemiologist in CDC's Agency for Toxic Substances and Disease Registry. And the plan comes at a time when morale is especially low. Scientists have been reluctant to speak up for fear of retribution. “The level of apprehension is astounding,” says one staffer.

    Under fire.

    CDC Director Julie Gerberding's corporate model for improving CDC has scientists worried.


    Gerberding says that new global challenges in the past 2 years, such as the anthrax scare and last year's outbreak of the SARS virus, required the staff to work across the agency—and they exposed weaknesses in CDC's management structure. Add to that CDC's mission to prevent chronic health problems, such as the rise in obesity, and it became clear that the agency needs more “agility and speed and impact,” she told Science.

    Gerberding launched the Futures Initiative last June by surveying CDC staff and so-called customers, such as state public health officers. Her team came up with two “overarching goals”: health promotion, or prevention of disease; and preparedness for outbreaks. On 1 April, the team also unveiled three possible “design prototypes” for the agency. Summarized in diagrams, these plans provoked howls of protest. An e-mail from the Global Immunization Division complained, for example, of “few understandable concepts to which we can respond.”

    In response to the concerns, Gerberding announced on 19 April that she is going with the “least disruptive” plan. She will retain all 11 divisions but cluster them under shared leadership where appropriate—grouping, for example, the three infectious-disease centers. Quelling another rumor that had drawn protest from outside CDC, she will not abolish CDC's Office on Smoking and Health, says CDC spokesperson Tom Skinner.

    Even so, staff remain nervous about Gerberding's still-sketchy plan, especially the introduction of two new elements—a “marketing function” and agencywide goals with yardsticks for measuring success.

    Even Gerberding notes that her timing is not ideal; the agency “is experiencing a lot of stress” from global disease outbreaks, she says. Many senior scientific staff are leaving, in part because of Department of Health and Human Services (HHS) requirements that some CDC staff be ready to deploy on missions (Science, 3 October 2003, p. 49). Outside the agency, there is “concern that CDC is losing its edge,” says epidemiologist Tom Novotny, a former HHS official now at the University of California, San Francisco.

    CDC virologist James LeDuc, who thinks the problem is “semantics,” predicts that, “by and large, it's going fine.” But whether Gerberding can clearly enunciate to the rest of her staff what direction CDC is heading—and win their support—is yet to be seen.


    Farsighted Report on Flooding Augurs Economic Waterloos

    Daniel Clery

    CAMBRIDGE, U.K.—The annual cost of damage from floods in the United Kingdom could soar to about $48 billion—20 times the current figure—in the coming decades if drastic steps are not taken to deal with the threat, says a wide-ranging report commissioned by the government. The Foresight report on Future Flooding gazed up to 100 years into the future, taking into account factors such as climate change, economic growth, and urbanization. The main message: The status quo of flood defenses is not good enough. “It's quite a dire picture,” says the report's lead expert, Edward Evans, a water engineer at the University of Glasgow. “We can't go on building walls higher.”

    Drawing on the expertise of 60 researchers, the pioneering report forecasts floods according to four socioeconomic scenarios ranging from “world markets,” marked by a surging global economy and high greenhouse gas emissions, to “local stewardship” involving greater community involvement and environmental awareness. Factors driving elevated risk include rising sea levels and extreme weather events associated with climate change; urbanization, which could put more housing in flood-prone areas and increase rainwater runoff; and regulations that restrict flood defenses.

    Barrier to bad tidings.

    New report calls for urgent measures to bolster flood defenses, such as London's Thames Barrier.


    Perhaps unsurprisingly, the study predicts that the most devastating floodwaters would rise in the markets scenario. Today, 1.6 million Britons are at risk of flooding, and $3.9 billion is spent on defenses and on mopping up the damage each year. The world markets scenario suggests that, if the government were to take no action, the threatened population could swell to 3.6 million and the costs increase to $48 billion per year by 2080, due to climate change, development in flood-prone areas, and increased value of threatened properties. But even in the more benign local stewardship scenario, risk and costs are predicted to escalate if nothing is done.

    Although there is no silver bullet, the report's authors argue that a range of steps could sharply curb risks. “We need a complex bundle of responses,” says Edmund Penning-Rowsell, head of the Flood Hazard Research Centre at Middlesex University. These could include diverting rising waters into temporary storage pools rather than letting runoff overwhelm city drainage systems, dredging or widening rivers to increase capacity, and beefing up sea walls and river defenses such as the Thames Barrier, which protects London. And although climate change accounts for around a quarter of the flood damage potential, the report points out that any greenhouse gas cuts would slow sea level rise only a half-century down the road.


    Earliest Signs of Human-Controlled Fire Uncovered in Israel

    Michael Balter

    If you want to ignite a debate among archaeologists, just ask a simple question: When did humans first control fire? In recent decades, scientific journals have been ablaze with claims and counterclaims about this crucial step in human development. Now an Israeli team adds more fuel, reporting on page 725 new findings that push the earliest credible evidence back to 790,000 years ago—more than three times earlier than the previously accepted date. Surprisingly, however, this claim may be strong enough to damp down the debate rather than stoke it up.

    Archaeologists caution that the possibility of natural fires can never be entirely excluded at such an ancient site, and a few would like to see a bit more evidence before they start celebrating. But this time the skepticism is noticeably subdued. “I think they have made by far the best case yet for humanly controlled fire before 250,000 years ago,” says Richard Klein of Stanford University in California. Paola Villa of the University of Colorado, Boulder, agrees: The paper “provides very strong evidence of the use of fire by early humans,” she says. If the claim is substantiated, it may help explain how early humans were able to push into the chillier climate of Europe after 800,000 years ago.

    The new evidence comes from Gesher Benot Ya'aqov (GBY) in northern Israel, an open-air site known for its excellent preservation of wood remains (Science, 11 August 2000, p. 944). A seven-member interdisciplinary team, led by archaeologist Naama Goren-Inbar of Hebrew University in Jerusalem, conducted a painstaking analysis of the distribution of burned wood, plant seeds, and flint artifacts at the site. GBY is located on the shore of an ancient lake, and the organic remains were waterlogged, helping preserve them nearly intact. The team's archaeobotanists sifted through 23,454 seeds and fragments of fruit and 50,582 pieces of wood looking for burned specimens, which turned out to be a tiny percentage of the total in each case. Meanwhile, the archaeologists examined large numbers of flint artifacts; again, only a small proportion had been burned. And the burned flint was found in discrete clusters within the site's stratigraphic layers, which the team interpreted as evidence of hearths in specific locations. “This attention to detail shows why traces of fire have been missed in the past,” says Clive Gamble of the University of Southampton, U.K. The fact that fewer than 2% of the flint and wood fragments were burned rules out possibilities such as lightning-sparked wildfires, which the team argues would have burned a much higher percentage.


    Ancient fire left holes (arrows) in this burned grass seed from Israel.


    Although archaeologists who spoke to Science were enthusiastic about the findings, they caution that foolproof evidence for control of fire tends to come only from well-preserved hearths in cave sites, most of which are less than 250,000 years old. Ancient, open-air sites like GBY are notoriously difficult to interpret. The cave site of Zhoukoudian in China was once thought to be the oldest example of controlled fire use, dating to 300,000 to 500,000 years ago, but even the evidence there has been found wanting (Science, 10 July 1998, p. 251).

    Although it is “likely” that the burned wood at GBY comes from hearths, says charcoal expert Eleni Asouti of the Institute of Archaeology in London, “real evidence” would require proof that the wood fragments came from the same locations as the burned flint. But such stringent proof is rarely available even at much younger sites. Mirjana Stevanovic of the University of California, Berkeley, who spent years tracing the sources of fires in Neolithic houses in the former Yugoslavia, agrees that the case would be stronger with “more information on the spatial distribution of these finds.” Nevertheless, Stevanovic concludes, “the claim is impressive.”

    Indeed, archaeologists say, if people were using fire at GBY 800,000 years ago, it might help explain the history of early humans in northern latitudes such as Europe. GBY sits in the middle of the so-called Levantine corridor, “at the crossroads out of Africa,” says Gamble. Thus it may be no coincidence that the earliest substantiated human sites in Europe also begin to appear right around 800,000 years ago. Says Colorado's Villa: “The colonization of Europe, where temperatures probably dropped below the freezing point at times, is generally tied to the use of fire.”


    Boston Weighs a Ban on Biodefense Studies

    Andrew Lawler

    BOSTON—A deeply divided city council has asked for legal advice on whether it can ban scientific research it deems dangerous to the community. The target is a $178 million biosafety level 4 (BSL-4) facility funded by the National Institutes of Health (NIH), to be built by Boston University (BU) near the city center. Even lab opponents say they are unlikely to win an outright ban, but a stormy 5-hour hearing on 20 April demonstrated that scientists disagree sharply over the facility's value, community activists are outraged, and BU is on the defensive.

    NIH chose the university in October as one of two private sites for the high-security labs, which will conduct research into infectious agents—including those which might be used by terrorists. The other, in Galveston, Texas, has encountered far less opposition.

    BU officials insist that their lab, which is awaiting land-use and environmental-impact reviews, will be a plum for the city. They note it will be designed with advanced safety and security systems, bring in construction jobs and high-tech positions, improve the public health, and cement Boston's position as a leader in biological and biotech research. But opponents say the dense urban site is a poor choice, given the expected traffic in deadly materials. They also question whether BU will control all the research, which they claim could include secret studies.

    The battle over the lab's fate, which began last fall and is likely to rage into the autumn, is turning heads even in a city known for the gusto of its political brawls. On City Hall Plaza, protestors in white biohazard suits carrying FedEx packages protested in the bright spring sunshine, while nearby vendors sold the day's newspapers with a full-page ad paid for by BU touting the lab's benefits. Inside, a city council member sharply accused opponents of engaging in “scare tactics” while one onlooker's angry outburst led to his ejection from the council chambers.

    BU senior vice president Richard Towle testified that it was highly unlikely any dangerous material could be released, because in more than 70 years of total operating time, no U.S. BSL-4 lab has had a serious containment failure. BU officials promise a series of impenetrable barriers: advanced biometrics for personnel identification, layers of security, and thick concrete walls. But neighbors are skeptical. “Nothing can be built” to work as a perfect container, said Dolly Battle, who chairs Safety Net in the nearby neighborhood of Roxbury.


    Protestors outside Boston City Hall vigorously oppose construction of an advanced biolab in their neighborhood.


    Towle said that NIH had recently given its assurance that BU will be fully responsible for research done at the lab, adding that “there will be no bioweapons and no classified research at this facility.” But David Ozonoff, a BU public health professor, told the council that BU will not have the authority to determine what kinds of research can and cannot be done in the lab. Although NIH does not do classified research, he noted, it does partner with organizations that do, such as the U.S. Army, which might want to use BU's facilities. And if classified research goes on at the lab, “federal law will not permit them to disclose” that, Ozonoff added.

    Opponents also charged that the lab will do more to threaten than promote public health. Ozonoff, who first backed the new facility last year, has changed his mind. The lab “is not likely to meet a public health need—and it may make us less safe” by creating new biological agents that could be used by terrorists, he told the council. He is not alone. More than 140 scientists, physicians, public health professionals, and academics wrote to Boston Mayor Thomas Menino 13 April, arguing that the lab poses “real and catastrophic risks to the health and safety of people in the local and surrounding communities.” Harvard Medical School cell biologist Daniel Goodenough adds that the benefits are likely to be so meager, and the threats so great, that “the biolab should not be built at all.”

    City council members held off consideration of an ordinance prohibiting BSL-4 research but asked the corporation counsel to research legal precedents for banning research. Four members present were highly critical of the facility and likely would back such an ordinance. But council member Maura Hennigan—who supports a research ban—told Science that she's not optimistic it would pass, given that trade unions and Menino favor the lab. “We may have case law on our side, but if [Menino] supports the project, then I don't think the council will go against it.” Opponents still could throw up roadblocks before construction is due to begin in the first half of 2005. A draft environmental-impact statement will be released around June, and the Boston Redevelopment Authority must issue permits, a process that doesn't begin until fall.


    Resetting Pregnancy's Clock

    Ingrid Wickelgren

    A complex exchange of signals between mother and fetus determines when labor begins. By listening in, researchers hope to find ways to prevent or delay premature labor

    Premature babies look impossibly fragile. Their balled fists are as small as walnuts; their skin is translucent; tubes sprout from their bodies, connecting to devices that sustain life outside the womb.

    In the United States, one out of eight babies is born prematurely—before 37 weeks instead of the usual 40—a rate that has increased 27% over the past 2 decades. Most preterm infants survive, many after a long stay in a neonatal intensive care unit, but they are at risk for disabilities such as mental retardation, cerebral palsy, lung and gastrointestinal problems, and vision and hearing loss. “The medical and nonmedical expenses associated with preterm birth probably exceed those of any other disease,” says Roberto Romero, a specialist in maternal-fetal medicine at the Detroit campus of the National Institute of Child Health and Human Development (NICHD).

    For decades, researchers have had surprisingly few explanations for premature labor. Asymptomatic uterine infections, detected by the presence of microorganisms in the amniotic fluid, account for one-third to one-half of the cases, but the rest have no clear cause. As a result, preterm labor is very hard to predict, prevent, or stop.

    But bright spots are emerging on the research landscape. Scientists have identified a human pregnancy clock powered by stress hormones that may be affected by diet and other factors. This could lead to early detection and perhaps prevention of some premature labor. Researchers are also unraveling the molecular events that unfold just before and during normal labor. Inflammation now appears to be central to the labor process, whether at term or preterm. Bacterial infections, allergic reactions, the stretching of uterine muscle, and the manufacture of a hormone from the fetal lung can all spawn labor through inflammatory pathways, new research shows. Other data reveal mechanisms underlying the body's receptiveness to the pregnancy-prolonging hormone progesterone.

    This cascade of findings is already pointing to potential targets for labor-stopping drugs that may eventually replace today's so-called tocolytics, which postpone labor for no longer than 48 hours and have potentially serious side effects. In the meantime, new methods for distinguishing true from false labor might ensure that tocolytics are given only to women who need them (see sidebar, p. 668).

    These developments and the potential payoffs are beginning to attract funds into a field that has long been short of cash. Last year, for example, the March of Dimes pledged $75 million over 5 years to fight premature birth, in part through research into its causes. “It's a really exciting time,” says endocrinologist Roger Smith of the University of Newcastle in Australia. “There have been major changes in our understanding of preterm labor over the last couple of years.” Adds molecular endocrinologist Carole Mendelson of the University of Texas Southwestern Medical Center in Dallas, “The mechanisms explaining how labor starts and progresses are finally starting to emerge.”

    Like clockwork

    The initiation of labor is driven by shifts in the levels of key hormones, chiefly estrogen and progesterone. During most of pregnancy, an abundance of progesterone, which is secreted by the placenta, relaxes the smooth muscle cells lining the uterus. It also keeps the ring of tissue at its bottom, the cervix, cinched tight with tough ropes of collagen, like the closure on a drawstring bag.

    As the body prepares for labor, increasing amounts of estrogen, which opposes progesterone's actions, excite the uterine muscle. Estrogen also prompts the fetal membranes overlying the cervix to produce fatty acid hormones called prostaglandins. These soften the cervix by stimulating the production of enzymes that digest its collagen fibers. During labor, the uterus generates forceful contractions that eventually expel the fetus through the widened cervix.

    Fragile beginnings.

    Endocrinologist Roger Smith with premature baby, born at 28 weeks (29 weeks in photo) at John Hunter Hospital in Australia.


    In the mid-1990s, Smith and his colleagues unraveled a “placental clock” that appears to time the hormone shift. Levels of a placental protein called corticotropin-releasing hormone (CRH), which promotes estrogen production, rise exponentially throughout pregnancy, they found. This rise is so predictable that they could use blood levels of CRH at 4 to 5 months of pregnancy to estimate when a woman would give birth.

    “You can probably use CRH levels as a gauge of where you are in pregnancy,” says pediatrician Louis Muglia of Washington University in St. Louis, Missouri. The hormone might be used to identify women who will give birth early due to accelerated placental clocks. Currently, the laboratory test for CRH is too cumbersome for routine clinical use, but researchers are working on alternatives.

    Meanwhile, researchers are trying to determine what might make CRH build up to risky levels too early in a pregnancy. People have speculated about emotional stress—CRH is involved in the body's stress response—or genetic factors. So far, there has been little evidence to support these ideas. Last April, however, endocrinologist John Challis of the University of Toronto, Canada, and his colleagues reported data in sheep that may point to a different culprit: dieting.

    Sheep put on a low-calorie diet from 60 days before until 30 days after conception had an extraordinarily high incidence of preterm birth, the team found: Half of them went into preterm labor. Calorie restriction would not have affected fetal growth, the Challis group reasoned, because 30-day fetuses require minimal nutrients. But the undernourished fetuses' pituitary and adrenal glands—which govern the stress response—matured prematurely. And the adrenal glands of the fetuses that were delivered early released unusually high levels of cortisol, a stress hormone whose release is driven by CRH. Such a cortisol surge occurs in many species just before birth, and the researchers suggest it is what initiates labor. Recent preliminary data further suggest that the placentas of undernourished sheep don't produce enough progesterone.

    If the relation between lack of nourishment and early labor holds true for humans, Challis says, it “has enormous impact” for women trying to become pregnant. Dieting before pregnancy “may put them at higher risk of preterm labor,” he says.

    Even if eating well at the time of conception is important to an on-time delivery, many other factors can still intrude. So researchers are testing prophylactics that might prevent early labor by tipping the hormonal balance. At-risk women might be treated with a CRH antagonist such as antalarmin, which can delay delivery in sheep. But the most promising of the preventive measures so far is progesterone itself.

    Obstetrician Paul Meis of Wake Forest University Baptist Medical Center in Winston-Salem, North Carolina, and his colleagues tested whether a metabolite of progesterone, 17α-hydroxyprogesterone caproate, could prevent preterm labor in high-risk pregnant women. Weekly injections starting at 16 to 20 weeks of pregnancy reduced the rate of preterm delivery by 33% among 310 treated women, compared with 153 women who received a sham injection. The injections also reduced the incidence of certain complications in the newborns, including some types of hemorrhage and the need for supplemental oxygen, the team reported last June.

    Fighting fires

    In many cases, however, it may not be possible to prevent the early onset of labor. It can start suddenly, for instance when an infection strikes. Various teams are trying to decipher the molecular events that take place once labor begins, in an effort to identify ways to halt it safely.

    Many of the newly implicated molecules play roles in inflammatory pathways. In August 2003, Washington University's Muglia and his colleagues reported using DNA microarrays to measure changes in gene expression in human uterine tissues sampled either during labor or prior to labor. Many of the hundreds of genes whose expression was altered at labor coded for inflammatory proteins, and the pattern was surprisingly similar for both term and preterm samples.

    A key event in this inflammatory response, according to work by obstetrician Phillip Bennett and his colleagues at Imperial College Hammersmith Hospital Campus in London, is the activation of the DNA-binding protein called nuclear factor κB (NF-κB), which triggers the transcription of inflammatory genes. In 2001, Bennett and his team discovered that NF-κB revs up its gene transcription activity during labor, and in work presented last month at the annual meeting of the Society for Gynecologic Investigation they found that NF-κB is also activated in preterm labor—at higher levels than in term labor.

    Activation of NF-κB, in turn, boosts production in the fetal membranes of the inflammatory cytokine interleukin-8 (IL-8) and the prostaglandin-producing enzyme cyclooxygenase 2 (COX-2), the Imperial College researchers have shown. Both IL-8 and prostaglandins soften the cervix and prepare the uterus to contract. “We think NF-κB is very central,” says Bennett, whose group is now working on ways to inhibit its action.

    What might cause the upsurge in NF-κB during labor? Infection is one likely culprit. Bacterial toxins released during an infection, says Bennett, bind to receptors on macrophages or amniotic sac cells and activate NF-κB through known signaling pathways.

    Allergens may also incite labor through inflammatory pathways, according to work presented in February at the annual meeting of the Society for Maternal-Fetal Medicine. Pharmacologist Robert Garfield of the University of Texas Medical Branch (UTMB) in Galveston, NICHD's Romero, and their colleagues showed that injecting egg-white protein into pregnant guinea pigs that had been sensitized to it caused one-third of them to go into premature labor. By contrast, none of the nonsensitized guinea pigs or sensitized guinea pigs challenged with saline delivered prematurely. In addition, the scientists could prevent allergen-induced preterm birth by treating sensitized guinea pigs with a histamine-receptor antagonist, which blocks histamine-induced stimulation of uterine muscle. The data, according to the authors, “represent the first experimental evidence” that at least some types of allergic reactions can initiate preterm labor.

    Closer to term, physically stretching the uterus may also provoke an inflammatory response through NF-κB. Just before delivery, the head of the fetus usually presses down in the lower part of the uterus and stretches tissue there. Likely because of increased pressure, multiple fetuses raise the risk of prematurity, with the risk increasing incrementally with the number of babies in the uterus. Stretching the uterus in rats stimulates the expression of genes that code for labor-associated transcription factors and proteins, molecular biologist Stephen Lye of the Samuel Lunenfeld Research Institute in Toronto and his colleagues have found. Among those proteins, according to work reported earlier this year from Bennett's lab, is NF-κB. Stretching activates NF-κB in tissue from the human amniotic sac, although this effect was not found in uterine muscle.

    The maturing fetus also prompts labor at term through a lung protein that activates inflammatory pathways, Mendelson and her team reported last month. The researchers detected this surfactant protein, dubbed SP-A, in the amniotic fluid of pregnant mice during the last 3 days of gestation. Levels of SP-A rose in parallel with levels of interleukin-1 and NF-κB in macrophages present there. Cell culture work confirmed that SP-A can stimulate macrophages to produce these inflammatory proteins.

    Preparing the way.

    Cervical tissue from pregnant guinea pigs in midterm (left) and at the end of gestation (right). At term, enzymes degrade tough ropes of collagen (horizontal stripes in left image), softening the cervix.


    The UT Southwestern researchers then discovered compelling evidence that this interaction can lead to labor: Injecting SP-A into the amniotic fluid of pregnant mice induced preterm delivery within 6 to 24 hours, whereas injecting an antibody to the surfactant protein or an NF-κB inhibitor delayed labor by more than a day. The authors conclude that this fetal lung secretion “provides a key hormonal stimulus” for the inflammatory cascade in the uterus that leads to labor. “If we can understand how SP-A acts to induce labor at term, that opens up the possibility of finding therapeutic interventions for blocking preterm labor,” Mendelson says.

    Recognizing the link to inflammation, some researchers suggest that anti-inflammatory drugs will prove useful in treating preterm labor. Newcastle's Smith and his colleagues have recently launched a multicenter clinical trial of rofecoxib (Vioxx), which blocks the production of prostaglandins by inhibiting COX-2.

    Stay calm

    Aside from inflammation, a decline in responsiveness to progesterone is likely to be critical to the initiation of labor. Progesterone is thought to block genetic triggers of labor and maintain uterine quiescence. In species from rats to humans, blocking the receptor through which progesterone exerts its effects—the so-called B receptor—with a drug such as RU486 induces labor.

    However, a drop in blood levels of progesterone does not accompany labor in humans, even though it does in nonprimates. Many researchers hypothesize that a “functional” withdrawal of progesterone—that is, a decrease in the body's receptiveness to it—initiates labor.

    In June 2002, Newcastle's Smith and his colleagues implicated a second progesterone receptor, dubbed A, that is believed to inhibit the B receptor. In uterine tissue samples from 24 pregnant women, the amount of messenger RNA for progesterone receptor A relative to that of receptor B was much higher during labor than prior to it. In February, Smith's team went on to report that prostaglandins may be behind this effect, as they increase the ratio of A to B receptors in cultured human uterine muscle cells.

    But other factors may also play a role in progesterone's impotence during human labor. When progesterone binds to its receptor, the complex binds to DNA and promotes the transcription of genes. Several protein coactivators unwind the DNA so it can be transcribed. This past summer, Mendelson's team reported that levels of three of these coactivators decrease in the uteruses of pregnant women and mice during labor. The researchers also found that giving pregnant mice a drug that keeps DNA unwound delayed labor for up to 2 days beyond the normal 19-day mouse pregnancy. The drop in the levels of coactivators, Mendelson infers, helps halt the expression of genes important to keeping the uterus relaxed. NF-κB also may contribute to functional progesterone withdrawal: Data from the mid-1990s suggest that it represses the activity of the progesterone receptor by directly binding to it.

    While many groups are probing the molecular signals that induce labor, others are looking for inherited genetic variations associated with preterm labor. In 2002, a team led by obstetric anesthesiologist Richard Smiley of Columbia University in New York City identified a polymorphism in the β adrenergic receptor, which relaxes uterine smooth muscle, that lowers the risk of preterm delivery among Hispanic women, probably by maintaining the receptor's sensitivity to repeated stimulation.

    Now Muglia and his colleagues are starting a 5- to 10-year search for polymorphisms that elevate the risk of preterm delivery among a broader population of women. The researchers will comb the genomes of families with a strong history of premature labor. They will also compare the genomes of women who have had a premature delivery with those of women who have had full-term pregnancies. “It would give us great insight into the mechanism if we could identify just one gene involved,” Muglia says.

    “I'm optimistic that in 5 years we will fit the different pieces of the puzzle together and that will guide us to a way to intervene,” Smith says. Meanwhile, he adds, the puzzle remains only partly assembled: “We still don't understand how we get born.”


    Labor: True or False?

    1. Ingrid Wickelgren

    Only half of the women who show signs of preterm labor actually deliver early. In others, contractions die out on their own. But doctors have no reliable way to tell who needs potentially toxic drugs to stop contractions and who doesn't. A device developed by pharmacologist Robert Garfield of the University of Texas Medical Branch in Galveston and his colleagues might change this.

    In the 1990s, the researchers predicted the onset of labor in pregnant rats by monitoring the electrical activity in their uterine muscles. They found that the higher the frequency of electrical signals in the bursts that produce the muscle contractions, the closer a pregnant rat was to labor.

    Last June, the researchers reported that the same holds true for pregnant women. They monitored the uterus's electrical activity for 30 minutes in each of 99 pregnant women who were being checked at a clinic for either term or preterm labor. The team found a notable transition in the frequency spectrum at 24 hours before delivery in the term patients; in preterm patients, this occurred about 4 days before delivery, presumably because the labor process unfolds more slowly before term. The transition, Garfield says, can reveal with certainty when a patient is in true labor. The researchers are gearing up to test the device in a larger clinical trial, and they are working with medical instrument firm Fairway Medical Technologies in Houston to further develop the technology.

    The same group has developed a device called a Collascope for measuring the softening of the cervix. A light beam excites the collagen in the cervix, causing it to glow at an intensity that is proportional to the tissue's collagen content. That, in turn, reveals the proximity of labor, because collagen degrades as labor approaches. The researchers must still establish normal and abnormal profiles for the cervix's collagen content during pregnancy, however.

    A rapid assay of corticotropin-releasing hormone levels might also indicate the presence or imminence of labor (see main text).


    California Academy Starts on the Museum of Its Dreams

    1. Marcia Barinaga

    The California Academy has embarked on the first step of an ambitious two-stage experiment aimed at creating a landmark natural history museum

    SAN FRANCISCO—In 1996, the California Academy of Sciences got some bad news: San Francisco's new mayor, Willie Brown, wanted the academy—a popular San Francisco attraction that includes a natural history museum, an aquarium, and a planetarium—out of its prime location in Golden Gate Park. While staff members wrung their hands about where to relocate and at what cost, the academy's supporters spoke out against the idea, and by 1998 the plan had collapsed under a wave of public protest. And as it turned out, the threat of being moved did the academy a favor. In the intervening period, academy scientist Patrick Kociolek had become its new director, and he and his board had begun to rethink the institution from the bottom up, taking note of San Franciscans' strong support. They halted plans for a modest retrofit of the academy's aging buildings and instead planned the most expensive cultural construction project in San Francisco's history—a new landmark for the city, as well as for the worldwide museum community.

    Now Kociolek's dreams are edging toward reality. On 31 December 2003, the old academy closed its doors. In 2008, a dramatic new building, designed by a star architect and filled with cutting-edge exhibits and research space, is to reopen in the same spot in Golden Gate Park. And in true scientific form, Kociolek (pronounced Ko-SEL-ik) has turned the two-stage move into an experiment in museum practice. Next month the academy will open in temporary quarters and begin testing a variety of innovative approaches, such as employing retail experts to help design exhibits; bringing research into public view with a slew of programs, including inviting visitors to work as co-researchers; and putting a continually changing face on even the “permanent” exhibits.

    The overall aim of the experiment is to align the academy with its new role in society. No longer dusty 19th-century cabinets of curiosities, natural history museums deal head-on with some of the most pressing issues of the 21st century, such as global biodiversity, conservation, and the public understanding of science, says Kociolek, an expert on diatoms. Natural history museums everywhere are evolving in similar ways. But many are constrained by the limitations of old facilities, whereas brand-new museums lack the collections and research depth of the great old institutions. With its 150-year history and 18-million-specimen collection, and the promise of a new building, the California Academy has the “best of both worlds,” says Charles Preston, founding curator-in-charge at the new Draper Museum of Natural History in Cody, Wyoming. The academy is “a great organization with visionary leadership,” says Leonard Krishtalka, director of the University of Kansas Natural History Museum and Biodiversity Research Center in Lawrence. “I have every confidence that [it] will raise the bar and set the model for the 21st century natural history museum.”

    Coming attraction.

    The new Cal Academy building is to house a planetarium, a living rainforest and coral reef, and an aquarium, says director Patrick Kociolek (right).


    Earthshaking beginnings

    The Cal Academy's ambitious plan had humble, and uniquely Californian, beginnings. The 1989 Loma Prieta earthquake damaged several of the academy's 12 buildings, the oldest of which dates back to 1916, and in 1995 San Franciscans passed a bond measure to fund repairs. But Brown's plan to move the academy put the project on hold before it began. By the time the museum's location was secure, Kociolek was suggesting that simply retrofitting the old buildings was not enough. He got his board and staff to brainstorm about ways to modernize. “How do you take this Victorian-era concept [of the natural history museum] and make it relevant to people?” Kociolek asked. “What are the societal imperatives that a place like this can address?” As they debated ideas, Kociolek says, “we realized that the current set of buildings, no matter what we do to them, would not support this program.”

    After that there was no looking back. The project requires the academy to move into temporary space while the old buildings are razed and the new one built, and then back to its new permanent home, at a total cost of $370 million. In the late 1990s, California's coffers were full, San Francisco was flowing with dot-com dollars, and fundraising got off to a good start. The museum got $24 million from state and federal sources, and in 2000 it managed to parlay its popularity into the passage of a second bond measure, bringing the city's commitment to $118 million. For the rest, Kociolek first went to the museum's “close family” of wealthy supporters, from whom he has raised another $100 million to date. “We are still in the early ‘quiet phase’ of the capital campaign,” says Kociolek, “and we already have two-thirds of the money”—putting them in a strong position to pull in the remaining $128 million from individuals and foundations.

    To design the museum, the academy chose architect Renzo Piano, who created Paris's famous Pompidou Center in the 1970s and recently redesigned Berlin's Potsdamer Platz. The choice paid off: Discussions with Piano quickly yielded the idea that the academy's new home should be a “green building” embodying the latest environmentally conscious technologies. Piano's creation will be topped by a living roof—more than an acre of undulating parkland with native plants. The roof, a visitor-accessible exhibit in itself, will also house solar panels and will collect excess rainwater to help flush toilets. The building will be naturally ventilated with thermostatically controlled windows and will use 50% less energy than a traditional building of the same size, says Kociolek.

    Inside, the museum will embrace a new approach to exhibits. “Most museums have a series of very fixed halls,” says academy provost Terry Gosliner. “We tried to keep the bulk of our public floor much more open in its configuration, so that you can very quickly and inexpensively modify that space to bring in a large traveling exhibition or produce your own.” There will be several fixed features, the most prominent of which will be a pair of domes. One will house the planetarium, the other a three-story living rainforest exhibit, lit by skylights that will also illuminate a living Philippine coral reef located between the two domes. But even these permanent exhibits will be constantly changing, with different narrative material highlighting various aspects of the coral reef, rainforest, and aquarium, from diversity to conservation to the natural history of the species found there.

    This approach reflects a general trend away from static exhibits, says Robert Sullivan, associate director for public programs at the Smithsonian's National Museum of Natural History (NMNH). “The old idea of a permanent exhibit is dead, if it was ever alive,” says Sullivan. Today's museums need “a critical mass of changing exhibitions to give people reasons to come again and again.”

    Kociolek agrees, noting that museums constantly struggle to keep visitors returning. So he has turned to the retail trade for help: He's brought in designers of stores, including Anthropologie and Urban Outfitters, and electronics giant Sony as part of the exhibit-design team. “Retail folks [know how to] bring people back,” he says. But Scott Lanyon, director of the University of Minnesota's Bell Museum of Natural History in Minneapolis, warns that asking for input from the retail world must be balanced by “giving a strong voice to the scientific staff” so that exhibit content is not compromised. “I'm really glad he is doing it and not me, so I can watch,” Lanyon quips.

    As for research, the building's layout will reflect the academy's values there as well. The academy has in recent years embraced multidisciplinary research, but in the academy's old quarters, research departments were in far-flung corners. In Piano's building, the research staff will be grouped together, with some space dedicated exclusively to collaborative research. The exhibits will tie into the institution's unique research strengths, says Kociolek. For example, the planetarium will complement the academy's strong life-science theme with a focus on astrobiology.

    A laboratory for museum design

    While designers plan the ultimate exhibits, they will be able to fine-tune their ideas during a 4-year period for experimentation in the academy's temporary home, a nondescript warehouse building south of Market Street near San Francisco's Moscone Center and the Yerba Buena Center for the Arts. Most of the collections and aquarium stocks, and all of the research staff, will move to the new location, which opens to the public in May. Kociolek says academy staff will use the space—which is large enough for one or two changing exhibits plus a scaled-down aquarium—as a “laboratory” for testing ideas.

    The ants go marching in.

    Brian Fisher (right) and colleagues unload army ants, including large soldiers and small workers (left), for a new exhibit.


    The first exhibit in the new space will be about ants, using the insects to provide “a basis for talking about the whole notion of biodiversity,” says academy entomologist Brian Fisher, who uses ants as biodiversity indicators in his research in Madagascar. One goal, Fisher says, is to show visitors that “systematics and the collections it requires are central to conservation.” Making this kind of connection is an important mission for natural history museums, says Krishtalka of the University of Kansas: “Our challenge is to tell the public the real stories of biological diversity and connect them to their day-to-day lives.”

    Indeed, another part of the ant exhibit will provide a direct connection to visitors' lives, inviting them to contribute data to a research project on ants of the Bay Area. No one has ever done a survey of Bay Area ants, Fisher says. Volunteers will collect ants, note their location, and identify the species, with the help of a naturalist resource center at the academy and information online at Fisher's “AntWeb” site (NetWatch, Science, 6 February 2004, p. 739). “This is not just an educational exercise,” Gosliner notes, “but one that will produce meaningful data.” For those interested in marine life, the academy is also involving volunteers in a survey of the benthic organisms of San Francisco Bay.

    In the temporary space, academy staff will test other means of exposing visitors to the research side of the museum. “We have struggled with [how best to do] this for years,” says David Kavanaugh, director of research at the academy. “I think all natural history museums have.” Kavanaugh plans to experiment with a “visible laboratory” where the public can watch scientists at work. Cameras mounted on microscopes will display specimens on a public screen, “and the scientists may be miked so they can answer questions,” Kavanaugh says.

    Michael Novacek, provost of the American Museum of Natural History in New York City, calls that “a great idea … something that museums are doing now more, and doing it better than they used to.” But it can be difficult to pull off successfully, notes Anna K. Behrensmeyer, a paleobiologist at NMNH. “We have a paleo lab … completely surrounded by glass,” she says. “But we haven't been able to get enough people to man it,” leaving the lab empty much of the time.

    Gosliner notes that the 4 years in the temporary space will acquaint many staff members with working in front of the public. “There will be no ‘behind the scenes,’” he says. “We just don't have space for it.” It won't be only museum visitors who are watching; all eyes in the natural history museum community will be turned to San Francisco as the California Academy's bold new experiment unfolds.


    Consensus Emerges on HapMap Strategy

    1. Jennifer Couzin

    Halfway through the $100 million gene-mapping project, scientists are heartened by the progress made, although the final payoff remains uncertain

    BALTIMORE, MARYLAND—Several years ago a small band of scientists pinned its hopes on a grand and controversial genetics scheme, a sort of sequel to the Human Genome Project, designed to speed the search for genes behind common diseases. Last week 100 or so researchers converged in a small hotel room here to hash over the early results of that $100 million venture, called the International HapMap Project, or simply the HapMap.

    Midway through the 3-year project, many of the arguments about how to build the map have dissipated; indeed, participants seemed buoyed by the early results and also by the willingness of HapMap leaders and funders to modify the map's original design. Still, questions remain, for instance, on the best design of the map and how well it will work once complete, likely toward the end of 2005. “I thought it would be 3 years of haranguing and finger-pointing” over how to proceed, says Charles Langley, a geneticist at the University of California, Davis, who says he's impressed with progress to date.

    The HapMap emerged from the frustrated hunt for genes behind common diseases. Initially, these studies relied on single-nucleotide polymorphisms (SNPs)—places where the genomes of different people vary by a single DNA base—to help researchers home in on specific genes. But the human genome contains about 10 million common SNPs, and finding those that differ between healthy people and those with, say, diabetes or cancer, began to look impossibly costly.

    Around 2001, some of the biggest names in genomics, such as Francis Collins, head of the National Human Genome Research Institute (NHGRI), and Eric Lander, then at the Whitehead Institute/MIT Center for Genome Research, proposed a solution based on haplotypes. Largely unmapped at that time, haplotypes are stretches of DNA that travel as a unit, each carrying a group of SNPs, when chromosomes recombine during inheritance. If disease researchers see one haplotype that predominates in their patients, the thinking goes, they can narrow their SNP search to that haplotype, typically around 10,000 to 20,000 bases of DNA (Science, 24 May 2002, p. 1391). There are many unidentified SNPs in every haplotype, and those “hidden” SNPs are likely to be the ones involved in disease.
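    The tagging logic behind haplotypes can be sketched in a few lines of code. This is an illustrative toy example only: the haplotype sequences and positions below are invented for the sketch, not drawn from the HapMap.

```python
# Toy illustration of "tag" SNPs: a small set of positions whose
# bases differ between haplotypes can identify each haplotype,
# so the remaining SNPs on the haplotype need not be typed.
# (Sequences and positions are invented for this example.)
haplotypes = {
    "hap1": "ACGTACGT",
    "hap2": "ACGAACCT",
    "hap3": "TCGTACCT",
}

def distinguishes(positions, haps):
    """Return True if the bases at the given positions give every
    haplotype a unique signature, i.e. the positions act as tags."""
    signatures = {tuple(seq[p] for p in positions) for p in [0] for seq in []} or \
                 {tuple(seq[p] for p in positions) for seq in haps.values()}
    return len(signatures) == len(haps)

# Positions 0 and 3 distinguish all three haplotypes...
print(distinguishes([0, 3], haplotypes))  # True
# ...but position 1 alone does not (it is 'C' in all three).
print(distinguishes([1], haplotypes))     # False
```

    In the real project, the payoff is the same as in the toy: typing a handful of tag SNPs per haplotype stands in for the thousands of bases—and many untyped SNPs—that travel with them.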

    DNA tags.

    Two SNPs (in color) are sufficient to identify each of these three haplotypes.


    The HapMap plan, funded mainly by the National Institutes of Health and four foreign countries, is to develop a map for three populations: 90 Utah residents whose ancestors hailed from northern or western Europe, 90 Yorubans from Nigeria, and 90 Asians (45 Japanese and 45 Chinese). The HapMap will include only relatively common SNPs, those found in at least 5% of a population. Proponents expect the map to be useful in other ancestral populations, such as Native Americans, but that remains uncertain.

    So far, the data, all publicly available over the Internet, are limited to the Utah population, and gene hunters are just beginning to use them. But already it's becoming clearer—although by no means definite—when haplotypes can help locate disease genes and when older gene-finding methods are likelier to bear fruit. “There's not as much confusion or mystery now,” says Mark Daly of the Whitehead Institute, whose early work in Crohn's disease helped uncover some of the first haplotypes.

    Still at issue, however, is how “dense” the map needs to be. (The more SNPs, the greater the map's resolution.) The original plan—and funding—called for 600,000 SNPs in each of the three populations, spaced roughly evenly, one every 5000 DNA bases.

    But a paper by David Altshuler and his colleagues at Harvard and the Whitehead Institute suggested early on that 600,000 SNPs might not suffice (Science, 21 June 2002, p. 2225; published online 23 May 2002). They suggested, and later studies confirmed, that Yoruban haplotypes are shorter than those of Europeans and Asians because, like many African populations, Yorubans have a longer population history and are more genetically diverse. That means their SNPs are less likely to travel together in large numbers. Some suspect that haplotypes in Yorubans might be just 5000 or even 2000 DNA bases long, says David Cutler of Johns Hopkins University in Baltimore, Maryland. So, many more SNPs might be needed to analyze these, and possibly other, genomes.

    Over lunch at the meeting, Collins announced an abrupt change in plans: NHGRI wants to add 2.25 million SNPs and is releasing $6.5 million in HapMap money to do so. Two developments have made adding SNPs feasible. For one, the price of genotyping a SNP has plunged from about 30 cents 2 years ago to just a few pennies. And Collins hinted that an unidentified group or groups would type SNPs for only a penny each. In the meantime, the public SNP database from which HapMappers are drawing many SNPs has ballooned from 2.4 million 2 years ago to nearly 9 million today. That makes it easier and cheaper to find SNPs that may fit well in the HapMap.
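    The spacing arithmetic behind these figures is simple, assuming a human genome of roughly 3 billion bases:

```python
GENOME_BASES = 3_000_000_000  # approximate size of the human genome

# Original plan: 600,000 SNPs spread roughly evenly
original_snps = 600_000
print(GENOME_BASES // original_snps)   # 5000: one SNP every 5000 bases

# Expanded plan: add 2.25 million more SNPs
expanded_snps = original_snps + 2_250_000
print(GENOME_BASES // expanded_snps)   # 1052: roughly one SNP per ~1000 bases
```

    At that density, even haplotypes as short as 2000 bases, as some suspect for Yorubans, would typically contain a couple of mapped SNPs.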

    Diverse heritage.

    People of Yoruban ancestry have shorter haplotypes, prompting changes to how the HapMap is designed.


    It was clear from the meeting that extra SNPs may indeed be needed. David Clayton, a statistician at the University of Cambridge, U.K., reported an analysis of candidate diabetes genes designed to test what density of SNPs is necessary to find the rest of the SNPs in a section of DNA. The number varied enormously, from one SNP every 30,000 DNA bases to one about every 150. To Clayton, the results also suggest that if scientists have already pinpointed a small stretch of promising DNA, using older gene-hunting methods to nail the disease gene might be better than using the HapMap.

    The audience was delighted when David Cox, co-founder of Perlegen Sciences in Mountain View, California, reported that his company's completed haplotype map, melding DNA from 25 ethnically diverse individuals, would be made public. Cox said that SNPs from the Perlegen map helped find additional SNPs—potentially those involved in disease—in genes sequenced from African Americans, Chinese, and the same community of Utah residents who donated DNA to the HapMap. “Most of the SNPs are the same in different ethnic groups,” says Cox, who expects that including Perlegen SNPs in the HapMap will make it a more powerful gene-hunting tool. Cox also adds that this was his first invitation to a HapMap meeting—suggesting emerging cooperation between the public and private efforts.

    Although the HapMap ride has gotten smoother, a big question remains: whether in certain populations, common diseases are caused mainly by combinations of rare SNPs. If so, the HapMap, with its focus on SNPs that show up in at least 5% of the population, won't pick those up, says Augustine Kong of deCODE Genetics in Reykjavik, Iceland. Only once the map is complete will scientists be able to test—and, they hope, banish—that fear.


    Plague Annals Help Bring Microbe Lab in From the Cold

    1. Richard Stone

    Findings linking bubonic plague and gerbil abundance in Kazakhstan could be a boon for public health—and are a dividend of efforts to stem a proliferation threat

    Western governments have spent the last decade striving to erase the vestiges of the Soviet Union's diabolical biowarfare program. So it's not without irony that a unique archive on bubonic plague, kept at a former biodefense lab in Central Asia, may help save lives rather than destroy them.

    On page 736, a team led by Herwig Leirs of the University of Antwerp in Belgium models the ebb and flow of plague in populations of the great gerbil, the main host for plague and plague-infected fleas on the steppes of Kazakhstan. The model, based on 40 years of field data gathered by hundreds of Kazakh zoologists, should help the region's cash-strapped disease surveillance agencies anticipate natural plague outbreaks.

    The source of the data, the Kazakh Scientific Center for Quarantine and Zoonotic Diseases in Almaty, Kazakhstan, was the linchpin in a network of Soviet antiplague institutes established after World War II to track bacterial diseases. Beginning in the 1960s, the center became a peripheral player in the sprawling Soviet bioweapons program, preparing vaccines against potential battlefield pathogens. Some analysts claim that it also provided pathogens to labs engaged in bioweapons R&D. In an interview with Science last year, the center's director, Bakyt Atshabar, denied that his lab ever conducted offensive bioweapons work.

    As the heart of the antiplague network, the Almaty center assembled a formidable scientific corps to monitor wild populations of bacterial rogues ranging from plague and cholera to anthrax and tularemia. A few years ago, the U.S. Defense Department realized that the center, which had fallen on hard times after the Soviet breakup, posed a potential proliferation threat. Its huge collection of nasty strains “was kept behind a wooden door with a primitive lock,” says Dastan Eleukenov, executive director of the Monterey Institute of International Studies' Center for Nonproliferation Studies office in Almaty. Moreover, he says, by the mid-1990s, anyone could wander onto the center's grounds: “Bums would spend the night there.” The first task was securing the strains, which are now stored in a modern vault in what has become a tightly guarded and fenced facility.

    In locksteppe.

    A surfeit of gerbils portends plague.


    Attention turned to reducing any temptation—real or imagined—for center staff to drift off to rogue nations. The European Union and the International Science and Technology Center, a nonproliferation agency, sponsored a group led by Nils Chr. Stenseth of the University of Oslo to work with the Almaty center on the ecology of plague outbreaks in Central Asia. “We soon realized our Kazakh colleagues had a huge potential” for public health studies, thanks to the plague archive, says Leirs. The first job was getting the data, handwritten in ledgers, transferred onto computer. Subsequent analyses showed that the prevalence of plague-infected fleas rises and falls largely in step with gerbil numbers. “The data set serves as a great backdrop in looking at the evolution and transmission of plague,” says May Chu, a microbiologist at the National Center for Infectious Diseases' laboratory in Fort Collins, Colorado.

    Leirs and colleagues are launching a follow-up study to tackle an outstanding puzzle: where the plague bacterium retreats when gerbils are scarce. Some researchers have argued that plague might persist in a quiescent state until host populations rebound, much as anthrax spores do; others have suggested that it may simply hang around at low levels. The current study, however, lends support to a third hypothesis: that plague disappears locally between outbreaks only to reemerge after infected fleas jump to gerbils from migrating animals, including birds.

    The findings could help cut the risk of people contracting plague. Before the antiplague outposts were established, several hundred plague cases were documented in Kazakhstan each year. The annual incidence ever since has usually been just a few cases, thanks to efforts to collect fleas, culture the bacterium to confirm its presence, and douse infected gerbils and other hosts with pesticides. But such costly interventions have fallen off in recent years. The findings suggest that health authorities can focus scarce funds on tracking gerbil populations and resort to countermeasures when plague is likely to emerge: roughly 2 years after gerbil numbers surpass a threshold. As Chu notes, “the proof of the pudding” is whether the model will in fact predict outbreaks.
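    The surveillance rule the findings suggest can be caricatured as a threshold plus a lag; the sketch below uses hypothetical abundance numbers, and the published model is of course far richer than this:

```python
def intervention_years(gerbil_counts, threshold, lag=2):
    """Flag the years in which plague countermeasures should be readied:
    roughly `lag` years after gerbil abundance exceeds `threshold`."""
    return [year + lag
            for year, count in enumerate(gerbil_counts)
            if count > threshold]

# Hypothetical annual gerbil abundance indices (illustration only)
counts = [40, 55, 80, 120, 90, 60, 130, 70]
print(intervention_years(counts, threshold=100))  # prints [5, 8]
```

    Years 3 and 6 cross the threshold, so the rule would have health workers collecting fleas and dousing burrows in years 5 and 8 rather than every year.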

    The high-profile study could also boost the quarantine center's prospects. “The need has never been greater for active surveillance of diseases emerging from wildlife—both new ones like bird flu and SARS and old enemies like plague,” says project collaborator Michael Begon of the University of Liverpool, U.K. “We hope that this study will help us and our Kazakh colleagues play important roles, side by side, in the fight.” Few would wish to contemplate the alternative, in which plague was deemed a weapon rather than the enemy itself.


    Sushi-like Discs Give Inside View of Elusive Membrane Proteins

    1. Robert F. Service

    SAN FRANCISCO—Technology and basic science mingled at MRS's spring meeting, held here 12 to 16 April with the International Union of Materials Research Societies' 9th International Conference on Electronic Materials

    Proteins that wedge themselves into the fatty membranes surrounding cells are among the most important molecules in medicine. As the cells' gatekeepers, they detect key compounds outside the cell and determine which should be allowed inside. Just one class of these proteins, G protein-coupled receptors (GPCRs), is a target for drugs whose gross sales top $30 billion a year. Studying such gatekeepers in detail is extremely difficult, because removing them from the cell membrane almost invariably alters their shape and destroys their function. “You take these guys out of their normal environment, and you get scrambled eggs,” says Stephen Sligar, a physicist and chemist at the University of Illinois, Urbana-Champaign. But now Sligar's team has found a way to put them on display while making it seem as if they've never left home.

    At the meeting, Sligar reported that he and his colleagues created tiny lipid-based nanodiscs, suspended individual GPCRs inside them, and tracked the proteins as they relayed their chemical signals across the fatty membranes. “It's a very cool technique,” says Robert Hamers, a chemist at the University of Wisconsin, Madison. “I can see all kinds of applications for something like this.” Hamers says nanodiscs could shed light on the biochemical behavior of a host of membrane proteins that have escaped detailed understanding. In time the method may also make it possible to crystallize membrane proteins to obtain atomic-level maps of their structure using x-ray crystallography, another long-elusive goal.

    Flip side.

    Nanodiscs reveal ends of membrane proteins (green) normally tucked inside cells.


    Like natural cell membranes, the discs are composed of two layers of phospholipid molecules, each sporting water-friendly head groups and long, oily, water-repellent tails. In watery environs, phospholipids nuzzle side by side into two-layered sheets, with the oily tails in both layers facing inward and the relatively water-friendly head groups pointing out into the water. In cells, the sheets curl into a sphere to form the cell's protective outer membrane. Hydrophobic membrane proteins, such as GPCRs, dissolve in these layers to avoid contact with water. For decades researchers have tried to study membrane proteins in artificial membranes, such as spherical lipid structures called liposomes. Unfortunately, it's hard to see inside the bubble.

    To keep the membranes from forming spheres, Sligar's group had to protect lipid groups along the edges of the membrane from water. Two years ago the researchers succeeded by engineering proteins that linked together to form a ring that was water-friendly on the outside and lipid-friendly on the inside. When they combined the ring proteins with the phospholipids, the two formed tiny nanoscale protein discs, each filled with a tiny phospholipid membrane. Since then, Sligar's team has dissolved a number of different membrane proteins inside the discs.

    For their current experiment, Sligar and colleagues teamed up with researchers at 3-Dimensional Pharmaceuticals in Exton, Pennsylvania, who supplied them with copies of a GPCR called the β2 adrenergic receptor (β2AR), a well-characterized protein that is the target of heart drugs called beta-blockers. Using their ring-protein technique, Sligar's graduate student Andrew Leitz created nanodiscs resembling sushi rolls, each wrapped around a single copy of β2AR. The researchers dosed the discs with a small druglike compound that in cells binds to the portion of the receptor facing out of the cell. The binding triggers β2AR to change shape and release a G protein, which in cells binds to the inside end of the receptor. Using radiolabeled compounds, the group found that the receptors in the discs traced the same key steps. The result opens the door to studying dozens of less-understood membrane proteins.

    That's not all. Sligar has also teamed up with x-ray crystallographers to try to create crystals in which millions of copies of a membrane protein are all oriented precisely the same way. Crystals of water-soluble proteins are used to create atomic maps of dozens of different proteins every year, leading to innumerable breakthroughs in biochemistry and medicine. Finicky membrane proteins have been all but impossible to crystallize, but Hamers says the nanodiscs should give researchers a way not only to keep them stable but also to chivvy them into alignment. Sligar's team hasn't made nanodisc crystals yet. But Sligar says: “We're working very hard on it.”


    Lighting the Way to Speedier Chips

    1. Robert F. Service

    Computer engineers have long relied on lasers, fiber optics, and other optical devices to transmit data over long distances. They'd also like to use them to shuttle information on and between chips. The reason: As ever-increasing miniaturization allows chips to process more and more electronic data, the wires transmitting all these electrons simply can't keep pace. Fewer electrons can jostle their way through ever smaller wires, and those that do make it face increased resistance and therefore lose more energy as heat. Communicating by photons could help combat these problems, but so far, optical devices have proved stubbornly hard to miniaturize. There are signs, however, that the field of integrated optics may finally be ready for liftoff.

    “Optics and photonics, for all their success, have remained very bulky and pricey precisely because the optical equivalent of an integrated circuit—one that can be mass-produced in an efficient, cost-effective manner—has so far proven elusive,” says Jung Shin, a physicist at the Korea Advanced Institute of Science and Technology in Daejeon.

    Take waveguides, devices for steering light. Waveguides that fit on a postage-stamp-sized computer chip can't take in most of the light in a fiber-optic pulse. At the meeting, however, researchers in the United States and Singapore reported creating the first silicon-based “mode transformer,” a device that funnels light from optical fibers into tiny on-chip waveguides. By making it practical to build light-manipulating devices on silicon CMOS—the workhorse technology of electronics—the new device “represents the latest, and so far the best, results that can deliver the integrated circuit revolution to optics,” Shin says.

    Light-steering devices small enough to be useful must confine light in tight spaces and steer it around sharp bends. To manipulate light through such tight turns, engineers guide it using two materials with very different indices of refraction—the index determines the speed at which light moves through a material. In such devices, light is typically confined in a core material with a high index, which is surrounded by a low-index cladding layer. The larger the difference between the indices, the better the light reflects off the cladding back toward the core. The result: tighter turns and smaller devices. The trouble is that optical fibers are much larger than on-chip waveguides and are made of two materials with similar indices of refraction, factors that make it difficult to funnel light into the small waveguide (see diagram below). One result is that a light pulse typically loses as much as 99.9% of its photons in moving from low-contrast fibers to high-contrast waveguides on chips.

    Dire straits.

    Light from an optical fiber (shown in cross-section) can easily enter large, low-contrast waveguides but balks at small, high-contrast ones that can steer it in tight spaces.
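    The confinement at work here is total internal reflection: light stays in the core when it strikes the core-cladding boundary beyond the critical angle, and a larger index contrast means a smaller critical angle and hence tighter allowed bends. A quick Snell's-law calculation with illustrative index values (silica fiber versus silicon-on-insulator) makes the point:

```python
import math

def critical_angle_deg(n_core, n_clad):
    """Critical angle (from the surface normal) for total internal
    reflection at a core/cladding interface."""
    return math.degrees(math.asin(n_clad / n_core))

# Low contrast, as in a standard silica optical fiber (illustrative values):
# light must graze the wall at a steep angle (about 82 degrees) to stay confined.
print(critical_angle_deg(1.465, 1.450))

# High contrast, silicon core against silica cladding:
# almost any angle beyond about 25 degrees is confined, permitting tight bends.
print(critical_angle_deg(3.48, 1.444))
```

    The mismatch between those two regimes is exactly why light pours out of a fiber but balks at entering a tiny high-contrast waveguide without a mode transformer in between.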


    In 2000, researchers led by physicist Lionel Kimerling of the Massachusetts Institute of Technology (MIT) and his graduate students Kevin Lee and Desmond Lim proposed solving the problem using a novel type of silicon mode transformer, made up of one waveguide embedded within another. Light from an optical fiber would travel first into a relatively large, efficient, low-contrast waveguide, and then into a second, high-contrast waveguide inside it. In 2001, MIT licensed the technology to LNL Technologies, also in Cambridge. Lee joined the effort to develop a working model. After years of work, in partnership with engineers at the Singapore Institute of Microelectronics, Lee and colleagues hit upon the right combination of materials and growth conditions. Their two-part mode transformer, Lee reported at MRS, transfers as much as 90% of the light in an optical fiber to a high-contrast waveguide within silicon. Full-scale integrated optics chips are still “a few years away,” Lee says. But all of a sudden the horizon looks a lot closer.


    Printable Electronics That Stick Around

    1. Robert F. Service

    Plastic electronics are big business these days. They are still sparse on the sales floor, but electronics giants in Japan, Europe, and North America are all pushing hard on the technology to make ultracheap printable electronics for everything from smart cards and product ID tags to the electronic drive circuitry for large-screen displays. Researchers have shown that they can make many of these devices with materials that already exist. But the devil is in the details: most of today's plastic electronic materials that can be printed from liquids—the cheapest manufacturing method—are unstable in air and quickly degrade. At the meeting, researchers at the Xerox Research Centre of Canada in Mississauga, Ontario, revealed hints and a few details of far more stable materials to come.

    Xerox materials scientist Beng Ong reported that his team has come up with air-stable versions of printable semiconductors, conductors, and insulators, all three of the electronic elements needed to make circuitry. And even though Ong said he was unable to reveal the full details of the conductor and insulator, others in the field are hailing the achievement. “It will make it easier to make long-lasting devices,” says Zhenan Bao, a chemist and organic electronics expert at Stanford University in Palo Alto, California. “That is critical for devices to be used as commercial products.”

    Xerox's new semiconductor builds on an advance that Bao and colleagues reported in 1996, when she was at Lucent Technologies' Bell Laboratories in Murray Hill, New Jersey. They devised a polymer called regioregular polythiophene, which was liquid-processible and capable of switching from a good to a poor carrier of electrical current, an essential requirement for semiconductors. Unfortunately, the material had to be processed in an inert environment and carefully sealed, because oxygen molecules reacted with the plastic, causing it to turn into a metal-like conductor with a current-carrying prowess that could not be turned off.

    Let us spray.

    Plastic transistors created with an ink-jet printer may herald a new wave of low-cost electronics.


    Lucent's regioregular polythiophene contained numerous rigid tails designed to help orient the polymer to form continuous sheets, an arrangement that improves its current-carrying ability. At the meeting, Ong reported that removing all but one of the tails reduces the polymer's ability to react with oxygen, yet the polymer still maintains its ability to arrange itself on a surface. The change makes the material drastically more stable in air without sacrificing its semiconducting behavior.

    Ong said that because Xerox is still trying to patent its new conductor and insulator, he could not give complete descriptions of these materials. But he did say that the conductor is an organic-inorganic hybrid material with an electrical conductivity as much as 100,000 times higher than the conventional plastic material used today. And the insulator—an organic polymer—performs up to three times as well as conventional materials while being more stable in air.

    Xerox has already used an ink-jet printer to pattern its semiconductor to make working transistors (above). And company researchers are now working to print their new conductor and insulator as well. If they can put all the pieces together, they may soon be able to turn out electronics as easily and cheaply as printers today generate paper copies.