News this Week

Science  19 Mar 2004:
Vol. 303, Issue 5665, pp. 1742
  1. U.S. TRADE POLICY

    Editing Ban to Be Eased, But Cuban Travel Blocked

    1. Yudhijit Bhattacharjee

    After months of protests by U.S. publishers, the federal government last week said it would ease restrictions on the publication of papers from countries under a U.S. trade embargo. But that good news was offset by its warning off more than 50 U.S. scientists from attending a conference last week in Cuba, part of what appears to be a broader crackdown on travel to the communist country.

    Both the publications and travel policies are run by the Department of Treasury's Office of Foreign Assets Control (OFAC), which is responsible for enforcing trade sanctions against embargoed countries, including Iran, Sudan, Libya, and Cuba. Last September, the agency ruled that U.S. journals needed a government license to edit submissions from these four countries because editing, by adding value to the manuscript, in effect represented a financial contribution to that country and violated the Trading With the Enemy Act (Science, 10 October 2003, p. 210). The action left society publishers wondering how to remain true to their principles of open communication without running afoul of U.S. law.

    Last week, however, a senior OFAC official told Science that the agency had changed its position. OFAC “anticipates” providing a “general license” allowing all publishers to edit manuscripts from embargoed countries, the official said, effectively ending the ban. “This could be good news, but we want to wait and see,” says Robert Bovenschulte, head of the American Chemical Society's publishing division.

    “It's nice that OFAC is rethinking its stand,” says Jean Smith, a spokesperson for Representative Howard Berman (D-CA), who in a 3 March letter to OFAC called the agency's September ruling “patently absurd.” Berman sponsored a 1988 amendment to the trade sanctions law that exempts information from economic embargoes.

    Stuck at home.

    The U.S. government stymied plans by Stuart Youngner (right) and other scientists to attend this meeting in Havana.

    CREDITS: (LEFT TO RIGHT) INSTITUTO DE NEUROLOGÍA Y NEUROCIRUGÍA, CUBA; CASE WESTERN RESERVE UNIVERSITY

    But even as OFAC relaxes its views on publishing, its policies are threatening another type of scientific exchange. Last month, OFAC wrote to U.S. researchers scheduled to attend the Fourth International Symposium on Coma and Death in Havana, Cuba, that they faced “criminal and/or civil penalties” if they attended the 9 to 12 March meeting without a specific license. None of the U.S. participants attended. Soon after, six U.S. scientists canceled their participation in the International Congress of the Cuban Society of Clinical Neurophysiology taking place in Havana earlier this week.

    Although travel to Cuba is restricted, U.S. professionals don't need the government's approval to visit the country to attend international meetings or to do research. U.S. participants at the coma conference planned to travel under this provision before OFAC told them that the meeting did not qualify for unlicensed travel because it was not being conducted by an international organization. In a separate letter to Marazul Charters, a New Jersey-based company that was handling travel arrangements for the U.S. attendees, OFAC said the meeting's endorsement by the World Federation of Neurology and other international organizations was not sufficient, as the primary sponsor of the conference was the Institute of Neurology and Neurosurgery (INN) in Havana.

    Marazul then applied for a specific license but canceled the trip when it got no response. (Its application was formally denied 1 day after the conference began.) In the denial letter sent to Marazul's Bob Guild, OFAC Director Richard Newcomb cited “limited information” as a reason for not granting the license. “We would need to see at a minimum a copy of each individual's resume and a statement from each individual explaining the reason why he/she needs to attend the conference,” the letter said.

    Among the participants most surprised by OFAC's decision were those who had attended three previous symposiums organized by INN dating from 1992. “Those meetings were no different, and I had no trouble traveling to them without a license,” says Stuart Youngner, a bioethicist at Case Western Reserve University in Cleveland, Ohio. “But this time I didn't want to go to jail.”

    Guild fears that OFAC's action will have a chilling effect on attendance at future meetings in Cuba, including the International Meeting on Childhood Infection and the Pan American Congress on Child and Adolescent Mental Health, to be held later this month in Havana. He says he was shocked by statements from OFAC officials that attending a scientific conference is not an integral part of doing research. “In the past, the research category of the general license has allowed U.S. researchers to attend meetings organized purely by Cubans,” he says. OFAC officials declined to clarify their definition of research.

    OFAC's position on travel could close “unique avenues for the flow of scientific information from Cuba to the U.S.,” says Frank Müller-Karger, a marine scientist at the University of South Florida, St. Petersburg. Müller-Karger, who holds a specific OFAC license and travels frequently to Cuba for research, warns that the policy “may result in less access to Cuban waters” for meteorological and oceanographic researchers.

  2. TEXAS BIOTERROR CASE

    Butler Gets 2 Years for Mishandling Plague Samples

    1. David Malakoff,
    2. Kerry Drennan*
    1. Kerry Drennan is a writer in Lubbock, Texas.

    LUBBOCK, TEXAS—In the end, the tough Texas jurist turned out to have a tender heart. Federal Judge Sam Cummings—known for handing down stiff penalties—last week sentenced microbiologist Thomas Butler to 2 years in prison for mishandling plague samples that he mailed to Africa and defrauding Texas Tech University here. Deviating from government sentencing guidelines that called for up to 9 years, Cummings cited Butler's “great service to society” and his lack of “evil” motives.

    “We're very happy the judge downward departed [from the guidelines but] upset he didn't go further,” defense attorney Chuck Meadows said after the 10 March decision, which brought little reaction from a solemn Butler but tears from friends and family. Butler hasn't decided whether he will appeal, defense attorneys said.

    Prosecutors called the sentence fair. “It sent the appropriate message to the academic and scientific community,” said U.S. District Attorney Robert Webster. “The government is not going to tolerate a cavalier attitude in sending deadly agents.” But Cummings's leniency surprised Floyd Holder, Butler's lead attorney: “It's the first time it's happened to me. But I've only been practicing law for 25 years.”

    Butler, 62, captured national headlines in January 2003 after he reported that 30 vials of plague bacteria that he had originally collected in Tanzania were missing from his Texas Tech laboratory, sparking a bioterror scare and a massive investigation (Science, 24 January 2003, p. 489). The government ultimately charged Butler with 69 counts of lying to investigators, moving the bacteria without proper permits, tax fraud, and stealing from his university by diverting clinical trial payments to his own use.

    Many scientists rallied to Butler's defense, saying he was the victim of overzealous prosecutors. But last December, a jury convicted Butler on 47 of the 69 charges (Science, 19 December 2003, p. 2054). He was acquitted of the central lying charge, however, and found guilty of just three plague-related offenses, all linked to a mismarked Federal Express package containing plague samples that Butler sent back to Tanzania. Since the verdict, Butler has repaid the university $250,000, resigned from his post at the Texas Tech University Health Sciences Center, and given up his medical license.

    Less time.

    Judge Cummings said Butler's good works mitigated his prison sentence.

    CREDIT: JIM WATKINS/LUBBOCK AVALANCHE-JOURNAL

    At the nearly 4-hour sentencing hearing, prosecutors backed the government's recommendation that the scientist serve from 87 to 108 months in prison and pay the health center $750,000. But the defense made an often emotional plea for probation, presenting three character witnesses and reading excerpts from more than 100 supportive letters, including four written by Nobel laureates.

    In an even voice, Butler also read a short statement, saying he was “deeply sorry this whole thing has happened. … At no time did I intend to break laws or mislead anyone.” His clinical trial contracts had “benefited” Texas Tech by “bringing some national reputation and adding money to the university for my salary and research expenses,” he said. And the “export of bacteria to Tanzania was done for humanitarian reasons … so that the Tanzanians could continue their research in this area that we started together. The specimens arrived safely. No one was harmed.”

    Hoping to avoid prison, Butler ended with the plea: “Won't you please be lenient and allow me to remain with my family and do community service?”

    In announcing the reduced sentence, Cummings noted that “very few cases brought before this court have the potential to impact not only science, medicine, and research, but society as a whole.” The sentencing report did not recognize the “substantially extraordinary” and “exceptional” circumstances of Butler's career, he added. “The defendant's research and discoveries have led to the salvage of millions of lives throughout the world,” he asserted. And although the court “in no fashion” condoned Butler's plague shipment, Cummings found that it did not represent a significant risk and was not done “with evil or terroristic intent.”

    Cummings also concluded that Texas Tech “would likely have never received any” of the disputed clinical trial contracts without Butler's help. The researcher did the university more good than harm, he said. Cummings, however, did order Butler to repay Texas Tech another $38,675 and also imposed fines totaling $19,700.

    Prosecutors said the sentence should remind scientists not to ship dangerous bacteria without the proper paperwork and packaging. And the public needs to know that it is “flying Southwest Airlines instead of Bubonic Airlines,” said U.S. Attorney Richard Baker, referring to several instances in which Butler carried plague samples aboard aircraft.

    Butler is due to report to prison officials on 14 April. Federal rules require him to serve at least 85% of the sentence.

  3. PLANETARY SCIENCE

    Far-Out Ice World

    1. Robert Irion
    CREDITS: (LEFT TO RIGHT) NASA/JPL-CALTECH/R. HURT (SSC-CALTECH); NASA/CALTECH/M. BROWN

    Astronomers may have spotted the first resident of the Oort cloud, a vast swarm of comets thought to encircle the sun. Tentatively called Sedna, after the Inuit goddess of the sea, the icy world is more than twice as far from the sun as the planet Pluto is and may be three-fourths as wide. A team led by astronomer Michael Brown of the California Institute of Technology in Pasadena announced its finding this week after spotting Sedna on 14 November 2003 as a slow-moving dot of light (right). Archived images dating back to 2001 reveal that Sedna swoops deep into space on an eccentric 10,500-year orbit (left) that carries it 900 times Earth's distance from the sun. Sedna's existence suggests that the inner Oort cloud contains more mass and extends closer to the sun than expected, Brown postulates.
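The period and distance quoted above can be sanity-checked against each other. A minimal sketch (illustrative, not from the article): Kepler's third law ties a body's orbital period in years to its mean distance from the sun in astronomical units (AU, Earth's distance from the sun) via P² = a³, so a 10,500-year orbit implies a mean distance of roughly 480 AU, consistent with an eccentric orbit swinging out to about 900 AU.

```python
# Kepler's third law for a body orbiting the sun:
# P^2 = a^3, with P in years and a (semi-major axis) in AU.
period_years = 10_500                 # Sedna's orbital period, as reported
a_au = period_years ** (2 / 3)        # mean distance implied by that period

# A mean distance near 480 AU is consistent with a highly eccentric
# orbit whose far point reaches roughly 900 AU, as the article describes.
print(f"semi-major axis ≈ {a_au:.0f} AU")
```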

  4. ASTRONOMY

    Academy, GAO to Study Possible Robotic Hubble Mission

    1. Andrew Lawler

    The battle over the fate of the Hubble Space Telescope intensified last week as Congress ordered two independent studies to examine whether NASA should launch a shuttle to service the observatory. And although NASA Administrator Sean O'Keefe still insists that sending astronauts on a repair mission would be unsafe, he left open the possibility of a robotic mission that could prolong the life of the 14-year-old telescope.

    The battle was joined in January when O'Keefe announced that he was canceling plans for a final upgrade of the $1.3 billion telescope (Science, 23 January, p. 444). Last week's developments keep alive slim hopes for Hubble's future in the scientific community, which has argued that NASA should not abandon its premier research instrument without a deeper examination of the options. NASA managers, however, say that more restrictive guidelines since the 2003 Columbia shuttle accident preclude a flight to an orbit far removed from the safe haven of the space station. Both sides find support for their cause in a 5 March letter to O'Keefe from the former chair of the Columbia investigation board, retired Admiral Harold Gehman.

    Safety first.

    NASA chief Sean O'Keefe says a Hubble repair mission would be “fundamentally irresponsible.”

    CREDIT: MIKE THEILER/EPA/AP

    Although a mission to an orbit beyond that of the space station may be “slightly more risky,” Gehman wrote, he urged “a deep and rich study” to resolve the impasse. On 11 March, senators Barbara Mikulski (D-MD) and Kit Bond (R-MO), the ranking minority member and chair of the NASA funding panel, took him at his word, asking for reviews by the General Accounting Office and the National Academy of Sciences of the risks, costs, and benefits of any mission. The former would be done by 1 July, the latter on a yet-to-be-specified timetable. In the meantime, they asked O'Keefe to put on hold any plans to terminate Hubble-related contracts.

    O'Keefe assented to that request, but he makes no bones about his opposition to an additional shuttle flight. A few hours after his appearance before the Senate panel, O'Keefe told reporters that such a mission would pose unacceptable risks. If a problem arose that the shuttle crew could not handle, he explained, the only option would be to send up another orbiter. That vehicle would have to fly in formation at nearly 30,000 kilometers per hour with its damaged sibling while the crew scuttled across on a tether in open space. Putting astronauts in such a risky situation, he says, would be “fundamentally irresponsible” and violate the more conservative approach to shuttle operations outlined in Gehman's Columbia accident report. And the earliest flight date would be spring of 2007—potentially too late to save Hubble.

    Lawmakers aren't convinced that O'Keefe has been totally objective. “I always stand up for astronaut safety,” says Mikulski, but Hubble needs advocates, too.

    At his press conference, O'Keefe suggested a way to resolve the apparent deadlock. An ambitious robotic mission to extend the telescope's life beyond 2007 or 2008, he mused, might also provide important technologies for President George W. Bush's recent initiative to send humans to the moon and Mars. Such a mission would give Hubble fresh batteries that could extend its life—and allow all sides to declare victory.

  5. JAPAN

    Older Scientists Win Majority of Funding

    1. Dennis Normile

    TOKYO—In 2001, a handful of influential politicians set an informal goal: 30 Japanese Nobel Prize winners by 2050. But a recent survey by the nation's top science advisory panel suggests that the government's approach to distributing research grants may be headed in the wrong direction.

    Senior scientists, those 50 and older and presumably beyond their prime Nobel years, receive a majority of competitively awarded research grants. In contrast, younger researchers entering their prime are given a tiny slice of the overall funding pie (see graph). The problem, say some policymakers, is that the current grants system rewards people for past accomplishments rather than their potential for breakthrough discoveries. “We really have to find a way to identify the small buds that are going to blossom,” says Hiroyuki Abe, a member of the Council for Science and Technology Policy, which conducted the study.

    Gray area.

    Competitive grants go mostly to older scientists.

    SOURCE: JAPAN COUNCIL FOR SCIENCE AND TECHNOLOGY POLICY

    One confounding factor is Japan's success over the past decade in shifting to a peer-reviewed competition for grants. In the past, all full professors and key researchers received small amounts of money. As part of its assessment of the change in funding patterns, the council studied which age groups are getting the most funding across all governmental competitive grant schemes.

    To no one's surprise, senior scientists claim the lion's share. Abe says that's because of an overreliance on history. “The evaluations concentrate too much on past achievements, and this benefits big-name scientists,” he says. “Younger researchers just don't have the accumulated results yet.”

    This weakness is particularly evident when it comes to funding groundbreaking research, says Abe, a former president of Tohoku University. He notes that all four of Japan's recent Nobel laureates began their award-winning work in their late 20s or 30s. “We have to revise [evaluations] to focus on promising ideas,” he says.

    An official of the Ministry of Education, Culture, Sports, Science, and Technology, the source of more than three-quarters of the country's $2.5 billion in yearly competitive research grants, says the ministry is moving to address the issue. In the fiscal year ending this month, the ministry put $150 million into a pot reserved for younger researchers with minimal track records. Grants awarded under the new scheme don't appear in the just-released survey. “We are already headed in the direction being urged by the science council,” the official says, adding that the ministry is studying additional ways to expand opportunities for younger scientists.

    But meager funding is just part of the problem. Eisuke Enoki, a medical student at Kobe University who runs a mailing list popular with young scientists, says, “It is the shortage of independent research positions that generates the most discussion.” Many grant schemes are not open to those in temporary positions, a category that includes Japan's burgeoning numbers of postdoctoral fellows. And at some universities, senior faculty members so thoroughly control research activities that grants won by entry-level scientists may instead be diverted to the projects of their bosses. Enoki says that the first concern of those on his mailing list is “finding positions with the freedom to work on your own projects.”

    Abe readily agrees that greater independence for younger researchers is essential. “We've been urging universities to tackle this issue,” he says. Next month, a plan to quasi-privatize the national universities officially kicks in, which includes the promise of tying institutional funding increases to more objective faculty evaluations. Abe and others hope that these changes will help younger faculty members.

  6. PARTICLE PHYSICS

    Gamma Rays Spotlight a Dark Horse for Dark Matter

    1. Charles Seife

    Do mysterious gamma rays emanating from the center of the galaxy hold the secret to the missing matter in the universe? A team of physicists suggests that they might. The controversial finding also shows how little is known about most of the mass in the cosmos.

    Most physicists agree that about one-quarter of the universe is made up of an exotic form of matter that hasn't been discovered yet. The leading candidates are WIMPs—weakly interacting massive particles such as one suggested by supersymmetry theory, a favored extension to the Standard Model of particle physics.

    In the 12 March issue of Physical Review Letters, physicists from France and the United Kingdom present indirect evidence that a different putative particle entirely—1000 times or more lighter than the supersymmetric candidate—is responsible for dark matter. “It's really strange, contrary to everything assumed” about the dark-matter particle, says team member Céline Boehm, a theorist at the University of Oxford.

    Light from a lightweight?

    Galactic gamma rays may bear witness to an unexpected low-mass particle.

    CREDIT: NASA/GSFC

    If such light dark-matter particles exist, the scientists argue, then they and their antiparticles will collide, producing electrons and antielectrons. The electrons and antielectrons, in turn, will collide and produce gamma rays with 0.511 million electron volts of energy. The INTEGRAL gamma ray satellite has found the distribution of gamma rays that one would expect if the Milky Way were shrouded in a halo of light dark-matter particles, the team reports.
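The 0.511 million electron volt figure above is no coincidence: it is the rest-mass energy of the electron, the energy each photon carries when an electron and an antielectron annihilate at rest. A minimal check, using standard values for the electron mass and the speed of light (the constants are textbook values, not from the article):

```python
# Electron-positron annihilation at rest yields two photons, each carrying
# the electron's rest-mass energy: E = m_e * c^2.
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electron volt

energy_mev = m_e * c**2 / eV / 1e6   # convert joules -> MeV
print(f"{energy_mev:.3f} MeV")       # ≈ 0.511 MeV, the observed line
```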

    John Ellis, a physicist at the European particle physics laboratory CERN near Geneva, Switzerland, is skeptical. He says such a low-mass particle should have been produced by an earthly particle accelerator. “Why hasn't it been seen somewhere before?” he asks. “I find it implausible that [a light] particle would escape detection.” Other scientists have proposed that the antielectrons and electrons in galaxies come from other natural sources, such as supernovae and cosmic ray interactions.

    Even proponents admit that the case for a light dark-matter particle is tenuous. Says Boehm: “I won't say I believe it, but I won't say I don't believe it, either.” In the meantime, dark matter remains one of the darkest mysteries of physics.

  7. ECOLOGY

    Naturalists' Surveys Show That British Butterflies Are Going, Going ...

    1. Elizabeth Pennisi

    Long-term studies of bird, butterfly, and plant populations across Great Britain suggest that, contrary to current thinking, insect species may be disappearing even more rapidly than other organisms. A compilation of decades of work by thousands of volunteers, reported on page 1879, indicates that birds and other vertebrates are not necessarily good stand-ins for monitoring invertebrates. The results “show that we have likely underestimated the magnitude of the pending extinctions,” says Stuart Pimm, an ecologist at Duke University in Durham, North Carolina.

    For decades, biologists and environmentalists have struggled to determine humans' impact on the flora and fauna around them. Insects and other invertebrates are so plentiful and cryptic that it's been hard to get a handle on their numbers or calculate any anthropogenic toll.

    In 1993, Jeremy Thomas, an ecologist at the Natural Environment Research Council Centre for Ecology and Hydrology in Dorset, U.K., realized that British ecologists had a way to monitor local losses of species. In 1970, Thomas and his colleagues, with the help of amateur butterfly watchers, had begun annual surveys of native butterflies. These censuses covered 2861 10-kilometer-by-10-kilometer areas across the United Kingdom that had been set up previously by the government for making maps of the country. Similar long-term studies of native birds and local plants using these squares have been conducted for years.

    Similarities in census techniques and study areas made it possible to compare the three types of populations. Jeremy Thomas and colleagues “were able to compare the rates of decline in these groups in a way that has not been achieved previously,” says Chris Thomas (Jeremy's brother), an ecologist at the University of Leeds, U.K. Adds Pimm, “It's the first study to compare widely different taxa with the same methods in the same place.”

    Danger, danger.

    The pearl-bordered fritillary (left) and the large blue are two of the declining native butterflies.

    CREDITS: (LEFT TO RIGHT) DAVID SIMCOX; JEREMY GREENWOOD

    To qualify for the new study, data sets had to include two surveys of each group, separated in time. For example, Jeremy Thomas's team had surveyed 58 butterfly species between 1970 and 1982 and again between 1995 and 1999. The plant work, which covered 1254 species, was done from 1954 to 1960 and again from 1987 to 1999. Birders had tracked 201 avian species between 1968 and 1971 and again from 1988 to 1989. “Britain is the only place with this level of documentation for this long a period,” notes Arthur Shapiro, an evolutionary ecologist at the University of California, Davis.

    After examining the data for a year to make sure their analyses were sound, despite differences in when the surveys were done and the intervals between them, the researchers came up with clear results. Overall, between the first and second surveys, 28% of native plant species disappeared from at least one survey square. Half of the birds showed a similar decline. Butterflies suffered the most: About 71% of those species lost ground in at least one square, Jeremy Thomas and his colleagues report.

    Additional data spanning more than a century from one site in the Netherlands tell a similar story. Jeremy Thomas's team found that the extinction rate for butterflies was two orders of magnitude higher than that for plants and vertebrates. And other insects, including some wasps and flies, seem to be disappearing at alarming rates there as well.

    About 20,000 volunteers contributed observations of birds, plants, or butterflies included in the new study. Shapiro and Chris Thomas point out that sometimes nature sleuths might follow different census procedures or be less skilled in identifying species, making results inconsistent. But others are confident that these volunteers did a good job. British natural history buffs are numerous and have a strong tradition of making careful observations. “All of Britain is surveyed by amateurs of a very high level of competence,” Pimm notes.

  8. CONFLICT OF INTEREST

    Varmus Backs Some Limits on NIH's Consulting Policy

    1. Jocelyn Kaiser

    Former National Institutes of Health director Harold Varmus last week distanced himself from a 1995 policy he issued that led to an expansion of industry consulting by NIH employees. Varmus told an advisory panel that although industry interactions are “real” and “necessary,” some activities should be prohibited for NIH's top leaders and senior staffers who oversee grants.

    Varmus's decision to lift restrictions on consulting served its purpose of attracting talent to NIH's intramural program, he says. It also led to new industry connections. According to a December 2003 report in the Los Angeles Times, NIH staffers accepted hundreds of thousands of dollars in payments for consulting, some of which created potential conflicts of interest. Although NIH Director Elias Zerhouni has found no evidence that these deals influenced patient safety or funding decisions, he formed a blue-ribbon panel to review NIH conflict-of-interest policies, co-chaired by industry executive Norman Augustine and National Academy of Sciences President Bruce Alberts. House and Senate committees (Science, 30 January, p. 603) and the Department of Health and Human Services inspector general also have begun investigating.

    Speaking at the panel's second meeting, Varmus said the policy “had a very salubrious effect on recruitment.” But the 1989 medicine Nobelist now believes that top institute leaders should not consult for companies that could receive NIH funding if there is no higher-level official to take over their duties in case of a conflict. Limits may also be warranted for senior staff members who oversee programs, he said. Although the NIH policy does not apply to the private sector, Varmus says he has “taken a vow” to do no consulting as president of Memorial Sloan-Kettering Cancer Center in New York City. (The two NIH institute directors whose consulting deals were described by the Los Angeles Times no longer consult.)

    Varmus also suggested that the NIH panel issue new guidelines on outside paid work that are more restrictive than his policy, which eliminated time and money limits. He said that an NIH scientist “taking a shower” in the morning should be thinking about NIH rather than any outside work. The monetary limit might be equivalent to an individual's salary, he later told reporters. Although Varmus doesn't favor rigid rules, he recommended that senior administrators publicly disclose any consulting payments. NIH recently received permission from the Office of Government Ethics to make public such data for 66 positions.

    The panel also heard NIH staff members defend consulting arrangements. Transplantation surgeon Allan Kirk, for example, said they are needed because “we're the lowest-paid surgical staff in the country.” Neuroscientist Daniel Weinberger argued that “what one learns on science advisory boards is not trivial” and that NIH staff “would be at a disadvantage” if not permitted to do what is standard among academic colleagues.

    That's not a good enough reason, said cell biologist Jack Bennink, who described himself as “in the vast minority.” “Our primary jobs are as government employees,” said Bennink, who would like the panel to “deny compensation.”

    The panel plans at least one more public meeting before releasing its recommendations on 6 May.

  9. FRENCH PROTESTS

    Government Dangles Fresh Carrots to Dispirited Scientific Community

    1. Barbara Casassus*
    1. Barbara Casassus is a writer in Paris.

    PARIS—In a bid to prevent a mass walkout by its scientists, the French government has aired new concessions on research support. But protest leaders have dismissed the overtures; they plan to decide on 19 March whether to start shutting down labs next week.

    Raising the stakes in a months-long dispute over cash and jobs, research heads voted overwhelmingly last week to stop performing administrative duties (Science, 12 March, p. 1595). As Science went to press, nearly half of the country's 3500 public lab or unit chiefs had tendered administrative resignations or declared an intention to do so; agency directors had not yet acted on the resignations. The protesters had demanded that the government come up with €200 million owed to science agencies and reinstate 550 permanent jobs in government labs that had been converted this year to 3- to 5-year contracts. Upping the ante, they have also called for the creation of hundreds of new university posts.

    The government has met and bettered the demand for cash, agreeing to “unfreeze” €294 million from the 2002 and 2003 budgets. It has also pledged to reinstate 120 full-time civil service jobs. Then last week, research minister Claudie Haigneré floated the idea of establishing a national science foundation that would evaluate and fund projects, finance a much-needed upgrade of scientific infrastructure, and create 5000 new 5-year research posts.

    ILLUSTRATION: TIM SMITH

    Protest leaders, saying they are unwilling to compromise, have spurned the attempt at rapprochement. They continue to enjoy popular support: One opinion poll for the newspaper La Croix found that more than three-quarters of respondents believe the government has not done enough for research.

    At least one agency chief is moving quickly to counter an impression of apathy. Bernard Larrouturou, director of the basic research agency CNRS, last week unveiled plans for an overhaul that would involve merging labs, boosting the proportion of foreign scientists from 12% to 25%, and strengthening the evaluation of grant proposals. He has invited all 1265 CNRS lab directors to Paris on 23 March to explore ways to defuse the crisis.

  10. ASTRONOMY

    Surveys Scour the Cosmic Deep

    1. Robert Irion

    New technology and dedicated time on large telescopes expose thousands of remote galaxies and highlight a basic mystery: How did great waves of star birth in the early universe start and stop?

    Astronomers and statisticians agree: You need a lot of subjects to do a decent survey. In the past, astronomers trying to survey ancient galaxies in the depths of space could only pick them off one by one, like a lone pollster at a single phone. But now, banks of powerful telescopes in space and on the ground have transformed the art of the astronomical survey, with striking results.

    Like archaeological digs that unearth the origins of great civilizations, these new sky surveys are exposing the ancestry of today's universe. Instruments that capture many objects in one exposure are revealing the precursors of modern galaxies shining in x-rays, ultraviolet (UV) and optical light, and infrared. The combined images of thousands of galaxies yield clues about how they assembled and how vigorously their newborn stars blazed at different times in the first half of the universe's history.

    As findings from these surveys cascade into the literature, they are shaking up notions about the evolution of star birth in the young cosmos. Observers have found that some galaxies matured quickly after the big bang and then flamed out, forming giant blobs of stars that may have barely changed in at least 10 billion years. Another population of galaxies kept evolving, churning out new stars for eons and gradually settling into mature but mildly fertile galaxies such as our Milky Way.

    Current theories of galaxy formation can't explain why concussive waves of star birth swept through some early galaxies but not others—and why some of those fierce stellar fires got snuffed after a few billion years. Startled by their own data, a few observers have implied that modelers of the cosmos need new ideas to describe our universe's combustive childhood (Science, 23 January, p. 460).

    Theorists aren't yet ready to revise equations on their cluttered whiteboards, but they agree that the surveys illuminate serious flaws. “We're starting from a shaky foundation,” says cosmologist Carlos Frenk of the University of Durham, U.K. “We don't understand how a single star forms, yet we want to understand how 10 billion stars form.” Fellow theorist Simon White of the Max Planck Institute for Astrophysics in Garching, Germany, concurs: “The simple recipes in published models do not reproduce the star formation we see. Theorists are now having to grow up.”

    Remote.

    A Hubble survey called GEMS reveals the shapes of thousands of distant galaxies.

    CREDITS: BORIS HÄUßLER/MPIA AND THE GEMS COLLABORATION

    The GOODS on star birth

    In the recipes cooked up by White and his colleagues, galaxies and galaxy clusters assemble under the gravitational influence of cold dark matter—unidentified, slow-moving particles that suffuse space. Quantum fluctuations during the big bang 13.7 billion years ago ultimately led to dense pockets of cold dark matter. Galaxies coalesced within the densest clumps, while gravity drew them into weblike filaments.

    Theorists think their models nail the behavior of cold dark matter. Supercomputer simulations of the clumping process reproduce the observed patterns of galaxies and gaping voids at various stages of the universe's growth. The recently discovered acceleration of the cosmos, caused by an even more mysterious “dark energy,” has little effect in the first few billion years after the big bang, when gravity dominates.

    But theories sputter when the subject goes from darkness to light: the populations of stars that turned on inside baby galaxies. Those first stars somehow jump-started mature assemblies of millions to billions of blazing suns. Modelers have not come close to reproducing how and when those lights popped on; really, no one has tried. “The small-scale physics of star formation [is] unimaginably complex,” Frenk says. Magnetic fields, radiative physics, shock waves from supernovas, and gas clouds enriched with varying doses of heavy elements all get into the act.

    Observations ought to help sort things out, but until recently most notable surveys have been either too broad and shallow or too narrow and deep. For instance, the Australian-European Two Degree Field Survey and the Sloan Digital Sky Survey in the United States both cover huge swaths of the sky, showing how gravity has woven hundreds of thousands of galaxies into a gigantic web. But nearly all of those galaxies are modern, seen as they existed within the last several billion years of cosmic history.

    Gravitational assembly.

    Supercomputer simulations show the growth of a galaxy pair from 500 million years after the big bang to today (series, left to right).

    CREDITS: A. KRAVTSOV/UNIV. OF CHICAGO AND A. KLYPIN/NEW MEXICO STATE UNIV. AT NCSA

    At the other end of the scale, the Hubble Space Telescope's new Ultra Deep Field (UDF) peered back in time far enough to spot true “protogalaxies”—unformed masses of stars emerging less than 1 billion years after the big bang (Science, 12 March, p. 1596). But UDF's slice of the sky is too small to give a representative sample of the distant universe. Now, new instruments have opened a middle path: surveys that detect substantial numbers of dim, ancient galaxies in reasonably large chunks of the sky.

    One of them is called GOODS: the Great Observatories Origins Deep Survey. GOODS is a joint program of Hubble, the Chandra X-ray Observatory, and the Spitzer Space Telescope, which views the heavens in infrared light (Science, 19 December 2003, p. 2047). The three satellites are combining to study two spots on the sky—one in the Northern Hemisphere, the other in the Southern—with a combined area almost half the size of the full moon, about 30 times bigger than UDF. Facilities on the ground, including major optical and radio telescopes, are contributing as well, and images are immediately available to all astronomers.

    “This is one of the most exciting periods you can imagine for this research,” says astronomer Anton Koekemoer of the Space Telescope Science Institute (STScI) in Baltimore, Maryland. “We have three telescopes in space, all covering completely different wavelength regimes. It's a unique time.” A fourth satellite, the Galaxy Evolution Explorer, also contributes to studies of star formation by surveying the entire sky in UV light, although it is not part of GOODS.

    The various wavelengths help astronomers interpret what's going on inside each blob of light. In broad terms, x-rays may stream from hot gas spiraling into a central black hole, whereas nests of newborn stars emit intense UV light. Optical light shines from ordinary stars like our sun, and infrared reveals warm cocoons of stellar nurseries and dust clouds. The expansion of the universe stretches this light as it travels to Earth, making it appear redder. For distant objects, this “redshift” grows extreme: UV light shifts into the optical part of the spectrum, optical slides to infrared, and so on.
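The arithmetic behind that stretching is simple; as a rough sketch (not from the article, and ignoring all the subtleties of real spectroscopy), the observed wavelength is just the emitted wavelength multiplied by one plus the redshift:

```python
# Illustrative sketch: how cosmological redshift shifts emitted light
# to longer observed wavelengths. observed = emitted * (1 + z).

def observed_wavelength(emitted_nm: float, z: float) -> float:
    """Wavelength (nm) an observer measures for light emitted at
    emitted_nm by a source at redshift z."""
    return emitted_nm * (1.0 + z)

# The hydrogen Lyman-alpha line is emitted in the UV at 121.6 nm.
# At z = 2 it lands in the near-UV/optical; at z = 5, in the red.
for z in (0, 2, 5):
    print(z, round(observed_wavelength(121.6, z), 1))
```

This is why a distant galaxy's UV light shows up in optical images and its optical light in the infrared, as the article describes.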

    Although Spitzer's survey has just begun, early results from the Chandra and Hubble GOODS fields—published on 10 January in Astrophysical Journal Letters—give tantalizing hints that early galaxies may have matured faster than astronomers assumed. For example, the Chandra fields contain roughly 500 bright blips of x-rays, most of which Hubble also detects. However, about a dozen of the x-ray sources are visible in infrared but not in optical light, a big surprise given their x-ray brilliance.

    Astronomers can come up with only two explanations, both of which have excited the team. The sources could be galaxies with hyperactive cores, shining when the universe was perhaps 2 billion to 3 billion years old. But to hide them from Hubble's view, thick dust would have to shroud the entire galaxies. “We know of nothing like that in the local universe,” Koekemoer notes.

    Instead, the sources may be the first supermassive black holes at the hearts of newborn galaxies, so distant that their light has been redshifted to wavelengths that Hubble can't detect. For that to happen, the black holes must have grown up less than 800 million years after the big bang—just barely enough time to build a huge black hole, says astrophysicist Niel Brandt of Pennsylvania State University, University Park. “A black hole can double its mass every 20 million years, provided you keep the thing fed in its nascent galaxy,” Brandt says. “But you need quite an efficient feeding mechanism to make it grow.” Koekemoer and Brandt agree that Spitzer's survey should settle the debate; closer, dust-shrouded galaxies would appear much brighter in infrared light.
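Brandt's doubling-time figure invites a back-of-envelope check. The sketch below is my own arithmetic, not the article's: assuming continuous feeding at one doubling per 20 million years, a stellar-mass seed has room to grow enormously in 800 million years, which is why the constraint is the feeding mechanism rather than the raw exponential:

```python
# Back-of-envelope sketch (assumptions mine): exponential black hole
# growth at one mass doubling every 20 million years.

def final_mass(seed_solar_masses: float, elapsed_myr: float,
               doubling_myr: float = 20.0) -> float:
    """Mass after elapsed_myr of uninterrupted doublings."""
    doublings = elapsed_myr / doubling_myr
    return seed_solar_masses * 2.0 ** doublings

# A 100 solar-mass seed growing for 800 Myr: 40 doublings,
# a growth factor of 2**40, roughly a trillion-fold.
print(f"{final_mass(100, 800):.2e}")
```

The catch, as Brandt notes, is keeping the hole fed: any pause in accretion eats into those 40 doublings quickly.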

    Life in the desert

    GOODS sees other signs of precocious growth. From the fluxes and wavelengths of light streaming from distant galaxies in the survey, it looks as if the rate of star birth in galaxies had soared far beyond today's modest levels when the universe was just 1 billion years old. The rate steadily climbed for billions of years. Then, starting about 7 billion years ago, when the universe was half its current age, star formation gradually subsided. The unspectacular rate of star birth we see in galaxies today is about a tenth of the peak value.

    What properties made some of the early galaxies so fertile? Answers may lie within a special set of pages in the cosmic history book: the epoch when the universe was between 2.5 billion and 4.5 billion years old. Astronomers call this era the “redshift desert,” because until recently they lacked the tools to pick out dim galaxies whose light arrives at those awkward wavelengths. Hubble's new Advanced Camera for Surveys can stare deep into the desert, but the desert long remained a barrier for telescopes on the ground.

    “The redshift desert is the most exciting era in the history of galaxy formation,” says astronomer Charles Steidel of the California Institute of Technology in Pasadena. “It's when a lot of the stars in today's universe formed. The entire universe of massive galaxies was transformed from something that looks completely foreign to something that looks similar to what we observe today.” As star birth raged steadily, some of today's grand spiral galaxies may have arisen largely in isolation, whereas mergers among smaller galaxies probably led to giant elliptical systems. However, the details of that hierarchy are poorly known.

    Now, several teams in the U.S. and Europe have marched into the desert. They use the light-gathering power of the biggest telescopes on the ground and new observing tactics. For instance, researchers at the 8-meter Gemini North telescope on Mauna Kea, Hawaii, used a method called “nod and shuffle” to resolve faint galaxies barely discernible in the infrared glare of the atmosphere. Astronomers completed the far-reaching Gemini Deep Deep Survey by nodding the giant telescope back and forth every minute during several 25-hour exposures. Each nod focused first on galaxies and their surrounding patches of sky, then on the sky alone. By electronically “shuffling” these images on the telescope's detector, the team subtracted the sky's brightness exactly—leaving the faint footprints of scores of galaxies in the previously inaccessible desert.
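The core idea of nod and shuffle is differencing paired exposures so the sky cancels. The toy sketch below is mine and vastly simplified (real pipelines shuffle charge on the detector and calibrate far more carefully), but it shows why subtracting sky-only frames from object-plus-sky frames recovers a source thousands of times fainter than the background:

```python
import numpy as np

# Toy sketch of the sky-subtraction idea behind "nod and shuffle"
# (simplified; assumptions mine). The telescope alternates between
# (galaxy + sky) and (sky-only) frames; subtracting paired frames
# cancels the bright sky, leaving the faint galaxy signal.

rng = np.random.default_rng(0)
galaxy = 0.05        # faint source flux (arbitrary units)
sky = 1000.0         # bright infrared sky background

n_pairs = 20000      # one pair of frames per nod cycle
on = galaxy + sky + rng.normal(0, 1.0, n_pairs)   # object + sky
off = sky + rng.normal(0, 1.0, n_pairs)           # sky only

signal = np.mean(on - off)   # sky cancels pair by pair
print(round(float(signal), 3))
```

The recovered signal sits near 0.05 even though each raw frame is dominated by a background 20,000 times brighter, which is the trick that opened the desert to ground-based telescopes.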

    “Over the last year, the redshift desert has become the redshift dessert,” jokes Gemini astronomer Roberto Abraham of the University of Toronto in Ontario, Canada. And as the team announced recently at a meeting of the American Astronomical Society,* that “dessert” appears rich with galaxies that already seem old and inactive. The team believes it has spotted giant elliptical galaxies in which nearly all stars formed just 1 billion to 2 billion years after the big bang. According to prevailing theories of galaxy assembly, galaxies so massive could not have assembled that early, Abraham says: “If you look far enough away, the ellipticals should just evaporate. There's just no way around it.”

    Other researchers interpret the Gemini data more cautiously. “It's not quite clear these systems are really old,” says astronomer Rogier Windhorst of Arizona State University in Tempe, pointing out that heavy dust can make galaxies appear redder—and older—than they really are. But other results appear to bolster the broad strokes of the survey's claim. For example, a team from the European Southern Observatory (ESO) used one of the four 8.2-meter telescopes in the Very Large Telescope array in Paranal, Chile, to find a “sizable sample of extremely red objects” in the middle of the redshift desert. “They seem to be quite clustered, and their mass already is like that in the local universe,” says astronomer Emanuele Daddi of ESO in Garching, Germany.

    Hawaiian twin.

    The Gemini North telescope unveiled old, massive galaxies in a recent survey.

    CREDIT: GEMINI OBSERVATORY

    Steidel's own work at the twin 10-meter Keck Telescopes at Mauna Kea also exposes massive, distant galaxies, but the objects in his survey are still spawning new suns. The stars born in that long-ago window of time—stars that make up both the red ellipticals and the old, bulging centers of galaxies like our Milky Way—formed under conditions vastly different from today's, Steidel says. “Most star formation in the universe today takes place quiescently in thin galaxy disks. That's not the case at high redshift—it was much more violent then.” Indeed, he notes, star birth was up to 1000 times more intense within active pockets of some early galaxies than it is now.

    That tumultuous activity may have been self-limiting. Fiercely active clouds of star formation would create hordes of gigantic stars—the kind that end their lives within millions of years by exploding into supernovas. Wave after wave of explosions would propel gas far into the spaces between galaxies. There it could get trapped in a hot plasma, unable to seed further star birth.

    In that way, Steidel believes, supernovas may prove critical for models of galaxy evolution. Galaxies that grow up within regions destined to become dense clusters may transform rapidly into giant ellipticals, because of relentless collisions with neighbors. Then, supernovas could halt star formation within these hot, active nests at an early age in the universe—creating the massive, old, red objects seen by the surveys.

    A soupçon of violence

    Supernovas are not the only violent ingredients that theorists may add to tweak their recipes of how galaxies evolve. Both Carlos Frenk and cosmologist Rachel Somerville of STScI think another key missing piece is the vast outpouring of energy from the cores of young galaxies.

    Black holes probably lurked at the hearts of most galaxies from the earliest times. As these young active cores matured into full-fledged quasars, they would have launched ferocious winds—blasting outward at a significant fraction of the speed of light, according to some observations. That process could have selectively shut off star formation in the most massive, hyperactive galaxies, Somerville postulates. Once closed for business, those galaxies would have aged passively into old red blobs.

    But no single new input will fix the theoretical problem of how to get the first galaxies to light up, Somerville admits. “We must put star formation into the context of the whole galaxy,” she says. “Compressional waves trigger it, rotational shear fields disrupt it, supernovas and winds from the core might blow gas back out. So far, we can't make a simulation that has all of these processes in it.”

    Whatever the solution, cosmologist August Evrard of the University of Michigan, Ann Arbor, thinks his fellow theorists may need to ignore their pet ideas about how stars form. “We have almost too much information from our local universe,” he says. “People are constrained by what they see in our backyard. High-redshift systems are much gassier, with different chemical compositions. There may be another mode of star formation that occurred in the early universe.”

    If a new mindset is needed among theorists, the emerging culture of group surveys in astronomy might help catalyze it. But for longtime observers, major surveys are bittersweet.

    “One of astronomy's charms was that you didn't need a huge team and hundreds of hours of telescope time to make an impact,” says Mark Dickinson of the National Optical Astronomy Observatory in Tucson, Arizona, leader of Spitzer's GOODS team. “That's still true, but now surveys are a bigger investment of time for a smaller number of projects. It means that more individuals have to rely on archived databases.”

    Still, Dickinson is glad to provide universal access to the enormous sets of data. After all, some young astrophysicist scouring online may come up with a bright new idea about how to set galaxies ablaze with starlight.

    • * 203rd national meeting, Atlanta, Georgia, 4 to 8 January.

  11. ECOLOGY

    A Bid to Save the 'Galápagos of the Indian Ocean'

    1. Eva Sohlman*
    1. Eva Sohlman is a writer in Stockholm.

    Yemen hopes to balance economic growth and conservation to protect the dazzling flora and fauna of the Socotra Archipelago

    HADIBOH, YEMEN—Abdulkarim Al-Eryani points to several scrawny goats gnawing on the saplings of an unusual tree with a stiff, parasol-shaped crown and long, spiky needles. The dragon's blood tree is under attack, and experts worry that this icon of Yemen may be edging toward extinction. “It's a disaster,” says Al-Eryani, a Yale-trained microbiologist and a former prime minister who now serves as an adviser to Yemen's president.

    Al-Eryani is spearheading an environmental rescue mission on Socotra, dubbed “the Galápagos of the Indian Ocean” for its assemblage of one-of-a-kind plants and animals. Last year the islands—Socotra, by far the biggest, Abd Al Kuri, Samha, and Darsa—were declared a biosphere reserve by the UNESCO Man and the Biosphere Programme, a status that is helping the Yemeni government raise funds for conservation and sustainable use of the archipelago's resources. Al-Eryani—who chairs the Socotra Conservation Fund, set up in 2002 to attract private donations—and likeminded colleagues are trying to rein in plans to make the biological hot spot a tourism hot spot too. Socotra is “a real diamond,” says Edoardo Zandri, chief of the United Nations-sponsored Socotra Conservation and Development Programme (SCDP). But “as the locomotive of development picks up speed,” he says, “the natural balance between people and nature is at risk.”

    Arabian eden.

    Ten plant genera are found nowhere else in the world.

    Socotra is among the top 10 islands in the world in the number of endemic species, says Anthony Miller of the Royal Botanic Gardens in Edinburgh, U.K., who has done fieldwork there. “It's up there with the Galápagos and the Canary Islands,” he says. About a third of the main island's 900 plants are endemic: species such as the poisonous Socotran desert rose with its alluring pink flowers and the cucumber tree with branches drooping like dreadlocks over a bulbous trunk. Most famous of all may be the dragon's blood tree and its dark-red resin used in dyes and rust-resistant paint. Endemic animals include 24 species of reptiles, at least seven kinds of birds, and an array of land crabs, centipedes, and dragonflies.

    The island's denizens are under siege from several quarters. Goats are one obvious villain: They are at least partly to blame for measured declines of dragon's blood trees, says Gary Strobel, a plant pathologist at Montana State University, Bozeman, who led an expedition to Socotra last fall. Also harming the trees, U.N. studies indicate, is the fact that the local climate is becoming more arid, making it harder for seeds to germinate and reducing sapling survival. But the gravest threat, Al-Eryani and others say, may well be ambitious plans to woo tourists.

    Socotra was long cocooned from the commercial development now menacing its fragile ecosystems. Heavy monsoon winds from May to September make boat landings treacherous, a passage that pirates rendered even riskier until the early 20th century. The Cold War also kept Socotra cloistered: The Soviet Union had a naval base at Socotra, then part of Marxist South Yemen, putting it off-limits to the outside world. Things began to change after the unification of North and South Yemen in 1990. But it wasn't until 1999, when passenger jets started flying to Hadiboh, the main town on Socotra, that travel to the archipelago became routine.

    Weird and wonderful.

    Among the plants the U.N. and Yemen are striving to save are the Socotran boab (far left) and the dragon's blood tree.

    CREDITS: GARY STROBEL

    Proposals are now in the works to dot the island with five-star hotels, a golf course, and a casino. Local officials have claimed that these initiatives will pull many of the island's 44,000 inhabitants out of poverty. In the short term, at least, conservationists have an unlikely ally in their efforts to scale back such development: Militant fundamentalism in Yemen, the ancestral home of Osama bin Laden, has put a crimp in tourism, Al-Eryani says, “protecting Socotra from uncontrolled exploitation.”

    That may have secured some breathing room for an effort begun in 1996, when Yemen and the U.N. Development Programme (UNDP) teamed up to try to avert a collision of interests in Socotra. With input from some 100 scientists, the U.N. body cataloged the archipelago's life forms. “Hundreds of new species of animals, fish, coral, and plants were discovered,” says Zandri.

    The inventory led to a wider U.N.-led scheme in 2002, the SCDP. The program is training Socotri nature guides and supporting a new ecotourism society. SCDP aims to coordinate government and donor efforts to improve conditions for the archipelago's residents and to protect biodiversity. The Socotrans themselves have set up 30 marine sanctuaries in which no net fishing is allowed and have banned the cutting of live trees.

    Extending these efforts, UNDP last year signed a $5 million, 5-year program for sustainable development and conservation, with support from Italy and Yemen. The effort will help create sustainable fisheries and improve local management of protected areas. Plans are also afoot to form a partnership with the Galápagos Archipelago National Park in Ecuador to draw on the park's experiences in fending off invasive species and coping with tourism over its 45-year history.

    “The island is at a crossroads between becoming a world-class ecotourism destination, largely managed and protected by the local people themselves, or becoming a prey for unscrupulous developers,” says Al-Eryani. In pushing for the former, Al-Eryani is hoping that renewed scientific interest will provide more ammunition for making the case that Socotra's biological riches are far too precious to fritter away.

  12. GENOMIC MEDICINE

    Gene Expression Tests Foretell Breast Cancer's Future

    1. Ken Garber*
    1. Ken Garber is a science writer in Ann Arbor, Michigan.

    The first patient profiling tests based on gene expression hit the market early this year. With others soon to follow, they will test genomic medicine's power

    One of the hardest decisions a breast cancer patient faces is whether to undergo chemotherapy. Most patients whose cancer has not spread to the lymph nodes are cured by surgery and the estrogen-inhibiting drug tamoxifen, but a small minority will go on to develop distant metastases and die. Chemotherapy reduces that risk but has its own dangers, including heart failure, leukemia, and life-threatening infections.

    Doctors can't guide patients very well because they too are on the fence. A pathology examination and molecular markers broadly hint at whether a tumor may metastasize, “but for each individual, we don't really know,” says Clifford Hudis, chief of the breast cancer medicine service at Memorial Sloan-Kettering Cancer Center in New York City.

    Help may have arrived from genomics, a field long on publicity but, until now, short on products for the clinic. On 26 January, Genomic Health, a private company based in Redwood City, California, quietly launched Oncotype DX, a testing service for determining a breast cancer patient's chance of developing metastases. The test is based entirely on the pattern of genetic activity in her tumor. “Our hope is that the data just help shed some more light on the real risk for these women,” says Genomic Health co-founder and CEO Randy Scott.

    This somewhat understated goal masks an intense competition for breast cancer patient business. Agendia, a Dutch start-up company, launched a genomic profiling test in early March. At least two other companies are developing similar tests. Oncotype DX costs $3400, and the potential market is huge. Over 215,000 women will be diagnosed with breast cancer this year in the United States alone, and the Genomic Health test “might be relevant to up to a third of breast cancer patients,” says Peter Ravdin, a medical oncologist at the University of Texas Health Science Center in San Antonio.

    But the new tests are reaching the market without having been validated in true prospective clinical trials. They haven't been subjected to Food and Drug Administration (FDA) review, either, because they're being marketed as a lab service rather than as diagnostic kits. How well the tests perform is critical not only for breast cancer patients, but as one of the first tests of genomic medicine.

    Predicting trouble

    Genomic Health has a low profile, but it has pedigreed executives. Scott was a founder of Incyte, a Palo Alto, California, genomics company marketing a popular gene database, and Chief Medical Officer Steven Shak shepherded Herceptin, the first monoclonal antibody for treating solid tumors, to market while he was at Genentech in South San Francisco. For its first clinical project, Genomic Health decided to create a test that could anticipate the course of breast cancer, selecting 250 genes for their presumed influence on cancer progression. Company scientists measured the genes' expression in tumor samples. Sixteen genes, as a group, strongly predicted which tumors metastasized; they and five “control” genes were ultimately chosen for the screen. Genomic Health then tested this gene panel in 668 stored breast tumors from a different trial and unveiled the “validation” results in December 2003 at the San Antonio Breast Cancer Symposium.

    In the validation study, the test was much more accurate than current methods oncologists use to estimate metastasis risk. The investigators analyzed and classified gene expression patterns in tumors without knowing whether the person's cancer had spread. The “patients”—all of whom had been treated years earlier—were separated into low-, intermediate-, and high-risk groups. Only 6.8% of the patients in the low-risk group had gone on to develop distant metastases after 10 years, their files revealed, compared to 30.5% in the high-risk group.

    Reading the genes.

    A new test developed by Steven Shak (above) and colleagues uses RNA extracted from breast tumors to predict cancer spread.

    CREDIT: GENOMIC HEALTH INC.

    But the test has limitations. The biggest: It doesn't identify which patients are likely to benefit from chemotherapy. Researchers assume that patients most at risk of metastases would benefit most from chemotherapy, but responses to treatment will vary, and “other sets of genes [might] predict a benefit from chemo,” says Hudis, who sits on Genomic Health's scientific advisory board. Genomic Health, like many other companies, is busy profiling genes involved in chemotherapy response and hopes to add those tests later. Oncotype DX “is really just the first step,” says Scott.

    Other limitations include the fact that Oncotype DX is based on tumors that express the estrogen receptor, taken from patients whose cancer had not spread to lymph nodes. Fewer than half of all breast cancer patients fall into this category, raising a question about how well the test would work for others. What's more, the patients took tamoxifen, so the test might have been picking up in part on their response to that drug and might not work as well in patients taking other anti-estrogen drugs.

    Oncotype DX did not undergo FDA review. As a “clinical laboratory reference service,” or “home brew,” it is exempt from the standard review FDA requires for diagnostic kits. But it was approved by the California State Licensing Agency, which regulates labs in the state. FDA review might have aired several issues. The tumors themselves “come from a study that was 20 years old,” points out Daniel Hayes, a medical oncologist at the University of Michigan, Ann Arbor. They might have changed over that time, he suggests. Or perhaps the sample was selective, collected only from patients with large enough tumors to have material left over after pathology tests. The lack of FDA review “is too bad,” says Edison Liu, executive director of the Genome Institute of Singapore.

    But Hayes and others consider the evidence strong, based on what they heard at the San Antonio meeting. “If [Oncotype DX] is as good as it seems to be, I'm certainly going to suggest for some patients—those patients with stage one breast cancer—that this test could be a very valuable test,” says Ravdin.

    The ultimate test of Oncotype DX may be on the way. The National Cancer Institute and Breast Cancer Intergroup of North America, a consortium of clinical trial cooperative groups, are discussing a prospective clinical trial. In addition to tracking metastasis and survival, the trial may settle the lingering question of whether the test predicts patient response to chemotherapy.

    Bright line.

    Colored dots representing activated genes highlight tumors that are likely to metastasize.

    CREDIT: M. SEVERINO/MERCK RESEARCH LABORATORIES

    The challengers

    Neither Genomic Health nor its competition is waiting for future trials. Agendia, based in Amsterdam, began selling its Web-based breast cancer profiling test, called Mammaprint, last month, according to Chief Scientific Officer René Bernards. The test costs €1650.

    Agendia's test is very different from Genomic Health's. Bernards and colleague Laura van't Veer were working at the Netherlands Cancer Institute (NKI), which had an extensive bank of frozen tumor samples dating back to 1983. Bernards contacted Stephen Friend, a pediatric oncologist who founded Seattle biotech company Rosetta Inpharmatics, a pioneer in DNA microarrays, chips that light up to reveal the activity of thousands of genes. “I went to see Stephen and said, ‘Listen, I have a beautiful tumor collection here in Amsterdam, you have beautiful technology,’” Bernards recalls. “‘Let's get together and do this project on gene expression profiling.’”

    The group used an approach that scans the entire genome. They generated expression data for 25,000 different genes from a group of 117 young Dutch women with breast cancer that hadn't spread to the lymph nodes and whose clinical course had been followed for more than 5 years after surgery. Ultimately, they identified a 70-gene expression signature to indicate good or bad prognosis.

    The work was proof of principle for the then-controversial idea that individual tumors are programmed early to metastasize or not (Science, 14 February 2003, p. 1002). “Before 4 years ago, biologists assumed that there were … neutral small tumors that would become good or bad tumors later,” says Friend. But “sometimes very early tumors can be predestined to be bad players.” Friend and Bernards were the first to show that global genomic expression patterns could identify such tumors.

    The group validated their gene panel in 295 tumors from NKI—a study published in The New England Journal of Medicine in December 2002 (Science, 11 April 2003, p. 238). Almost 56% of lymph node-negative patients with a poor prognosis signature developed distant metastases after 10 years, compared to 13% of patients in the good prognosis group. Unlike Genomic Health, Rosetta and NKI looked only at tumors from patients who had not yet been treated with tamoxifen or other drugs, so there's no chance that treatment skewed the results. And the Rosetta group didn't preselect its genes as Genomic Health did. “If you start out in a completely unbiased fashion, you come up with, I think, a more powerful set of genes than if you take your own best guesses,” says Bernards.

    But the Rosetta-NKI study also has flaws. It, too, used stored tumor samples and case records. It tested only women under 53 years old at a single institution, raising doubts that the results can be safely extrapolated. And it incorporated its test group—the group it derived the 70 genes from—into its validation group, thus potentially inflating the results. “To me that's just not kosher,” says Hayes. Although the authors made statistical adjustments, Friend concedes that the study should have been designed differently. Two prospective trials of the Agendia test are under way, one in Europe and one in the United States.

    Although Genomic Health and Agendia are the furthest along, at least two other companies, Celera Diagnostics in Alameda, California, and Arcturus Applied Genomics in Carlsbad, California, are developing breast cancer prognostic tests. Each company has a different technology platform, but the goal is the same: to predict which women's cancers will recur.

    Looking for validation.

    Genomic test results from stored tumors were checked against actual patient outcomes.

    CREDIT: GENOMIC HEALTH INC.

    “There will be a lot of demand for these tools, generated by the patients themselves,” predicts Liu. But how doctors and patients will apply test results is far less certain. Bernards says that such tests may lead many women with a low-risk signature to forgo chemotherapy. “We expect that in Europe we can achieve a 25% to 30% reduction of chemotherapy,” he says. Ravdin notes that “aggressive chemotherapy causes a risk of 1% of something really bad happening.”

    But American doctors are generally more prepared than Europeans to order chemotherapy for marginal cases. So it's possible that testing won't decrease overall chemotherapy use in the United States. “Actual chemotherapy use will probably stay the same,” Scott says. “It's just that you'll now do a better [selection] job.”

    Few doctors are likely to rely wholly on the test results. “Most of us aren't going to be brave enough to withhold chemotherapy from a patient who otherwise looks like they need it, on the basis of this test, which has not yet been validated for the chemotherapy decision,” says Hudis. But if traditionally low-risk patients test for a “bad” tumor, the test could save lives by tipping these patients toward chemotherapy, Hudis points out.

    For all the weaknesses of these tests—the validation questions, the absence of prospective trials and FDA review, and the inability to predict chemotherapy response—they provide a new, more personalized dimension of information to women agonizing over a life-or-death decision. “Right now, even if it's a little bit wrong,” says Hayes, “it's better than what we've got.”

  13. INTELLECTUAL PROPERTY

    NIH Roils Academe With Advice on Licensing DNA Patents

    1. David Malakoff

    The National Institutes of Health urges universities not to strangle the goose laying the golden biotech eggs

    SAN ANTONIO, TEXAS—When academic scientists Stanley Cohen and Herbert Boyer successfully spliced a functioning foreign gene into a bacterium in 1973, the discovery helped launch the biotechnology revolution—and ultimately produced a blockbuster patent that earned the inventors and their universities some $300 million. Since then, U.S. universities have patented more than 4500 DNA-based discoveries. Although few have paid off like Cohen and Boyer's, the patents have helped attract the type of massive private investments needed to move campus discoveries into the clinic.

    Critics, however, argue that academia's eagerness to patent genomic inventions is having some negative side effects. Some campuses have licensed discoveries exclusively to a single company, for instance, reducing competition that might spur innovation and drive down prices. And some experts worry that a growing thicket of patent-related legal restrictions—especially on research tools—could strangle future biomedical research.

    This month the National Institutes of Health (NIH) offered a proposal aimed at clearing out some of the patent undergrowth. But the draft guidelines, unveiled here at a meeting of university patent experts,* are being criticized as premature and based on anecdote rather than evidence. Meanwhile, academic researchers and the U.S. National Academies have launched studies of DNA-based patents intended to inform the debate. “There is often more rhetoric than data,” says Robert Cook-Deegan, a policy specialist at Duke University in Durham, North Carolina.

    Gene king.

    The University of California has patented more DNA discoveries than the government or any company has.

    SOURCE: L. WALTERS/KENNEDY INSTITUTE OF ETHICS/GEORGETOWN UNIV.

    NIH officials emphasize that their draft guidelines, labeled “best practices for the licensing of genomic inventions,” are a work in progress. NIH technology transfer specialist Jack Spiegel advised the gathering of patent administrators that federally funded researchers should seek to patent DNA-based inventions only if the inventions will need “significant” private sector investment to become products. And any patented inventions should be licensed as widely as possible, with owners giving nonprofit researchers and public health agencies easy access. “An exclusive [licensing] arrangement may not be the most beneficial one for the public,” the draft concludes.

    Although the draft has not yet been circulated widely, university officials who have seen it say much of it is not controversial. “Many of us are already doing these things,” says Thomas Ittelson, who handles technology transfer issues for the Massachusetts Institute of Technology's (MIT's) Whitehead Institute in Cambridge. For instance, making sure that licenses allow academics and public health agencies to freely use patented technologies has become standard practice at major institutions, he says. Still, he and others worry that NIH, although well-intentioned, may be moving too quickly. In particular, they are concerned that the guidelines could harden into regulations accompanying grants—as happened with earlier NIH guidance on licensing biomedical research tools.

    Growth curve.

    The number of U.S. patents on DNA products took off in the 1990s.

    SOURCE: L. WALTERS/KENNEDY INSTITUTE OF ETHICS/GEORGETOWN UNIV.

    That could codify some language that troubles university officials. The draft suggests, for example, that exclusive licensing of gene-related patents is having “detrimental short-term and long-term effects on both the quantity and quality” of health care. University-based technology transfer officers contacted by Science described that concept as “annoyingly half-baked … overly simplistic.” One wondered, “Where did that come from?” Several noted that small biotechnology companies often need to have exclusive rights to a nascent technology to raise sufficient venture capital. “The vibrancy of the biotechnology industry is dependent on these exclusive licenses,” says Ittelson, adding that he “was dismayed that NIH would even think about drafting guidelines before we had all the facts.”

    NIH officials were somewhat surprised by the negative reaction. “I'm not sure we realized the impact that some of the language would have,” says one. The draft had been circulating within the agency for months, the official said, and was intended to reflect NIH's own approach to patenting and licensing. The NIH officials reassured critics that they have no timetable for finalizing the guidelines and welcome all comments.

    NIH is also sponsoring several studies aimed at providing new data on the scope and impact of university gene patents. One is a nine-scholar effort led by former MIT licensing specialist Lori Pressman, Cook-Deegan, and ethicist LeRoy Walters of Georgetown University in Washington, D.C. The team has begun to analyze the nearly 4400 DNA-based patents held by 30 top universities to determine what discoveries academia is patenting and how they are licensed. Some preliminary findings—including that very few of the patents have been licensed to more than 10 users and nearly one-third have never been licensed at all—may surprise some people, notes Walters. “The data may help us get past the anecdotes,” he says. The project should be completed later this year.

    In the meantime, Walters's team has been sharing some of its numbers with a new National Research Council panel on gene patents that began work earlier this month. Led by Princeton University President Shirley Tilghman, the panel aims to identify where intellectual property is either creating problems for genomic research or helping fuel new discoveries. It hopes that the result will inform all sides of the debate over how universities should handle DNA-based patents.

    • * Association of University Technology Managers 2004 Annual Meeting, 4 to 6 March, San Antonio, Texas.

  14. INTELLECTUAL PROPERTY

    Most Academics Eschewing Patents?

    1. David Malakoff

    SAN ANTONIO, TEXAS—A quick scan of the business pages might suggest that every scientist on campus has received a fistful of patents and started a company planning to make millions from those inventions. But the reality is quite different. A new study finds that a majority of researchers at 11 major institutions didn't try to claim a discovery as intellectual property, much less become a CEO.

    Those results “appear to at least partly debunk the rhetoric” that universities are becoming overwhelmingly commercial, says economist Jerry Thursby of Emory University in Atlanta, Georgia, who conducted the study with his wife, Marie Thursby of the Georgia Institute of Technology in Atlanta. The pair presented their preliminary findings earlier this month here at the annual meeting of the Association of University Technology Managers.

    The Thursbys tracked 4702 science and engineering researchers who were on the faculty in 1993 at universities ranging from Harvard to Texas A&M. They then documented each researcher's invention “disclosures”: the first step toward obtaining a patent or license. Nearly 63% of the group, they found, didn't make a single disclosure in the 16 years (1983 to 1999) covered by the study. They also confirmed that relatively few researchers produce the bulk of the disclosures. And their data suggest that researchers from lower-quality departments—as ranked by the U.S. National Academies—may be more likely to disclose than their colleagues at the nation's top programs.

    That last finding was no surprise to Ray Snyder, a former licensing officer at the University of Missouri, Columbia. “I had some faculty who disclosed like vitamins: one a day,” he says. “But those usually weren't [the inventions] I wanted.”

    The Thursbys caution that their snapshot is incomplete. It doesn't include all medical school researchers, for instance, or academics who started work after 1993. The latter group “may be much more entrepreneurial,” says Jerry Thursby. The couple hope to expand and complete their analysis later this year.

  15. SCIENTIFIC LIVES

    Reflected Glory: Life With a Nobelist Parent

    1. Giselle Weiss*
    1. Giselle Weiss is a writer in Allschwil, Switzerland.

    Having a researcher for a parent can stimulate a young mind, but what happens when that parent becomes a scientific superstar?

    BERN, SWITZERLAND—On a snowy morning in mid-January, 35-year-old Silvia Arber was at City Hall here in the Swiss capital to pick up the $75,000 Latsis Prize, the latest in a string of honors the young neurobiologist has racked up for her work showing how brain cells connect to one another. Arber, the daughter of Nobel Prize winner Werner Arber, laughingly recalls how amazed the Swiss National Science Foundation was at the press response to the prize. Rather than the usual few stray phone calls, “it was totally crazy,” says Arber, who is on the faculty of both the Biozentrum and the Friedrich Miescher Institute in Basel. The news even ran on 10 to 10, a popular Swiss TV news magazine. When the interviewers showed up, however, it was clear that it wasn't just the Latsis that interested them: They wanted to know what it is like to be the daughter of a Nobel laureate.

    Having a Nobel in the family attracts some of the attention that any celebrity status brings. But for reasons that have to do with what we expect of people who are famous for their brains rather than beauty or brawn, Nobel celebrity has its own mystique. For the children of Nobel laureates who pursue careers in science, the trappings of the award create a rich environment and opportunities, but they often lead to burdensome expectations.

    Nobel glamour can be bestowed at any age. “I was born with it,” says Stephen Bragg, a mechanical engineer whose father (at 25 the youngest person ever to receive the prize) and grandfather shared an award for x-ray crystallography in 1915. Guri Giaever was 10 when her father, Ivar, woke her up to tell her he'd just won a Nobel Prize for the discovery of quantum tunneling in superconductors. Not understanding what it meant, Giaever got dressed for school and went outside to wait for the bus, only to see a black limousine (sent by General Electric, where her father worked) pull up and a red carpet roll out. “I started to have an inkling what this might be about,” she says. For other sons and daughters, Nobel ancestry came after their own careers were well established. When Rena Lederman's father, Leon, captured a Nobel in 1988 for his part in developing the neutrino beam method used by particle physicists, his daughter was already on her way to tenure in the department of anthropology at Princeton University.

    Helping hand.

    Andrew Davis has helped his 88-year-old father, Raymond, cope with Nobel fame.

    CREDIT: A. DAVIS

    For many families, life reverts to normal once the initial hoopla is over. A student at Britain's Rugby School, Stephen Bragg says that his illustrious pedigree probably meant more to his headmaster than to his classmates, who were more concerned with rugby honors than scientific trophies.

    Children of accomplished scientists typically get to meet and talk to the cream of the scientific world; the sprinkling of that kind of magic dust over fertile minds is only accentuated by the arrival of a Nobel. Bragg remembers being brought down from his room at age 7 or so to “say hello” to Ernest Rutherford, the father of atomic physics. Tobi Delbrück, son of molecular biologist Max Delbrück, winner of a Nobel in 1969 for work on the genetics of viruses, found himself at age 10 on a camping trip in the company of physicist Richard Feynman, a Nobelist in 1965 for the theory of quantum electrodynamics. Tobi yearned to ask Feynman, who had many unorthodox interests, about his legendary skill at picking locks but was too shy. “No, no,” said Max, “go over there and ask him.” Tobi did, and Feynman regaled him, his sister Ludina, and Feynman's son, Carl, with stories about lock picking. The Delbrücks lived in Pasadena, California, but spent long summers at Cold Spring Harbor, where Tobi got to know virtually the entire molecular biology community “as a kind of extended family.” Similarly, Vilhelm Bohr, the son of Aage Bohr and grandson of Niels Bohr, both physics Nobelists, recalls what a “privilege” it was that leading scientists—especially physicists—used to assemble at the house in Copenhagen where he grew up and take time to talk to him and his brother and sister.

    Carrying the torch.

    Silvia Arber is a prizewinner in her own right.

    CREDIT: S. ARBER

    Just like any parents, Nobel laureates can have strong views about their children's career choices or can leave it up to them. “My father didn't push us into science,” says Giaever, who specializes in yeast genomics at Stanford University. Rather, he was simply “concerned that we have jobs after we got out of college.” Other Nobel parents, however, are more assertive. “I remember my father [George Porter] being intensely keen to kind of ‘help,’” says Andrew Porter, who felt somewhat pressured to choose chemistry and physics at age 15 before coming around on his own to biochemistry and molecular genetics—a better fit—later in his career. “The choice of career is very problematic in general,” says one young French researcher for whom “everything” changed after the Nobel and who went to some lengths “to find my own adventure.”

    Even for those who cut their own path, the shadow of a parent's Nobel can create a frustrating standard of achievement. “It's a bit difficult to live up to the reputation,” says Bragg. Moreover, people do assume that intelligence is inherited. That can play out cruelly in school and annoyingly later in life. When giving a talk, says Bohr, “there's always a few people at least who have the expectation that I will say something especially novel and original.”

    In Japan, Nobel laureates are often accorded pop-star status. That may well suit pop stars, but for scientists the sudden and unsought fame can bring unwanted intrusion into their private lives. Raymond Davis, for example, at 88 the oldest person ever to receive a Nobel Prize, could not deliver his own Nobel lecture in 2002 because he suffers from Alzheimer's disease; his son, Andrew Davis, a cosmochemist at the University of Chicago, gave the lecture for him, as he has others since. A few of the newspaper stories around the time of the Nobel announcement concentrated on his father's condition rather than his achievement, and, says the younger Davis, “we didn't want him to become a ‘poster boy’ for Alzheimer's.”

    Scientific family.

    Tobi Delbrück enjoyed being surrounded by scientists, especially the likes of Richard Feynman.

    CREDIT: T. DELBRÜCK

    For some Nobel progeny, what they remember about their parents' scientific life is not the award but their passion and their desire to explain their work. William Lawrence Bragg, says son Stephen, had an extraordinary ability “to see what a mind not attuned to what [he was] doing might find difficult.” Giaever remembers that her father always approached her science homework with a blank sheet of paper and started essentially “from 2 + 2,” a process she recalls as extremely painful but instructive. When her parents went away on long vacations, her grades would plummet.

    What was most notable about the children of Nobelists approached by Science was the diversity of their responses. To some the prize genuinely seems to make no difference. A few scientists declined to discuss it. Others expressed surprise at what thinking about it stirred up. “It's a mixed thing,” says Bohr. “You really have to learn to live with it.” Sometimes, the most indelible impressions are simple ones. Max Delbrück died in 1981 of multiple myeloma. The last year or so of his life he spent at home, in Pasadena, finishing papers. “Lots of visitors from all over the world would come to visit him and just chat with him in the backyard,” says Tobi Delbrück. “He had his couch and would lie down in the sun. … That was a wonderful time.”

  16. Surviving the Blockbuster Syndrome

    1. Robert F. Service

    New technologies have sent drug company R&D costs soaring, but with no corresponding rise as yet in the number of new drugs. Executives are talking about mergers, reorganization, and other ways to kick-start their production engines

    These are anxious times at Merck. Two years from now, one of the New Jersey-based pharmaceutical giant's top-selling medicines, the cholesterol-lowering compound Zocor (simvastatin), will lose its U.S. patent protection. If sales follow the usual pattern, Zocor revenues—now about one-fifth of Merck's yearly tally—will drop from $5 billion in 2003 to just half that within 2 years. Merck executives and stockholders are banking on other drugs in the development pipeline to emerge and fill the void. But troubles have hit here as well. In November, Merck pulled the plug on two experimental drugs winding their way through phase III clinical trials, the last and most expensive premarket approval hurdle.

    If it's any consolation for Merck executives, they are not alone. After a string of heady years in which shareholders consistently earned double-digit returns, many large drug companies face the prospect of leaner times. While their most profitable drugs are going off patent, allowing other companies to sell generic versions, expenses for discovery and development are continuing to climb. Indeed, companies have been on a research and development shopping spree. According to the Pharmaceutical Research and Manufacturers of America, the industry's trade group, pharma companies spent $33 billion on R&D in 2003, a threefold rise since 1990 and nearly 30-fold since 1977.

    But so far this investment hasn't done much to increase output. Since 1996 the number of new drugs approved by the U.S. Food and Drug Administration (FDA) has dropped. Productivity, measured as the number of new drugs per unit of R&D spending, “is getting worse year after year,” says Stephen Hill, chief executive officer of ArQule, a Woburn, Massachusetts-based drug company. “That is not sustainable.” Adds Kenneth Kaitin, who directs the Tufts Center for the Study of Drug Development (CSDD) in Boston, “It's going to force a complete restructuring of the industry.”

    CREDIT: TERRY SMITH

    In a sense, restructuring has already begun. Large pharma companies are struggling to remake their R&D efforts, incorporating suites of new high-speed tools and narrowing their focus to drug classes in which they have the most scientific expertise. Hungry for blockbuster ideas, they're also turning to biotech companies like never before. Merck, for example, inked some 40 licensing deals with biotech and small pharma companies last year alone, something they were loath to do just a few years ago.

    Whether big companies can overcome the drag on productivity is uncertain. Many factors will continue to weigh them down, from the increasing complexity of chronic disease research to the disruption caused by a steady string of mergers within the industry. Some analysts worry that the industry's bottom line may soon take a hit. And this could lead to R&D cutbacks or a new round of mergers. Increasing drug prices sharply as a quick fix won't work, a prominent economist has warned: Patients and insurance companies are already objecting to the cost of new medicines. Company executives say they're aware of all these risks but believe that their massive investment in R&D has prepared the way for new growth—just around the corner.

    Upping the ante

    Worries about productivity are chronic in the pharmaceutical industry; analysts have been wringing their hands for at least a decade. R&D costs were already rising in the 1990s (see chart), and the number of genuinely new drugs approved by FDA, known as new chemical entities (NCEs), was then roughly what it is today. But in the early 1990s drug companies followed a two-pronged approach to combating their troubles. First, they embarked on a massive wave of mergers and acquisitions, allowing the partners to combine R&D efforts and reduce management overhead. Company buyouts quickly boosted the size of the surviving production pipelines. In 1996 alone, for example, there were more than 100 such company marriages.

    Lagging behind.

    Despite a large increase in R&D spending, the number of new drugs known as new chemical entities (NCEs) has risen only slightly.

    CREDIT: K. KAITIN/TUFTS CSDD

    Second, they pursued blockbusters, drugs with expected sales over $1 billion. Whereas only 15% of drugs fell into this category in the early 1990s, the fraction has risen to roughly 50% today. These compounds typically had several years of patent life left when first released, so the industry's sales figures kept growing as new top sellers came online. “The number of blockbusters was rising steadily,” says Peter Tollman, a pharmaceutical analyst at the Boston Consulting Group. For the top 10 companies, which account for half of all sales in the industry, the boost was enough to generate 9% growth through the 1990s, secure the top spot among Fortune's most profitable industries in 2001, and keep productivity concerns at bay.

    A demographic shift has brought some good fortune to drug companies. As the baby boom generation nears retirement age, the number of Americans taking prescription medicines is skyrocketing—helping account for the number of drugs that meet the blockbuster standard. According to IMS Health, a market data firm in Fairfield, Connecticut, in 2000, drug costs to insurers rose nearly 15%. Although higher drug prices accounted for just under one-third of that increase, the rest reflected the reality that more patients were using medications, and each patient was more heavily medicated. This trend is most pronounced in North America, the leading region in pharmaceutical spending, accounting for 42% of the total, well ahead of Europe, which is in second place at 25%. Health care analysts think the pattern is spreading around the world.

    But even taking an aging population into account, many pharmaceutical industry watchers are concerned that the strategies that worked well in the past decade are nearing the end of their run. Perhaps most worrisome is the industry's dependence on blockbusters, which concentrate company profits in a handful of highly popular compounds, Tollman says. Because blockbusters eventually lose “exclusivity” when their patents run out, they also concentrate risk. Over the next 5 years, for example, drugs worth a whopping $30 billion will lose their exclusive sales rights; generic products are expected to eat into their markets. Although that may seem good for patients, it's forcing big companies to steer clear of antimicrobials, which are seen as not profitable enough (see sidebar, p. 1798). “There are a lot of questions on how [drug companies] will replace sales,” Tollman says. With the modest number of compounds currently in the pipeline, Tollman's forecasts suggest that the total number of blockbusters with patent protection will flatten out within the next couple of years, dragging down the industry's potential to grow (see figure).

    Exposed.

    A surge in the number of drugs going off patent threatens big pharma's growth.

    CREDIT: P. TOLLMAN/BOSTON CONSULTING GROUP

    Another expert who has noticed the gathering clouds is Peter Traber, a former executive at GlaxoSmithKline and now the president of Baylor College of Medicine in Houston, Texas. Mergers helped in the short term, but they also increased the pressure to find blockbusters, he says. Simply adding a few drugs to the product line, each bringing in a few hundred million dollars a year in sales, doesn't cut it for a giant. Pfizer, the industry's biggest company with $45 billion in revenue, for example, needs to release several blockbusters a year just to keep growing at 10% to 15% a year. And investors expect such growth.

    Last year, Pfizer swallowed Pharmacia and bolstered its pipeline, a move that industry analysts agree was successful. But in many cases, the internal chaos caused by a merger may actually harm productivity. As companies mesh large research operations, projects may be put on hold for months, even years, while higher-ups try to sort out strategy. Management reshuffles can be “devastating to R&D,” says Lee Babiss, who heads preclinical development at Roche in Nutley, New Jersey.

    In other cases, executives' ability to wring profits from new discoveries has been outpaced by competitive research. Scientists have learned how to move aggressively into areas that others have exploited. In the 1970s and '80s, pharmaceutical companies commonly had a market to themselves for 5 years or more if they were the first to develop a drug in a new class of medicines; today they're lucky if they get 1 or 2 years. “The time you have on the marketplace is very much shortened,” says Frank Douglas, chief scientist at Aventis in Bridgewater, New Jersey.

    At the same time, insurers are becoming more assertive about cost control, approving only a few medicines in a class and asking doctors to prescribe the least expensive. All of this puts pressure on companies to seek out drugs that will treat conditions that are not already treated or find a new mode of drug action. And that drives up R&D costs.

    R&D challenges

    Research, at least, has been advancing at a dizzying pace in industrial laboratories. But analysts see reason for concern here as well. According to Tufts's CSDD, it now costs companies an average of $897 million—taking account of candidate compounds that fail along the way—to develop, test, and obtain approval for a new medicine. And there are plenty of failures. A staggering 99.9% of compounds wash out of the development pipeline. Most die early in screens that test for a particular biochemical activity or toxicity. But even in the later stages of development, success is spotty: Only one-fifth of compounds that enter human testing ultimately get approved for sale.


    To make matters worse, it's getting more expensive to test medicines in clinical trials. Part of the reason, according to Babiss and others, is that the bottom-line need for blockbusters is forcing the industry to tackle complex objectives; companies are turning away from acute ailments such as bacterial infections and gastrointestinal problems and are targeting more profitable chronic conditions such as high cholesterol and heart disease. “The indications have changed dramatically since the 1980s,” Babiss says. The upshot is that because patients take the new medicines for much longer periods, companies must test candidate drugs on more patients for longer periods of time to ensure that they don't trigger dangerous side effects.

    According to CSDD, the mean amount of time for clinical research in one of the fastest growing sectors—biopharmaceuticals, which include proteins, antibodies, and antisense drugs—has ballooned from about 31 months in the 1980s to nearly 75 months in 2000–02. Many compounds simply don't survive this extended gauntlet.

    To deal with these challenges and to fatten their production pipelines, pharmaceutical companies have spent heavily on new R&D technologies. The industry's first wave of purchases, beginning in the late 1980s, went after a combination of high-speed tools for creating drug candidates: combinatorial chemistry, an approach to making thousands of druglike small molecules in the time it used to take to make just one, and high-throughput screening to assay the compounds for biological activity.

    Despite the high hopes, these technologies “haven't yielded as much as people thought they might,” Tollman says. One of the big lessons from the early days of combinatorial chemistry is that simply making more compounds doesn't translate into finding more drugs. Hill says: “The number of compounds you can make is almost infinite.” But he adds, “The real question is which compounds should I make and screen.” For a time, combinatorial chemists struggled to make the largest collections of molecules possible; now most groups have become more selective, targeting their collections on molecules likely to be effective and tolerated by the body.

    Enthusiasm is now giving way to sober second thoughts about the second wave of high-throughput technologies, including those used in studies of genomics and proteomics. The early results were voluminous but not commercially impressive. “So far [genomic-proteomic analysis] doesn't appear to have delivered a great deal of output,” Hill says. And in a 2 February speech to the National Cancer Institute (NCI), FDA Commissioner Mark McClellan said, “The plain truth is that many of the most dramatic scientific advances that have recently been made in the lab have not transformed medical care.”

    In fact, executives say, the new technologies have allowed the discovery of new drugs to become more complex. Companies historically focused searches for a specific therapeutic goal on about 50 different molecular targets, most often proteins. With the rise of genomics, however, companies quickly discovered thousands of potential gene- and protein-based targets. “We thought we would very quickly validate targets that were critical to disease and agonize or inhibit them as a way to start to find a drug,” Douglas says. “What we found in fact is that validating targets takes a lot of time. This is one of the big disappointments of this era.”

    Pharmaceutical executives remain upbeat, however, that in time high-throughput technologies will boost the industry's output. “I'm very optimistic that the new suite of tools will solve the problem,” Douglas says. “I think we're at the base of the next wave of innovation.” But for now the payoff seems farther away than many had originally hoped. “I think we got maybe too enamored of technology and lost focus of what we do,” Babiss says. “The 1990s were really a boon for us in terms of science. We forgot that we needed to link all of that to disease.”

    Switching targets

    Many analysts have looked at the industry's predicament and concluded that the long-successful strategy of targeting R&D to search for blockbusters is due for an overhaul. “Sooner or later the model is going to have to change,” Douglas says. Kaitin of Tufts says he believes that transformation is now under way. Except for the largest companies such as Pfizer and GlaxoSmithKline, Kaitin notes, most large pharmaceutical companies are trying to find areas to focus on—an approach long familiar to the biotech industry. For much of the last decade, Kaitin says, the blockbuster mode of R&D has been to spread bets widely, putting a modest amount of resources in many therapeutic areas with a high potential of payoff, in hopes that a few would produce big winners. But for most firms, that's simply become too expensive. As a result, companies are increasingly focusing their research on specific areas. Schering A.G., Kaitin points out, has tightened its focus on radiology, dermatology, and reproductive health, and Bristol-Myers Squibb has targeted oncology, its historic area of strength. “These firms are no longer trying to be all-purpose providers of pharmaceuticals in all areas,” Kaitin says.

    Another strategy getting a fresh look, Babiss and others say, is pharmacogenomics: tailoring drugs more precisely to the genetic profiles of the patients being treated. Touted for years, it has been slow to get off the ground, in part because the science is so complex. Perhaps more important, business managers have been skeptical of an approach that limits a drug's market to a subset of patients.

    Whether managers like the idea or not, Babiss and Douglas note, they may be driven toward pharmacogenomics because it offers a way to economize on increasingly costly clinical trials. Even drugs that pass safety and efficacy hurdles work for only about 60% of patients, Douglas says. "Nonresponders" and patients likely to experience ill effects could be excluded in advance by testing for specific biomarkers. That would have two benefits, Babiss says. First, it would make a drug's benefit easier to spot. Second, because there would be less noise in the trial data, it might also give an earlier readout on therapeutic failures, allowing company managers to kill a drug earlier in the process, before the costs of late-stage clinical trials pile up. “You'll be able to weed out compounds early … and quickly reduce clinical costs,” Douglas says. “The bottom line is it's going to happen,” says Babiss.
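    The arithmetic behind that argument can be sketched with a standard two-sample power calculation. Only the roughly 60% responder rate comes from the article; the 0.5 standardized effect size and the z-values (two-sided alpha of 0.05, 80% power) are illustrative assumptions, not figures from any company's trials.

    ```python
    import math

    def required_n_per_arm(effect_size, z_alpha=1.9600, z_power=0.8416):
        """Patients needed per arm for a two-sample comparison
        (normal approximation); defaults give two-sided alpha = 0.05
        and power = 0.80."""
        return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

    true_effect = 0.5        # standardized effect in responders (assumed)
    responder_rate = 0.6     # ~60% of patients respond, per the article

    # In an unscreened trial, nonresponders dilute the average effect.
    diluted_effect = responder_rate * true_effect

    n_unscreened = required_n_per_arm(diluted_effect)   # 175 per arm
    n_screened = required_n_per_arm(true_effect)        # 63 per arm
    ```

    Under these assumptions, screening out nonresponders by biomarker cuts the required enrollment to roughly a third, which is the economy Babiss and Douglas describe.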

    Large pharma companies are making other moves in hopes of boosting their output of new compounds. Three years ago, for example, GlaxoSmithKline restructured its R&D, dropping its centralized approach in favor of six separate research operations, each with a different disease focus and autonomy over which projects to pursue. The hope was that the new structure would provide scientists with an entrepreneurial environment of the kind found in biotech firms. Other companies such as AstraZeneca, Pfizer, and Abbott Laboratories have set up research operations in biotech hot spots such as Boston and San Diego. Last year, the Swiss-based drug giant Novartis opened a new R&D facility in Cambridge, Massachusetts, and said it has a 10-year plan to hire as many as 1000 researchers. The hope is that the concentration of biomedical expertise in the region will spur new scientific breakthroughs that lead to drugs.

  17. Orphan Drugs of the Future?

    1. Robert F. Service

    Eli Lilly pioneered the development of penicillin, vancomycin, and erythromycin a half-century ago, building an empire on their ability to stop bacterial infections. But in 2002 the company decided to refocus its infectious-disease research on more lucrative targets: fighting viruses and boosting the body's own immune defenses. And the Indianapolis, Indiana-based Lilly isn't the only one that's retreating from the battle against bacteria; so are Abbott Laboratories of suburban Chicago, Aventis of Strasbourg, France, and others.

    The trend is part of an industry-wide hunt for bigger and better prizes. Larger financial returns are needed, companies say, because new products must overcome staggeringly high testing costs to prove that they are safe and effective. For a variety of reasons, the barriers are worse for antibiotics, which are rapidly becoming the stepchild of the pharmaceutical industry. And infectious-disease specialists are sounding an alarm.

    [Figure] Resistance. Antibiotics are losing effectiveness against S. aureus (above) and other organisms. CREDIT: KARI LOUNATMAA/PHOTO RESEARCHERS INC.

    “There is an increasing number of [antibiotic] resistant organisms and a decreasing number of drugs in the pipeline,” says David Gilbert, a clinician at Portland Providence Medical Center and the Oregon Health & Science University in Portland and the past president of the Infectious Disease Society of America (IDSA). Gilbert says the combination could be brewing up a “perfect storm.”

    Since their discovery in the 1940s, antibiotics have saved millions of lives. But as antibiotic use has grown, the bugs they were designed to kill have grown more resistant. Just a few years ago, for example, a commonly acquired bug known as Staphylococcus aureus was quickly felled by a penicillin relative called methicillin; today over 50% of hospital-acquired staph infections are methicillin resistant. Other top-selling antimicrobials such as the fluoroquinolones and vancomycin are also now facing increasing resistance.

    Although the need for new drugs is on the rise, the number of successful new antibiotics has been on the wane in recent years. According to John Powers, an infectious-diseases specialist at the U.S. Food and Drug Administration (FDA), 16 new antibacterials were approved by FDA from 1983 to 1987, whereas just nine won approval from 1998 to 2003. And according to IDSA, only two of the compounds developed in the last 5 years had a novel mode of action, which helps evade resistance. Mark Goldberger, the acting deputy director of FDA's Center for Drug Evaluation and Research in Rockville, Maryland, agrees that there is “a concern that we are not seeing as many innovative compounds as we would like.”

    The picture's not likely to improve anytime soon. IDSA's recent survey of 11 of the largest pharmaceutical companies found that of the estimated 400 compounds the companies have in development, only five are antibacterials. And if companies continue to scrap research programs, they will have a “big problem” trying to make up the lost ground later, Gilbert says: “Discovery is not an easy or quick process. There is a huge start-up phase.” Goldberger adds, “If you wait until there is a significant problem” with resistance, it will be “too late.”

    Industry analysts give several reasons for the declining interest in antibiotics R&D; all boil down to money. First, it's a highly competitive market, filled with products that are typically cheap and effective, says Frank Douglas, chief scientific officer of Aventis in Frankfurt, Germany. Targeting a drug to a resistant organism may not improve its chances. It's a “tough sell because of the limited market size,” Goldberger says.

    Second, there's a built-in brake on success. Doctors and public health officials are reluctant to use new compounds; they want to save them as a last line of defense against resistant organisms. “Drug companies have to spend a lot of money generating a drug that the medical community wants to hold in abeyance,” says Peter Traber, who became the president of Baylor College of Medicine in Houston, Texas, last year after serving as the chief medical officer for GlaxoSmithKline, the world's second largest pharmaceutical company.

    Antibiotics also buck another trend: They treat acute conditions and work quickly, whereas companies have become more interested in compounds that treat long-term, chronic conditions, such as obesity and high cholesterol. “As a consumer, you want a drug you don't have to take very long and works very well,” Goldberger says. But that isn't the most profitable type of drug. He adds that “in some cases the economics and the public health imperative do not match up.”

    The problem is getting some attention. Last month, IDSA leaders met with FDA Commissioner Mark McClellan in an effort to make the field more attractive to industry. Goldberger, who attended the meeting, says FDA is looking at ways to streamline the drug-approval process for antibiotics and allow companies to track a compound's effect on biomarkers linked to disease, a move that should simplify clinical trials and reduce their cost.

    View this table:

    But such changes, even if they are adopted, may not improve the underlying economics. IDSA is pushing more sweeping ideas. One proposal would ask FDA to develop a priority list of antimicrobial drugs that need to be developed; companies that invest in products on the list would receive, in return, extended-life patents on these or other products. This would require congressional authorization, and no such bill has yet been introduced. But if such a bill were to pass, it could mean billions of dollars in extra revenues—and possibly a new R&D boom—for antibiotic developers.

    Consumers might view this sort of reward for drugmakers as bitter medicine, Gilbert acknowledges. But he thinks it may be necessary medicine all the same.
