News this Week

Science  17 Mar 2000:
Vol. 287, Issue 5460, pp. 1898
  1. HUMAN GENOME RESEARCH

    SNP Mappers Confront Reality and Find It Daunting

    1. Leslie Roberts

    A year ago, a consortium of pharmaceutical giants and Britain's Wellcome Trust launched a novel venture: They would put together a database of 300,000 genetic markers called SNPs and give them away. They wanted to be sure that all researchers had access, because SNPs promised to pinpoint the genes involved in common diseases such as hypertension, cancer, and diabetes—a heretofore impossible feat. Enthusiasm grew, and SNPs became widely touted as the key to a brave new world of personalized medicine—with drugs tailored to an individual's genotype and simple tests to determine one's risk of specific diseases. But a closed meeting held last week, sponsored by the SNP Consortium and the U.S. National Human Genome Research Institute (NHGRI), concluded that those promises may be harder to achieve than expected.

    The meeting was called to answer a deceptively simple question: How many SNPs are needed to find genes that confer susceptibility to common diseases? The question is not purely academic. When genomics companies like Genset, Incyte, and Celera began building private SNP databases in the late 1990s, National Institutes of Health (NIH) officials, worried that academic researchers would be shut out of the field, started a crash program in 1998 to find 100,000 SNPs. Like the 300,000 of the SNP Consortium, these would be made freely available.

    Since then, investigators have been finding SNPs by the barrelful. The consortium, which has since been joined by Motorola, IBM, and Amersham Pharmacia Biotech, is about to release 27,000 more SNPs, which will bring the total in NIH's SNP database to more than 50,000. But whether those efforts will yield enough SNPs to track down disease genes reliably is anyone's guess, according to participants at last week's workshop. “What emerged from the meeting is a more realistic appraisal of how complicated it is to even make those estimates,” says Nancy Cox of the University of Chicago.

    SNPs, short for single nucleotide polymorphisms, are places along the chromosomes where the genetic code tends to vary from one person to another by just a single base. They are estimated to occur about once every 1000 bases along the 3-billion-base human genome. SNPs in genes or control regions may influence susceptibility to common diseases. Others probably have no function but could provide valuable markers for gene hunters: If they lie close to a susceptibility gene, they are likely to be inherited along with it. Indeed, what galvanized researchers a few years ago was the possibility of building a map with SNPs peppered along the genome as landmarks. With such a map, investigators could compare individuals with a disease to a control group to see whether their SNP patterns varied. If so, the SNPs might lead to the genes involved in that disease.

    In general, the more markers on a map, the easier it is to find genes. But with the cost of identifying each SNP hovering at about $100, according to NHGRI chief Francis Collins, there's a big incentive to use the fewest possible. When the NIH effort began, Collins, Aravinda Chakravarti of Case Western Reserve University in Cleveland, and others pegged that number at 100,000, or about one SNP every 30 kilobases. So they were “rather rattled,” admits Collins, when Leonid Kruglyak of the University of Washington, Seattle, published a paper in Nature Genetics last summer asserting that 500,000 or even 1 million SNPs, spaced every 3 kilobases or less, would be needed to track down susceptibility genes. (Earlier studies had, in fact, pointed in that direction—see Science, 18 September 1998, p. 1787.)
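
    The competing estimates follow from simple arithmetic over the 3-billion-base genome. Below is a minimal sketch of that calculation (illustrative only; the catalog sizes are those quoted above, and the markers are assumed to be spread evenly):

        # Average marker spacing for a given SNP catalog size, assuming markers
        # are spread evenly across the ~3-billion-base human genome.
        GENOME_BASES = 3_000_000_000

        def average_spacing_kb(num_snps):
            """Average distance between adjacent markers, in kilobases."""
            return GENOME_BASES / num_snps / 1_000

        for catalog in (100_000, 300_000, 500_000, 1_000_000):
            print(f"{catalog:>9,} SNPs -> one marker every {average_spacing_kb(catalog):.0f} kb")

    The 100,000-SNP catalog works out to the one-marker-per-30-kilobase figure cited by Collins and Chakravarti, while Kruglyak's million-SNP estimate gives the 3-kilobase spacing.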

    Both estimates may be right, depending on where you are in the genome, explains Nicholas Schork, a geneticist at Case Western Reserve who is on leave at Genset in La Jolla, California. In regions of the genome prone to genetic reshuffling, many more markers may be needed than in relatively stable areas. The trouble, says Schork, is that there's no way to know in advance which type of region you are working in.

    Another confounding factor, according to population geneticist Kenneth Kidd of Yale University, is that the usefulness of any one SNP varies enormously from population to population. Just one-third of the SNPs found so far seem to be widely applicable in all populations, says Kidd. That means investigators who want to study a particular ethnic group will have to find more SNPs.

    Factoring in all these complexities, Collins left the meeting thinking that 600,000 to 1 million SNPs would be ideal. “We should continue to build the SNP catalog into the millions,” especially, he says, as costs will drop once the human genome is sequenced. But, Collins added, considerable progress could still be made with a smaller set of markers. For instance, Allen Roses of Glaxo Wellcome presented unpublished data at the meeting showing that SNPs spaced about 30 kilobases apart were sufficient to pinpoint genes involved in four diseases: psoriasis, migraine, Alzheimer's, and diabetes.

    To Schork, the bottom line is that “SNP mapping works for some diseases, but it won't work for all.” Adds Pui-Yan Kwok of Washington University in St. Louis: “It is not a surefire approach by any means.”

    Despite the uncertainties, official enthusiasm for SNPs seems undiminished. NIH and the SNP Consortium are about to embark on a 3-month project to sequence massive amounts of DNA to help finish the genome project. That effort should turn up a half-million or more SNPs, says Collins. The Japanese government, too, is embarking on an effort to find 200,000 SNPs. Within a year or two, there should be enough SNPs around to figure out whether they will live up to their advance billing.

  2. SCIENTIFIC COMMUNITY

    Oxford Wins a Crown Synchrotron Jewel

    1. Michael Hagmann

    In this tale of two cities, one rejoices while the other pines for what might have been. On 13 March U.K. science minister David Sainsbury announced that DIAMOND, an $880 million synchrotron project that will allow scientists to probe the atomic structure of everything from proteins to ceramics, will be sited at the Rutherford Appleton Laboratory (RAL) near Oxford. Losing out was the heir apparent—the Daresbury laboratory near Manchester, the site of the U.K.'s current synchrotron facility. Many scientists are denouncing the selection. “This is a crazy decision,” says University of Liverpool physicist Peter Weightman, a longtime synchrotron user. “It's a triumph of finance over scientific arguments.”

    Soon after the British government proposed DIAMOND in 1993, financing problems put the project on hold. Five years later, a pair of strange bedfellows came to the rescue. The Wellcome Trust, Britain's largest medical charity, offered to put $184 million toward DIAMOND, and the French government pledged $56 million up front and up to $13 million a year in operating costs (Science, 6 August 1999, p. 819).

    Then the political circus began. Wellcome officials lauded the benefits of DIAMOND at RAL, citing the ease of collaborations with an existing neutron source and a huge research community in the “golden triangle” formed by Oxford, Cambridge, and London. Secretary of State Stephen Byers talked up Daresbury, while his own advisers were touting RAL. Whatever tipped the scales toward RAL, the U.K. government isn't telling. “The decision was arrived at behind closed doors rather than through open discussions,” contends crystallographer Paul Barnes of Birkbeck College in London. A spokesperson for the U.K.'s Department for Trade and Industry suggests that the preferences of Wellcome and the French were key. “If there hadn't been anyone else involved, [Daresbury] would have been an option,” he says.

    To soften the blow, Sainsbury also announced a boost for science in England's Northwest, including Manchester, pledging $80 million in new science spending in the region. But the consolation prize may not halt Daresbury's decline. The government has promised to keep Daresbury's synchrotron running until 2 years after DIAMOND comes online, at least another 7 years from now. “I imagine the whole thing will shut down in the long run because the synchrotron really defines Daresbury,” says physicist Bob Cernik, Daresbury's trade union representative. Four of Daresbury's 270-strong synchrotron staff have left in the last few months, and Cernik fears the brain drain will accelerate.

  3. ASTRONOMY

    Distorted Galaxies Point to Dark Matter

    1. Adrian Cho

    Never have so many astronomers been so eager to claim they can't see straight. Groups working with three different telescopes have detected weak lensing, a distortion of distant galaxies that reveals dark matter strewn across deep space. The results provide a first direct glimpse of the vast tangle of massive, invisible stuff that astronomers and astrophysicists believe makes up most of the mass of the universe.

    Almost as interesting as the results themselves is how the researchers chose to make them public: Within days, all three groups rushed their findings into print—or, rather, into preprint—on Astro-Ph, an unrefereed Web server maintained by Los Alamos National Laboratory in New Mexico. Two did so with some misgivings, to avoid being scooped.

    Such posting frenzies are becoming common, says Tony Tyson, an astrophysicist with Lucent Technologies' Bell Labs in Murray Hill, New Jersey, and leader of one of the teams. “Various groups have their results in various stages, and then somebody jumps in [with a preprint], and then everybody jumps in.” A team working with the Canada-France-Hawaii Telescope (CFHT) in Hawaii made the first splash, followed by a group working with the William Herschel Telescope in the Canary Islands. Tyson's group, working at the Cerro Tololo Inter-American Observatory's Blanco Telescope in Chile, was the third to post its results.

    Nipping at the three groups' heels was the knowledge that weak lensing may prove the best tool for studying the colossal dark matter infrastructure of the universe. Researchers hope to use the technique to measure the distribution of the ripples and undulations in the intergalactic tangle of dark matter—information that would tell cosmologists precisely how the universe grew up after its birth in the big bang.

    To glimpse dark matter, the three teams studied light from galaxies billions of light-years away. Such galaxies appear as faint luminous ellipses in the sky; gravity from intervening dark matter deflects their light, slightly squashing the ellipses in any small patch of sky so that, like schooling fish, neighboring ellipses tend to point in the same direction. But to see the gravitational effect, the astronomers first had to filter out similar but much larger distortions caused by optical imperfections and the atmosphere. For that, they turned to stars within our own Milky Way galaxy that lie close to the line of sight of the distant galaxies. At such close range, weak lensing could not affect the stars' images; any distortions had to be due to optical and atmospheric effects. By calculating how to turn the blurred images of the stars back into points and then adjusting the shapes of the distant galaxies in the same way, the researchers could isolate the distortion due to dark matter—a distortion so slight that each group surveyed tens of thousands of galaxies to see it.
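
    In outline, the correction is a subtract-and-average procedure. The sketch below is a deliberately simplified illustration of that logic with invented numbers, not any team's actual pipeline; real analyses model a point-spread function that varies across the image and use more careful shape estimators:

        import numpy as np

        # Toy model of weak-lensing shape correction. Shapes are reduced to two
        # ellipticity components (e1, e2); all values are made up for illustration.
        rng = np.random.default_rng(0)

        true_shear = np.array([0.01, 0.00])   # tiny coherent distortion from dark matter
        psf = np.array([0.05, -0.02])         # much larger smearing from optics/atmosphere

        intrinsic = rng.normal(0.0, 0.3, (50_000, 2))     # random intrinsic galaxy shapes
        observed_galaxies = intrinsic + true_shear + psf  # what the telescope records

        # Stars should be point-like, so their measured ellipticity is essentially pure PSF.
        observed_stars = psf + rng.normal(0.0, 0.01, (2_000, 2))
        psf_estimate = observed_stars.mean(axis=0)

        # Remove the PSF estimate, then average over tens of thousands of galaxies:
        # the random intrinsic shapes cancel, leaving the faint lensing signal.
        shear_estimate = (observed_galaxies - psf_estimate).mean(axis=0)
        print("recovered shear:", shear_estimate)   # close to [0.01, 0.00]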

    The three groups spent years analyzing their data. They picked up the pace as soon as the competition for recognition was on. On 27 February, the CFHT team posted a preprint detailing its results on Astro-Ph. Within 5 days, the Herschel and Blanco groups followed suit. On 7 March, the CFHT group issued a press release claiming to have seen weak lensing first.

    The three preprints are just a drop in a still-gathering tsunami of unofficial publications in astronomy, astrophysics, and other physical sciences. In 1995 researchers posted 1663 papers on Astro-Ph; in 1999 they posted 5639. Last month alone, 531 papers appeared on the site.

    “[Astro-Ph] has taken over [from the journals] as far as I'm concerned,” says Nick Kaiser, an astronomer at the University of Hawaii, Manoa. “Every morning the first thing I do is read the Astro-Ph e-mail I get.” The server will soon put traditional journals out of business, Kaiser predicts, and formal peer review will give way to some sort of electronic dialogue. That wouldn't surprise Princeton astrophysicist David Spergel. “For me personally, publication doesn't matter,” he says. “I've pretty much stopped reading the journals.”

    The leaders of the three weak-lensing teams don't go that far. Tyson thinks it's presumptuous to post a paper before it's been accepted for journal publication. In fact, he says, his group's paper was in review with a journal when the CFHT preprint forced his hand. Richard Ellis, an astronomer at the California Institute of Technology in Pasadena and leader of the Herschel telescope team, tells a similar story and says he is leery of what can happen when fear of being wrong loses out to fear of being late. “There's a terrible danger that the standards go down,” he says, “that it becomes just a race.”

    Yannick Mellier, an astronomer at the Institut d'Astrophysique de Paris and leader of the CFHT group, is more sanguine. Researchers should post as soon as they are confident of their results, he says; fear of humiliation will prevent them from posting weak or incomplete work. “If you do a bad job in this aspect, submitting a bad or nasty paper, you are almost immediately criticized by your colleagues.” But even he would regret the passing of the traditional journal, he says, “because [journal publication] means a paper has been completely refereed; it has been officially accepted.”

    Whether or not traditional publications survive, weak lensing seems sure to thrive. Each of the three groups has already collected more data with which to sharpen and expand its results. Moreover, the Sloan Digital Sky Survey, a nine-institution collaboration working with a telescope at the Apache Point Observatory in New Mexico, intends to survey a full quarter of the celestial sphere and capture images of roughly 80 million galaxies by 2005. Within a decade, astronomers and astrophysicists may be telling long, detailed stories about the universe's childhood. And you're likely to read them on a server like Astro-Ph first.

  4. SCIENTIFIC MISCONDUCT

    Cancer Researcher Sacked for Alleged Fraud

    1. Michael Hagmann

    It seemed too good to be true: Where others had only disappointing results, Werner Bezwoda found that breast cancer patients, blitzed with drugs and then given a bone marrow transplant, lived longer than patients on standard chemotherapy. Now investigators say the promising findings were indeed too good to be true. On 10 March, officials at the University of the Witwatersrand in Johannesburg, South Africa, brought to a close a 6-week probe, triggered by a report from a U.S. expert team, which concluded that Bezwoda had misrepresented his findings and had failed to obtain approval for the trial before proceeding. The university fired him.

    Besides dashing hopes of a new last-ditch weapon against breast cancer, the ignoble episode may tarnish the public image of clinical research, some experts fear. “The message [Bezwoda's] behavior sends out is ‘Medical researchers should not be trusted,’” says Peter Cleaton-Jones, chair of Witwatersrand's Committee for Research on Human Subjects. Others agree. “It may become increasingly difficult to recruit patients” for clinical trials, says oncologist Robert Rifkin of U.S. Oncology, a national network of cancer clinics.

    Before his fall from grace, Bezwoda had raised high hopes in the cancer community. At the American Society of Clinical Oncology's annual meeting in Atlanta last May, he described a trial involving 154 breast cancer patients whose advanced tumors were removed but who remained at high risk of metastasis. According to his presentation, Bezwoda gave each of 75 patients two treatments with a high-dose drug cocktail. Designed to kill the cancer cells, the bombardment also inflicts heavy collateral damage: It destroys bone marrow, where blood cells are formed. To compensate, Bezwoda transplanted the patients' own marrow cells after each round of chemotherapy.

    Compared to control patients given a low-dose drug therapy, Bezwoda reported, the high-dose group survived about twice as long without a relapse, on average. Similar blitzkriegs have worked against testicular cancer and some leukemias, so “many people were very enthusiastic and thought we should go ahead” with a major trial based on Bezwoda's protocol, says oncologist Marc Lippman of Georgetown University's Lombardi Cancer Center in Washington, D.C.

    Observers were puzzled by a major discrepancy, however: At the Atlanta meeting, three other trials, all similar to Bezwoda's, reported that a high-dose regimen offered no benefits over standard therapy. When Rifkin and others met last December at the U.S. National Cancer Institute (NCI) to sketch out plans for a follow-up study, Rifkin recalls, “we felt we should go over there and have a closer look.” NCI dispatched a seven-member audit team to South Africa on 25 January.

    They were in for what Rifkin calls “a big surprise.” As outlined in the audit team's report, published 10 March on The Lancet's Web page (www.thelancet.com), Bezwoda could produce only 58 records of patients treated with high-dose chemotherapy, 17 fewer than he claimed in Atlanta to have treated. By his own protocol, the majority of patients should never have been enrolled, the auditors reported. Even more disturbing, there were no records on any of the 79 control patients. “It's unclear whether [the missing patients] ever existed,” says Rifkin. Bezwoda offered no documentation that any patients gave informed consent to take part in the trial, and the university's ethics board, when queried by the audit team, had no record that the study had been submitted for review.

    Apprised of these revelations, Cleaton-Jones launched a probe on 31 January. One day earlier, he had received a letter from Bezwoda in which the researcher acknowledged “improving” his results by misstating which drugs were given to control patients. However, asserts audit member Allen Herman, an epidemiologist at South Africa's National School of Public Health in Pretoria, “this was a very narrow admission that did not at all correspond to the full range of his misconduct.”

    According to Cleaton-Jones, Bezwoda resigned before the investigation began, effective the end of March. But a three-member jury that presided over the hearing deprived Bezwoda of that exit, firing him on 10 March instead.

    Bezwoda could not be reached for comment, but in an 11 March statement he maintains his findings are valid. He claims his misrepresentation of the control group “does not invalidate my basic conclusions” about high-dose chemotherapy and patient survival. He denies forging patient records and says he intends to appeal his dismissal.

    The Health Professional Council of South Africa, which has the power to revoke Bezwoda's medical license, has launched its own investigation. Says Herman, “This story is far from over.”

  5. CHEMISTRY

    Nanocrystals May Give Boost to Data Storage

    1. Robert F. Service

    For companies that make magnetic disk drives, the future is scary. Over the past 5 decades, engineers have managed to control the magnetic orientation of smaller and smaller spaces on their disks. That's recently allowed them to increase data storage capacity by a staggering 100% a year. Industry experts aren't sure how much longer they can keep up that blistering pace, however. “Five years out, we don't know what will come next,” says Christopher Murray, a chemist who works on new materials for future disk drives. “It's an unnerving situation.”

    Now, Murray and his IBM colleagues have hit upon an answer that may steady a few nerves. On page 1989, the researchers report creating tiny carbon-coated metallic particles—each just 4 nanometers, or billionths of a meter, across—that they assemble into a thin sheet and bake into a magnetic film that could be used in hard disk drives. Down the road, if each of the tiny particles can be made to store a bit of information as a magnetic field, the films have the potential to hold terabytes of data per square inch, hundreds of times the capacity of today's disk drives. The new nanoparticle films aren't about to hit the computer superstores: Researchers must still work out how to make them compatible with the technology used for writing and reading bits of data to the films. Still, Jim Heath, a chemist and nanoparticle expert at the University of California, Los Angeles, says the progress thus far is impressive. “This is a big deal,” he says. “It means that magnetic recording could be carried down to near molecular length scales.”
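
    The terabyte figure is easy to check with rough geometry. Here is a back-of-the-envelope sketch (illustrative arithmetic only, not IBM's numbers; it assumes one bit per particle and a pitch of about 6 nanometers once the carbon coat is counted):

        # Rough areal density if each nanoparticle on a square grid stores one bit.
        # The 6 nm pitch (4 nm particle plus coating) is an assumption for illustration.
        NM_PER_INCH = 2.54e7

        def terabytes_per_square_inch(pitch_nm):
            bits = (NM_PER_INCH / pitch_nm) ** 2   # particles, hence bits, per square inch
            return bits / 8 / 1e12                 # bytes -> terabytes

        print(f"{terabytes_per_square_inch(6.0):.1f} TB per square inch")   # about 2.2

    A single square inch at that pitch holds on the order of 10 trillion particles, which is where the terabyte claim comes from.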

    Capturing the $35-billion-a-year market for disk drives won't be easy, however. Today's hard disks owe their storage prowess to films made from a cobalt alloy that are rugged and cheap to make. Manufacturers essentially spray-paint magnetic material onto a surface under vacuum and bake it. That leaves a material full of 15- to 20-nanometer magnetic grains whose magnetic orientation can be aligned by a recording head positioned just above it. Typically, a bit of information is stored as the common orientation of hundreds of those grains.

    Engineers have long increased storage capacity by shrinking the magnetic grains in the films, so each bit of stored data takes up less space. But there's a limit to this process: Many magnetic materials, such as cobalt, lose their magnetic behavior when particles shrink below about 10 nanometers. And particles that do maintain their strong magnetic behavior tend to clump together instead of forming an even sheet.

    The IBM team—Shouheng Sun and Murray at IBM's T. J. Watson Research Center in Yorktown Heights, New York, along with Dieter Weller, Liesl Folks, and Andreas Moser at the Almaden Research Center in San Jose, California—managed to get around both problems at once with some clever chemistry. Their strategy was to make tiny particles from iron and platinum, which would start out as weakly magnetic—allowing them to form an array—but then transform them into stronger magnets at the end.

    The researchers started by concocting a solution that included two metal salts—one containing iron atoms, which are hungry for electrons, the other platinum atoms capable of donating electrons. As the salts dissolved, the iron atoms turned to the platinums for electrons, causing the atoms to begin assembling themselves into a ball. Also in the brew were soaplike molecules—oleic acid and oleyl amine. As the particles grew, the soaplike molecules glommed onto the metal particles and stopped their growth at 4 nanometers.

    At this stage, the metal particles were weakly magnetic jumbles of iron and platinum atoms. To make an array, the IBM team simply poured the particles out of the beaker. As the solvent evaporated, the particles nestled down into a regular structure like oranges stacked in a box.

    Next, the IBM researchers baked their array like a sheet of cookies, at 500°C for about 30 minutes. The heat fused the organic molecules into a hard carbon coat that locked the particles in place, and it caused the iron and platinum atoms to segregate into distinct atomic planes, a change that dramatically boosted the magnetic strength of the materials.

    The IBM team showed that these materials can store data faithfully at a density equivalent to that of hard disks on the market today. The particles' small size may even allow researchers to boost that density 10-fold using current read and write heads. But if heads can be improved to manipulate magnetic fields on single particles—and that's a big if—then the films could potentially store orders of magnitude more data.

    Sun and Murray are quick to point out that the new materials need more work. The biggest problem, Murray says, is that conventional recording heads work only if all the magnetic grains or particles on a disk have their crystalline axes aligned with the disk's surface. For now, however, the tiny iron-platinum particles can freeze in place facing any direction. Murray says the IBM team is working on aligning the particles by applying an external magnetic field to their films as they bake. If they succeed, the future of data storage may soon become a little less unnerving.

  6. GENOME SEQUENCING

    Clinton and Blair Back Rapid Release of Data

    1. Eliot Marshall

    It's not often that heads of state wade into a furious quarrel in the scientific community, but both President Clinton and British Prime Minister Tony Blair did so this week. On 14 March, the two leaders announced that they enthusiastically support the rapid release of human genome sequence data, a principle long advocated by Francis Collins, director of the National Human Genome Research Institute (NHGRI), and other scientists in the nonprofit sector.

    Clinton released a joint statement with Blair at Science's press time arguing for the rapid release of human genome data. Afterward, Clinton made some personal remarks that went even further. Speaking at the annual medal of science ceremony at the White House, Clinton urged private companies to “make raw [DNA] data publicly available” and make “responsible use of patents.” The statements were carefully worded to support patents on “new gene-based health care products.” But they seemed directed at the activities of some private data-marketing companies—such as Celera Genomics and Incyte—that have been engaged in high-volume sequencing of human DNA and collecting genes and genetic variations.

    Although the high-level attention to this debate is new, the debate itself is not. The largest DNA sequencing labs funded by the U.S. government and by the Wellcome Trust, a British charity, endorsed very similar principles for data release at a meeting of top genome sequencers in Bermuda in 1996. Typically, these big labs release new human DNA data within 24 hours of production, posting results on the Internet. But the labs' insistence on this practice has caused some friction with the private sector. Recently, for example, talks broke down between Celera and a group of nonprofit centers over how they might collaborate on completing the sequence of the human genome. They clashed specifically on public access to data (Science, 10 March, p. 1723).

    In addition to giving Collins's side of the debate a boost, this high-level endorsement of the Bermuda rules may have an impact on discussions within the U.S. Patent and Trademark Office (PTO). For several years, Collins and former National Institutes of Health director Harold Varmus have tried to persuade PTO leaders that they should not grant patents on simple gene discoveries. In letters and speeches, both have argued that only inventors who clearly describe the “utility” of a gene, such as a plan to develop a medical product, deserve to win a patent. Although the PTO has proposed tighter policies, it hasn't gone as far as Collins would like (Science, 18 February, p. 1196).

    Collins calls the Clinton-Blair announcement a “very encouraging” and “gratifying endorsement” of NHGRI's strategy. But presidential enthusiasm may not carry much legal weight. PTO biotech section leader John Doll says: “It doesn't seem like this is going to affect biotech patenting at all.” And Celera said in a statement that the company “welcomes” the Clinton-Blair policy, calling it “completely consistent” with Celera's plan to publish the human genome in a peer-reviewed journal and make the information “available to researchers for free.”

  7. BIOMEDICINE

    Congress Investigates Fetal Tissue Sales

    1. Gretchen Vogel

    At a packed hearing on 9 March, members of a congressional committee vowed to investigate whether some companies are profiting from the sale of fetal tissue. One committee member said after the hearing that he would introduce a bill requiring researchers to report the source and cost of fetal tissue they use. But—much to the disappointment of antiabortion groups that had hoped the hearing would spark outrage over grisly tales of trade in body parts—the testimony itself turned up no persuasive evidence of wrongdoing.

    Indeed, one key witness, a clinic technician who had made gruesome allegations in a video that an antiabortion group had been circulating on Capitol Hill, backed away from many of the claims he had made on the tape. That left for evidence a network news broadcast, aired the previous night, in which a Missouri pathologist on hidden camera seemed to admit selling fetal tissue for a profit—but committee members disagreed over whether that indicated widespread disregard for the law.

    Under a law enacted in 1993, researchers can pay for the cost of procuring and shipping fetal tissue. However, buying or selling fetal tissue for a profit is strictly forbidden. At the hearing, both Republicans and Democrats voiced support for fetal tissue research while condemning any possible for-profit sales. “Full and vigorous enforcement of the law against the sale of fetal tissue is essential to prevent a negative impact on legitimate research,” said Michael Bilirakis (R-FL), chair of the subcommittee.

    The impetus for last week's hearing arose in November, when several outraged members of Congress presented on the House floor a price list for various fetal organs. They alleged that the price list had come from a company called Opening Lines and was evidence that the company was trafficking in fetal tissue. The House passed a resolution condemning such sales and called for a hearing into the matter.

    The night before the 9 March hearing, ABC News broadcast a report on 20/20 in which the owner of Opening Lines, Missouri pathologist Miles Jones, told a reporter posing as an investor that he could make $50,000 in a week from sales of fetal tissue. On 10 March, the FBI launched an investigation into whether Jones or the Kansas City-area clinic where he apparently obtained tissue broke federal law, said Special Agent Jeff Lanza of the FBI office in Kansas City, Missouri.

    Jones, who could not be reached for comment, had been subpoenaed to testify at the hearing, but he failed to appear; the committee voted unanimously to hold him in contempt of Congress. Two workers from the clinic that allegedly procured tissue for Jones did appear, but both said they did not have personal knowledge of illegal tissue sales. One of those, Dean Alberty, was the key figure in the videotape that had been making the rounds on the Hill. But under oath, Alberty, who admitted being a paid informant for an antiabortion group while working at the clinic, hedged his earlier allegations. He said that researchers would call him to ask what kinds of tissue were available that day, but he “did not discuss prices” with them.

    The committee also called on two scientists who work with fetal tissue: Samuel Cohen of the University of Nebraska Medical Center in Omaha and Hannah Kinney of Harvard Medical School in Boston. Both testified that they had no knowledge of companies that sold fetal tissue for profit. Kinney told the committee that the fetal brain tissue she uses comes either from the pathology department within Harvard Medical School or from the fetal tissue bank at the University of Washington, Seattle, which charges a flat fee of $100, no matter what tissue a researcher requests.

    Despite the lack of evidence, the committee said it will continue its investigations. Spokesperson Steve Schmidt said committee staff are looking into the pricing practices of several organizations that provide fetal tissue to researchers, including the University of Washington. And committee member Tom Coburn (R-OK) says he intends to introduce a bill that would set up a national reporting network on fetal tissue transfers. Both tissue providers and researchers would be required to detail the source of the tissue and what, if any, fees were paid. A spokesperson for Coburn said the form would be “similar to the records a pharmacist has to keep” on sales of controlled substances.

  8. GLOBAL CHANGE

    Endorsement for Controversial Satellite

    1. Andrew Lawler

    The presidential campaign appears to be going into orbit. An Earth-monitoring satellite initially proposed by Vice President Al Gore won scientific endorsement last week from a panel of the National Research Council (NRC). President Bill Clinton's science adviser Neal Lane immediately hailed the study as “a rigorous analytic review” that gives a green light to the program and called for bipartisan support for the effort. But with Gore as the likely Democratic nominee, that may not materialize: Republican leaders in Congress have long opposed the venture, and Representative James Sensenbrenner (R-WI), who chairs the House Science Committee, promptly issued a statement complaining that the satellite “is not the best use of NASA's scarce science funding.” Nevertheless, space agency managers told Congress on 8 March that they are pushing ahead for a planned spring 2001 shuttle launch.

    Gore first proposed the satellite—named Triana for the member of Columbus's crew who first spotted land—in March 1998. The idea was that the spacecraft would beam back pictures of the whole planet, which would “awaken a new generation” to environmental concerns. NASA embraced the idea, then projected to cost only $50 million. But Gore's political nemeses on Capitol Hill, such as Sensenbrenner, complained that the program lacked scientific merit and hadn't been peer reviewed. Some said it would produce little more than an expensive screen saver. NASA tried to answer those criticisms by adding a host of global change instruments to accompany Triana's camera. Those instruments, selected through peer review, would measure Earth's vegetation and cloud coverage, and also the ozone and aerosol levels in its atmosphere.

    Those additions pushed the mission's price tag to $75 million, which only increased the furor over the project. The House voted last summer to cancel it, and a NASA inspector general's report criticized its rising cost (Science, 24 September, p. 2041). Although it was rescued at the last minute in a deal with the Senate, Congress ordered NASA to stop work on Triana for 90 days while the NRC studied the scientific merits of the venture. The committee, chaired by University of Michigan president emeritus James Duderstadt, provided a clear, though qualified, green light for the spacecraft.

    The panel concludes that Triana's instruments and its unique position—at a distance of 1.6 million kilometers it would be farther out than other Earth-observing satellites—could provide a fresh set of global data on everything from ozone to forest fires. Those data will “complement and enhance” those from other spacecraft closer to Earth, and “may well open up the use of deep-space observation points … for Earth science,” the study states. Duderstadt's panel also determined that the program's cost “is not out of line for a relatively small mission.” But the NRC panel also urges NASA to conduct more extensive testing of the satellite's components before launch, and it suggests that “there may be insufficient funding for scientific analysis of the data.” Sources familiar with the program add that funding woes could be compounded by the 90-day stand-down ordered by Congress, which will add $10 million to $15 million to Triana's costs.

    Despite these caveats, researchers involved in the effort are delighted with the report. “We can move ahead now,” says Francisco Valero, an atmospheric scientist with San Diego's Scripps Institution of Oceanography and the mission's principal investigator. He insists that Triana will be more than a politician's daydream, capable of providing new data that will help resolve the critical issue of whether human activities are contributing to global climate change. But with the presidential election campaign getting off the ground, Triana is unlikely to shake off its political stigma.

  9. On the Hunt for a Wolf in Sheep's Clothing

    1. Michael Balter

    The revelation that scrapie probably spawned “mad cow disease” and its human variant has renewed a campaign to eradicate scrapie before it or the mad cow pathogen—perhaps masquerading as scrapie—threatens people

    TRECASTLE, WALES, AND COMPTON, ENGLAND—As Welsh farmer David Jones opens a creaky iron gate, his black sheepdog, Zack, bolts up a hill toward a dozen ewes grazing near the top. The sheep take flight, disappearing over the crest with Zack in hot pursuit. They reappear a few seconds later, running down the hill with Zack nipping at their hind legs. “Don't bite them, Zack!” yells Jones, eager to make a good impression on his visitors. As the sheep cower next to the gate, Zack glaring at any woolly thing that dares to move, Jones leaps into the middle of the flock and collars a ewe with his blue shepherd's staff. “See these brown patches on top of her head?” he asks, pointing to where the animal has lost some wool. “That's the first sign of scrapie.”

    If Jones's diagnosis is confirmed, it would be the third case of this fatal neurodegenerative disease among his 700 ewes during the past year. Scrapie, which is the ovine counterpart to bovine spongiform encephalopathy (BSE), or “mad cow disease,” is a scourge that British sheep farmers—as well as their nation's health and agriculture officials—are treating with new respect. Like BSE in cattle and Creutzfeldt-Jakob disease (CJD) in humans, scrapie is linked to aberrant proteins called prions, which many researchers believe act alone to infect and destroy nervous tissues. BSE has apparently jumped the species barrier and caused more than 50 deaths in the United Kingdom and France from a new form of CJD in humans.

    Although there is no evidence that scrapie, which has afflicted British flocks for more than 250 years, can also infect people, the devastating experience with BSE provides little room for comfort: It demonstrated that animal prions can harm humans, which means that scrapie should not be disregarded as a potential health threat. Even more worrisome, prion researchers have been able to infect sheep with BSE; the resulting disease, which causes the animals to tremble and stagger and eventually die from a progressive loss of brain cells, closely resembles scrapie. This has raised serious concerns among public health officials that if a BSE epidemic did break out among sheep, it might not be recognized immediately.

    For these reasons, in 1996 the Spongiform Encephalopathy Advisory Committee (SEAC), an expert panel that advises the British government on transmissible prion diseases, recommended beefing up the low-profile scrapie research then under way, with the ultimate aim of eradicating the disease. A major scrapie initiative is now in progress at the Ministry of Agriculture, Fisheries, and Food's (MAFF's) Veterinary Laboratories Agency in Weybridge. And for the past 2 years, scientists at the Compton headquarters of the U.K.'s Institute for Animal Health (IAH) have been harnessing the tools of epidemiology, genetics, and mathematical modeling to lay the scientific groundwork for the task. Much of their initial work has meant studying scrapie at its source. IAH scientists attend sheep shows and slog through muck on sheep farms like the one Jones runs in Trecastle, looking for clues to how this poorly understood disease spreads and how it might someday be brought to heel.

    Although the research is in its infancy, the Compton team's preliminary findings are providing new clues to risk factors for scrapie infection. They have also bolstered previous studies showing that some sheep are endowed with genes that help them resist a scrapie infection, while others have genes that make them highly susceptible—a finding that might allow farmers to breed in resistant genotypes and breed out susceptible ones. The scientists are also on the lookout for BSE in sheep masquerading as scrapie.

    As they gather knowledge about this ancient killer, researchers are building bridges to farmers, whose cooperation will be essential to any eradication effort. Although the fractious dispute over BSE and British beef often sowed distrust between the two communities, the fight against scrapie is uniting scientists and farmers in a common cause. After all, says IAH mathematical biologist Angela McLean, who leads the Compton project, the primary goal is one that everyone agrees on: “We need to get prions out of the food chain.”

    A resilient enemy

    Despite scrapie's long history in the British Isles, surprisingly little is known about how it is transmitted from sheep to sheep and flock to flock, or what puts individual animals at risk, aside from evidence that ewes may infect lambs during pregnancy or just after birth. “You can't just dive in and try to eradicate scrapie instantly,” says Richard Cawthorne, MAFF's deputy chief veterinary officer. “We need to understand a lot more about the disease's basic mechanisms.” Indeed, the most obvious strategy, killing off infected sheep in hopes of purging the pathogen, has failed. For example, soon after the first scrapie cases appeared in the United States in 1947—apparently a result of imports of British sheep—the U.S. Department of Agriculture (USDA) declared a state of emergency and ordered the slaughter of whole flocks each time a single sheep fell ill. Yet this scorched-earth policy, which continued until the early 1980s, hardly made a dent in the U.S. scrapie toll.

    The mass slaughters discouraged farmers from reporting the disease and “drove it underground,” says veterinarian Linda Detwiler, coordinator of the USDA Animal and Plant Health Inspection Service's working group on transmissible prion diseases. The agency now relies on voluntary programs to identify and isolate infected flocks, and—as is current practice in the United Kingdom—only infected animals are killed. The average number of cases has dropped by about half in the past decade, but several dozen new cases are still reported each year among the 11.5 million sheep in the United States. “We have tried every scrapie control and eradication program known to mankind, but they did not work,” says Detwiler.

    On the hoof. Since notification became mandatory in 1993, yearly numbers of reported cases have edged up. CREDIT: MAFF

    Although scrapie flourished, the disease intrigued few researchers—until it became clear in the mid-1990s that people had become infected with BSE, apparently from eating meat or other products from infected cattle. “If the BSE epidemic had not occurred, I suspect scrapie would still be a scientific backwater,” says molecular biologist Chris Bostock, IAH's director.

    But there was more at play than merely a missed research opportunity. The persistence of scrapie may well be responsible for the rise of BSE in cattle. Many researchers believe that the BSE epidemic began when scrapie-infected meat, bones, and nervous tissue from sheep were added to cattle feed during the decade leading up to July 1988, when the practice was banned. Support for this idea emerged last year, when IAH researchers reported in the January 1999 issue of the Journal of General Virology that one “strain” of scrapie prion isolated during the 1970s—called CH1641—is strikingly similar to the BSE prion. Although the concept that protein-based prions come in strains like viruses or bacteria is controversial, researchers agree that certain characteristics—such as how many sugar molecules are bound to the prion's amino acids—generally remain stable even when the prions infect other species. The key evidence that humans have been infected with BSE, for example, came from the fact that prions isolated from the brains of variant CJD (vCJD) victims are nearly identical to BSE prions.

    Yet even if the BSE prions that jumped from cattle to humans originated in sheep, this does not necessarily mean that scrapie prions can infect humans. The most persuasive evidence that scrapie prions can't—for now—harm people is that decades of exposure to scrapie-infected sheep has not caused a single known infection in humans. Indirect support also comes from experiments demonstrating species barriers against prion infection: Mice fed BSE prions become infected, while hamsters do not. “There appears to be an absolute block between cows and hamsters,” says Bostock. However, if brain extracts from BSE-infected mice are fed to hamsters, they do become infected. Some researchers suggest that a similar block may exist between sheep and humans, but not between sheep and cattle: Scrapie prions must be modified somehow in cattle before they are capable of infecting people. Because cattle are no longer being fed ground sheep, the presumed BSE link between scrapie and vCJD should now be severed. Therefore, says Byron Caughey, a virologist at the U.S. National Institutes of Health's Rocky Mountain Laboratories in Hamilton, Montana, “there are no grounds for hysteria.”

    Lurking in the madding crowd? The scarcer BSE is in the sheep population, the harder it will be to flush it out. The bars indicate how many samples are required to be 95% certain of detecting at least one BSE case hidden among 5000 suspected scrapie cases. CREDIT: SEAC

    A far more serious concern is that sheep might harbor BSE prions. Until the practice was banned in 1988, the remains of both cows and sheep often wound up in ruminant feed. And sheep certainly are capable of being infected. In 1993, researchers at the IAH's branch in Edinburgh, Scotland, first reported that BSE could be transmitted to sheep via infected cattle brain extracts—results that have been confirmed by many other groups.

    There's no evidence yet that BSE is lurking in sheep, but if these prions do find their way into a new animal reservoir, they will be easy to miss. Although the number of reported cases of scrapie is relatively low—averaging about 500 per year among the U.K.'s 40 million sheep—an anonymous survey of more than 11,000 sheep farmers, carried out by the Compton and MAFF teams, indicates that the actual occurrence may be eight times the official numbers. Underreporting of scrapie cases, combined with the ability of BSE symptoms to masquerade as scrapie, makes it unlikely that BSE in sheep would be detected unless it amounted to more than 5% of scrapie cases, according to SEAC calculations. For these reasons, says McLean, “deep down” the Compton project “is about BSE in sheep.” Eradicating scrapie, she says, would be the best insurance against an undetected BSE epidemic in these animals.
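
    The SEAC threshold reflects a standard detection-probability calculation. The sketch below is an illustration of that kind of arithmetic using a simple binomial approximation, not SEAC's actual method: if a fraction p of suspected scrapie cases were really BSE, the chance of catching at least one in n tested brains is 1 - (1 - p)^n, so 95% confidence requires roughly n = ln(0.05)/ln(1 - p) samples.

        import math

        def samples_for_95_percent_detection(prevalence):
            """Smallest n with P(at least one BSE case among n samples) >= 0.95."""
            return math.ceil(math.log(0.05) / math.log(1.0 - prevalence))

        for percent in (5, 2, 1, 0.5):
            n = samples_for_95_percent_detection(percent / 100)
            print(f"BSE at {percent}% of scrapie cases -> test about {n} brains")

    For a finite pool of 5000 suspect cases, as in the SEAC figure, the exact numbers differ somewhat at low prevalences, but the shape of the problem is the same: the rarer BSE is among scrapie cases, the more animals must be examined to rule it out.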

    Ewegenics for rams?

    The key to defeating scrapie—and defusing the BSE threat—may lie in figuring out which sheep should be allowed to reproduce. Over the past decade, geneticist Nora Hunter of the IAH's Neuropathogenesis Unit in Edinburgh and other researchers have established that sheep vary in their susceptibility to scrapie depending on variations—called polymorphisms—in the nucleotide sequence of the gene coding for PrP, the normal protein that apparently causes disease if it converts to the abnormal, or prion, form (see sidebar). Some polymorphisms make sheep nearly impervious to scrapie, while others make them highly susceptible. Testing for these polymorphisms, and only allowing sheep with beneficial ones to breed, might help eradicate scrapie.

    Both supporting and complicating this picture are findings from a pilot study last year of four British flocks—two of which were scrapie-infected and two scrapie-free—led by IAH field epidemiologist Matthew Baylis. The surprising news is that the scrapie-free flocks had only slightly lower proportions of susceptible genotypes than did the scrapie-infected flocks. But the number of older animals with susceptible genotypes in the infected flocks was much lower than expected. This implied, the team concluded, that the real scrapie toll in these flocks had been much higher than thought—either due to underreporting or because farmers were not recognizing scrapie. If so, the two infected flocks must have started out with significantly higher proportions of younger susceptible sheep, which then succumbed to scrapie.

    These findings support the idea that selective breeding should at least cut the toll from scrapie, although researchers caution that there are many uncertainties. “Breeding for resistance might remove clinical signs of scrapie,” says Hunter, “but there is a slight worry that the resistant animals might still be carriers of infection.” And McLean says that although breeding programs might eradicate today's strains of scrapie, it's unknown whether the resistant sheep could withstand altered or “mutant” prions that might arise later. Still, says veterinary epidemiologist Linda Hoinville, who leads the scrapie epidemiology group at the Veterinary Laboratories Agency, “this is the most promising strategy at the moment.”

    McLean and her team have begun asking farmers like David Jones—who has spray-painted his sheep to mark those with susceptible or resistant genotypes—to breed lambs only from scrapie-resistant ewes and rams. And Hoinville's team believes it has evidence that this effort may pay off: In preliminary, unpublished work on one flock in which scrapie was rampant (carrying a whopping 6% infection rate at the outset), the Weybridge group found that only allowing resistant rams to breed reduced scrapie incidence in later generations to negligible levels. If these results are confirmed, it could mean that researchers and farmers may soon be able to write the final chapter on this deadly disease's 250-year history and the human health concerns that have become an unsettling subtext to the story.

  10. Writing Scrapie's Coda, Codon by Codon?

    1. Michael Balter

    Over the past decade, geneticists have begun to unravel why some sheep are more vulnerable than others to scrapie. They have found that different variations, or polymorphisms, in the gene coding for PrP—a cellular protein that many scientists believe becomes infectious when it converts to an abnormal form called a prion—seem to confer varying degrees of susceptibility. This correlation raises the possibility that genetically susceptible sheep could be bred out of the population, leaving only scrapie-resistant animals (see main text).

    Studies of sheep experimentally infected with scrapie have shown that three codons, or positions, in the PrP gene—codons 136, 154, and 171—are critical in determining whether the animal comes down with the disease. Each codon specifies one of the 256 amino acid positions in the sheep PrP protein. Individuals most vulnerable to scrapie have the amino acids valine, arginine, and glutamine at the respective positions dictated by the three codons. Using the single-letter code for amino acids, this polymorphism is referred to as VRQ. At the other extreme, sheep with the polymorphism alanine-arginine-arginine (ARR) are the most resistant. Indeed, out of hundreds of scrapie-infected sheep tested worldwide, only one, in Japan, has turned out to be ARR. Three other polymorphisms apparently lead to intermediate levels of vulnerability to the disease.
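
    For readers keeping track of the shorthand, the sketch below shows how a genotype at codons 136, 154, and 171 collapses to the three-letter code. Only the two extremes named in this article are labeled; everything else is treated as intermediate, which, as the breed effects described below make clear, is an oversimplification:

        # The PrP shorthand is simply the amino acids at codons 136, 154 and 171.
        # Susceptibility labels here are limited to what the article states.
        CODONS = (136, 154, 171)

        def prp_shorthand(residues_by_codon):
            return "".join(residues_by_codon[c] for c in CODONS)

        def susceptibility(code):
            if code == "VRQ":
                return "most susceptible"
            if code == "ARR":
                return "most resistant"
            return "intermediate (and, in practice, breed-dependent)"

        genotype = {136: "A", 154: "R", 171: "Q"}   # the ARQ polymorphism discussed below
        code = prp_shorthand(genotype)
        print(code, "->", susceptibility(code))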

    Muddling this neat picture, however, are some bizarre differences in the effect of polymorphisms in different sheep breeds. For example, Suffolk sheep with the genotype ARQ are susceptible to scrapie, whereas ARQ Cheviot sheep are resistant. “We really don't understand this,” says Nora Hunter, a geneticist at the Institute for Animal Health's Neuropathogenesis Unit in Edinburgh. Hunter and her colleagues are currently testing several hypotheses, including the possibility that the two breeds are being infected by different prion strains, or that Suffolk sheep may produce higher levels of PrP and thus have more protein available for conversion to the prion form.

    More clear, however, is why PrP polymorphisms correlate with scrapie susceptibility in the first place. Findings reported in the 17 July 1997 issue of Nature and in the 13 May 1997 Proceedings of the National Academy of Sciences show that the VRQ version of normal PrP protein is easily converted into the prion form when mixed with other prions in the test tube. (Most researchers studying prion diseases believe this mechanism is responsible for the creation of new prions in infected animals.) The ARR polymorphism, on the other hand, strongly resists this conversion, while polymorphisms corresponding to intermediate scrapie susceptibility fall in between. This biochemical confirmation of the importance of PrP polymorphisms has bolstered the view that breeding VRQ and other susceptible genotypes out of the sheep population might be the best course toward eradication. Says Hunter: “At the moment, there really isn't any good alternative.”

  11. QUANTUM MECHANICS

    'Spooky Action' Passes a Relativistic Test

    1. Charles Seife

    Framed by physicists trying to catch them in a contradiction, entangled photons stick to their quantum story

    The most unnerving idea in quantum mechanics may be “spooky action at a distance”—the notion that certain particles can affect one another almost instantly across vast reaches of space. Einstein spent half his life wielding such so-called nonlocal effects against quantum theory, and other physicists have followed suit on paper and in the lab ever since. Recently in Geneva, that aspect of quantum surreality survived the most cunning trap so far, a series of experiments that pitted it against basic principles of Einstein's relativity. The results gave the most accurate estimate yet of how rapidly the “spooky action” might operate.

    “I was excited to see that [the Geneva lab] had really done it,” says quantum computer expert Gilles Brassard, a professor at the University of Montreal. “Of course, if the outcome had been the other way, and the correlation disappeared, it would have been a Nobel Prize experiment.”

    The Swiss team probed an intimate linkage between quantum particles known as entanglement. To entangle photons, for example, a scientist might set up a device that creates them in pairs with opposite, but unknown, polarizations. If one photon is polarized vertically, then the other must be polarized horizontally, and vice versa. In the quantum world, however, a photon can exist in an undecided “superposition” of horizontal and vertical states, declaring its polarization only when somebody measures it. If one of the entangled photons then becomes (say) horizontally polarized upon being measured, the other photon must simultaneously decide to become vertically polarized—even if it is a billion light-years away. Somehow the first photon has sent a signal to its distant twin.
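
    Written out in standard textbook notation (not the experimenters' own), the polarization version of such a pair is the superposition

        % An entangled polarization pair: neither photon has a definite polarization,
        % but measuring one instantly fixes the other.
        |\psi\rangle \;=\; \frac{1}{\sqrt{2}}\Bigl( |H\rangle_1 |V\rangle_2 \;+\; |V\rangle_1 |H\rangle_2 \Bigr)

    Finding photon 1 horizontally polarized collapses the pair onto the first term, so photon 2 must turn up vertically polarized, however far away it is.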

    Such linked particles are known as EPR pairs, after Einstein and two collaborators who used them in 1935 to attack quantum theory on the grounds that they violated the relativistic ban on faster-than-light communication. (Physicists later showed that there is no violation, because the “quantum information” EPR pairs carry can't be harnessed to send useful messages.) In 1996, however, two Swiss physicists, Antoine Suarez and Valerio Scarani, proposed that EPR pairs run afoul of another cornerstone of Einstein's theory, the relativity of time.

    Einstein showed that the flow of time, and even the order of events, depends on how fast an observer is moving. Suarez and Scarani dreamed up a thought experiment in which an experimenter creates an entangled pair of photons and fires each toward a different particle detector. If both detectors are standing still, it is easy to tell which particle arrives first. But if one detector is moving close enough to the speed of light, Suarez and Scarani showed, relativistic distortions can create a situation in which each particle sees itself reaching its detector while the other particle is in midflight. Each particle will think that it drops out of superposition first, chooses its polarization, and then signals the other particle to assume the opposite polarization.
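
    The trick rests on the relativity of simultaneity. In standard textbook notation (not the authors'): if the two detections are separated by a distance L and by a lab-frame delay Δt, an observer moving at speed v along the line joining them measures the delay as

        \Delta t' \;=\; \gamma\!\left( \Delta t - \frac{vL}{c^{2}} \right),
        \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}

    Because the detections are too far apart for light to connect them in the time Δt, there is always a sublight speed at which Δt′ changes sign, so each detector can consistently regard its own detection as the earlier one—the “before-before” situation the Geneva team set out to create.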

    Mad whirl.

    In the Swiss experiment, a spinning drum gave photons a relativistic outlook.

    If two particles disagree about who is the sender and who is the receiver, how can they be communicating? They can't, Suarez and Scarani said. Contrary to the standard interpretations of quantum mechanics (which require that the particles somehow stay entangled even in such a “before-before” situation), entanglement will break down, and each particle's fate will become independent of the other's.

    The Geneva scientists put the rival outcomes to an experimental test. In their lab, they shot a laser beam into a crystal made of potassium, niobium, and oxygen. On absorbing a photon from the laser, the crystal spat out two entangled photons, which sped down different fiber-optic cables to detectors in the nearby villages of Bernex and Bellevue, 10.6 kilometers apart. What entangled the photon pairs in this case was not polarization, but energy and timing, which are tied together quantum mechanically. “Each of the photons has an uncertain energy, but the sum of the energy of the two photons is very well defined,” says Nicolas Gisin, a member of the team.

    By measuring with incredible precision (about 5 picoseconds) when various photons in the beams reached the detectors, the scientists could tell which photons had been entangled. Almost incidentally, they determined that quantum information must travel faster than 10 million times the speed of light. Otherwise, the correlation between entangled particles would have broken down.
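
    The order of magnitude of that bound can be recovered from the figures quoted above with a rough back-of-the-envelope calculation (an illustration only, not the team's published analysis): any signal coordinating the two outcomes would have to cross the 10.6 kilometers within roughly the 5-picosecond timing resolution.

        distance_m = 10.6e3   # Bernex-Bellevue separation
        timing_s = 5e-12      # quoted timing precision
        c = 2.998e8           # speed of light, in meters per second
        v_min = distance_m / timing_s
        print(v_min / c)      # roughly 7e6, i.e., on the order of 10 million times c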

    Then the fun began. To create a relativistic “before-before” anomaly, the scientists in effect cranked the Bellevue detector up to relativistic speeds by making it spin in place at 10,000 revolutions per minute. An ordinary measuring device would have flown apart in seconds, so the Geneva team made a clever—and controversial—substitution. Using a device called an interferometric analyzer, they split the Bellevue beam again, sending the photon down two paths simultaneously (as wavelike particles are wont to do). One path led to a dummy detector made of a sheet of black paper wrapped around a whirling drum, and the other to a stationary detector. A nonclick at the stationary detector meant that the photon had struck the black paper. Thus, Gisin says, “there is not a real difference between an absorbing black surface and a detector.”

    Because that surface was spinning away from the crystal that entangled the photons in the first place, its motion created a relativistic before-before situation. Photons in Bellevue were convinced they had arrived before their EPR partners in Bernex, and vice versa. Then negative readings at the stationary detector told the experimenters exactly which pairs of photons had been entangled. “The ‘no click’ on the detector—that is the only information we need,” Gisin says.

    Gisin's team measured the correlations between the Bernex and Bellevue photons, both when the dummy detector was stationary and when it was spinning. The result: Entangled photons stayed entangled, even if each thought it had struck a detector first. Einstein's frames had no effect on spooky action.

    Other scientists say the results are impressive but not quite airtight. “They are very beautiful from an experimental point of view, but there are too many assumptions,” says Anton Zeilinger, a physicist at the University of Innsbruck in Austria. For example, no one is sure that a moving piece of paper is, in fact, as good as a moving detector.

    The experimenters also assumed, as most physicists do, that the photon chooses its quantum state at the moment it strikes a detector. In some formulations of quantum mechanics, however, the photon makes its choice at other points in the experiment—even as late as the time when a conscious being finally looks at the data on the computer. Zeilinger hopes to narrow the possibilities, perhaps by inserting rapid, randomly activated switches into the experimental setup.

    However they interpret the results, scientists agree that the Geneva experiments are a technological feat. “This is, in a certain sense, a new line in experimental work,” says Suarez. “You are putting quantum mechanics in a relativistic frame.”

  12. MATHEMATICS

    Why Double Bubbles Form the Way They Do

    1. Barry Cipra

    Need to entertain a child? Try blowing soap bubbles. Need to keep a mathematician busy? Just ask why bubbles take the shapes they do. Individual soap bubbles, of course, are spherical, and for a very simple reason: Among all surfaces that enclose a given volume, the sphere has the least area (and in the grand scheme of things, nature inclines toward such minima). On the other hand, when two soap bubbles come together, they form a “double bubble,” a simple complex of three partial spheres: two on the outside, with the third serving as a wall between the two compartments. Scientists have long considered it obvious that double bubbles behave this way for the same minimum-seeking reason—because no other shape encloses two given volumes with less total surface area. But mathematicians have countered with their usual vexing question: Where's the proof?
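
    A quick calculation makes the sphere's advantage concrete: for the same enclosed volume, a cube needs roughly 24% more surface area than a sphere (a minimal illustration, not part of the proof discussed below).

        import math

        V = 1.0                                  # any fixed volume
        r = (3 * V / (4 * math.pi)) ** (1 / 3)   # radius of the sphere enclosing V
        area_sphere = 4 * math.pi * r ** 2       # about 4.84 for V = 1
        area_cube = 6 * V ** (2 / 3)             # cube of the same volume: 6.0
        print(area_cube / area_sphere)           # about 1.24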

    Now they have it. An international team of four mathematicians has announced a proof of the double bubble conjecture. By honing a new technique for analyzing the stability of competing shapes, Michael Hutchings of Stanford University, Frank Morgan of Williams College in Williamstown, Massachusetts, and Manuel Ritoré and Antonio Ros at the University of Granada have shown that only the standard shape is truly minimal—any other supposedly area-minimizing shape can be twisted ever so slightly into a shape with even less area, a contradiction that rules out those other candidates.

    What other shape could two bubbles possibly take? One candidate—or class of candidates—has one bubble wrapped around the other like an inner tube. But it could be even worse: Mathematically, there's no objection to splitting a volume into two separate pieces, so it's possible that siphoning off a bit of the central volume and reinstalling it as a “belt” around the inner tube would actually reduce the total surface area. And conceivably, then, siphoning a bit of the inner tube and placing it as a band around the belt would lead to smaller area yet, and so forth. There's not even any obvious reason that the true, area-minimizing double bubble can't have “empty chambers”—enclosed regions that don't belong to either volume.

    Just about the only thing that's (relatively) easy to prove is that the solution must have an axis of symmetry—in other words, it can't have lopsided bulges. Hutchings took the first big step toward ruling out the more bizarre possibilities in the early 1990s. He ruled out empty chambers and showed that the larger volume must be a single piece. Besides the standard double bubble, his results limited the possible solutions to ones consisting of a large inner tube around a small central region, perhaps with a set of one or more belts circling the outside. Hutchings also found formulas that provide bounds on the number of belts, as a function of the ratio of the two volumes. In particular, if the two volumes are equal, or even nearly equal, there can be no belts, so the only alternative is a single inner tube around a central region.

    Based on Hutchings's work, in 1995 Joel Hass of the University of California (UC), Davis, and Roger Schlafly, now at UC Santa Cruz, proved the double bubble conjecture for the equal-volume case. Their proof used computer calculations to show that any inner tube arrangement can be replaced by another with smaller area. “Ours was a comparison method,” Hass explains. He and Schlafly found they could extend their results for volume ratios up to around 7:1, but beyond that the possible configurations to be ruled out became too complicated.

    Surprisingly, the general proof requires no computers, just pencil and paper. The key idea consists of finding an “axis of instability” for each inner tube arrangement. Twisting the two volumes around this axis—with a motion rather like wringing out a washcloth—leads to a decrease in surface area, contradicting the shape's ostensible minimality. “We always thought that these remaining cases were unstable,” Morgan says. The proof confirms their suspicions, although it leaves open the possibility that some nonminimizing configuration could also be stable. The twisting argument is new and a bit subtle, Morgan notes. The hardest part was figuring out where to position the axis of instability so that the twisting would reduce the surface area without also changing the volumes of the two regions. “For a while, it was hard to frame the right questions, especially in Spanish.”

    Although the proof is only now being announced, the main results were established last spring, when Morgan visited Granada during a sabbatical. Since then, a group of undergraduates in a summer research program at Williams College has extended the results to analogs of the double bubble conjecture in higher dimensions. (The two-dimensional double bubble conjecture was proved by an earlier group of undergraduates in 1990.) Ben Reichardt of Stanford, Yuan Lai of the Massachusetts Institute of Technology, and Cory Heilmann and Anita Spielman of Williams College have shown that an axis of instability always exists for nonstandard shapes in the four-dimensional case, and also in higher dimensions under the mild assumption that the larger volume consists of a single, connected region.

    What about triple bubbles? Once again, nature provides a relatively simple and obvious answer, but, Hass notes, “we don't know how to get started” proving it. The triple bubble problem is open even in two dimensions with equal-sized areas (for example, what's the least amount of fencing required to create three acre-sized pens to separate, say, sheep from goats from hippopotami?). And it gets less certain from there, Hass says. “Once you get up to 20 or 30 regions, we don't even have a conjecture.”

  13. MATHEMATICS

    Triple Star Systems May Do Crazy Eights

    1. Dana Mackenzie*
    1. Dana Mackenzie is a writer in Santa Cruz, California.

    The ancient Greeks spoke of the “music of the spheres,” a mystical harmony supposedly produced by the stars circling Earth. This theory did not survive the Copernican revolution, but mathematicians have now produced a modern counterpart: the dance of the stars. They have proved that three stars can chase each other forever in a figure-eight pattern, with each one passing between the other two in turn—and that this orbit is stable. Somewhere in the universe, as yet unnoticed by Earth-based astronomers, a trio of stars could be dancing a Scottish reel.

    The orbits of multiple stars have long puzzled mathematicians and astronomers. Isaac Newton's theory of gravitation explained well enough why binary stars orbit each other in ellipses. But for 300 years, the only kinds of stable, repeating orbits known for groups of three or more stars have been minor variations on the Newtonian theme. For example, in Alpha Centauri—the nearest star system to us—a small third star makes large elliptical loops around a stalking pair of sun-sized giants.

    The problem of computing the motion of three objects, interacting solely according to Newton's inverse-square law of gravitation, is known as the “three-body problem.” Scientists find approximate solutions every day with computers, but finding and proving exact solutions is notoriously difficult. For this reason the class of proven periodic solutions, in which the objects return exactly to their initial starting places after a certain amount of time, has remained embarrassingly small. One such example occurs when three bodies form an equilateral triangle, an arrangement known as the Lagrange configuration. Such orbits have been used for satellites and are seen in the moons of Saturn. Another class of periodic orbits, in which one of the three objects is extremely small compared to the other two, was discovered about a century ago by the French mathematician Henri Poincaré. These take the form of slightly perturbed Newtonian ellipses.
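
    The equilateral case is easy to check directly. The sketch below is a minimal verification for the special case of three equal masses, with units chosen so that G = 1 (nothing here comes from the article itself): a rigidly rotating equilateral triangle balances gravity against centripetal acceleration when the square of the rotation rate equals G times the total mass divided by the cube of the side length.

        import numpy as np

        # Equal-mass Lagrange configuration: an equilateral triangle rotating rigidly
        # about its center solves Newton's equations when omega**2 = G * M_total / L**3.
        G, m, L = 1.0, 1.0, 1.0
        omega = np.sqrt(G * 3 * m / L ** 3)
        pos = np.array([np.exp(2j * np.pi * k / 3) for k in range(3)]) * (L / np.sqrt(3))

        grav = np.zeros(3, dtype=complex)
        for i in range(3):
            for j in range(3):
                if i != j:
                    r = pos[j] - pos[i]
                    grav[i] += G * m * r / abs(r) ** 3
        centripetal = -omega ** 2 * pos           # acceleration of uniform circular motion
        print(np.allclose(grav, centripetal))     # True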

    In the decades since then, no fundamentally new periodic orbits have been found. In fact, mathematicians had moved in the opposite direction, discovering a wide variety of inherently unpredictable, chaotic orbits (especially as more bodies are added to the problem). Searching for periodic orbits began to look old-fashioned.

    Three-body solution.

    Assigning points on a sphere to triangles formed by three-star systems (top) allowed mathematicians to find a stable figure-eight orbit.

    CREDIT: (BOTTOM) R. MONTGOMERY

    “As a graduate student, I never wanted to work on the three-body problem,” says Richard Montgomery, a mathematician at the University of California, Santa Cruz. “It felt to me like digging in graves. There were 300 years of history, and you never knew whose work you were repeating.” But a colleague suggested that an idea Montgomery had used to work on another old chestnut—the problem of how a cat lands on its feet—could apply to the three-body problem as well.

    The idea was not to study the three motions separately, but to study how the shape of the triangle formed by all three stars evolves. Each shape can be represented by a single point on the surface of a sphere, called a “configuration space” (see figure). The sphere's north and south poles correspond to the two possible equilateral triangles, points at the equator correspond to arrangements in which all three stars lie on a single line, isosceles triangles run along six of the meridians, and all the intermediate triangles are ranged in between. As the stars move in an orbit, the triangle they form may change, and so the corresponding point in configuration space may move around on the sphere as well. The Lagrange configuration, because it is always an equilateral triangle, does not change its shape at all and thus is represented by a single point in configuration space (the north or south pole). A system of an eclipsing binary and a distant companion, like the Alpha Centauri system, makes only a small loop in configuration space—it passes through the equator twice, once for each time the three stars line up to form an eclipse. But Montgomery wondered whether any three-body system could ever trace a more complicated figure.
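
    For readers who want to see the mapping explicitly, the sketch below shows one standard way to send a triangle of three equal masses to a point on the shape sphere, via Jacobi coordinates and the Hopf map (an illustrative choice of conventions, not necessarily the exact ones Montgomery uses): equilateral triangles land at the poles and collinear configurations on the equator.

        import numpy as np

        def shape_sphere_point(x1, x2, x3):
            # Positions given as complex numbers in the plane; equal masses assumed.
            z1 = (x3 - x2) / np.sqrt(2.0)            # first Jacobi coordinate
            z2 = (2 * x1 - x2 - x3) / np.sqrt(6.0)   # second Jacobi coordinate
            w = np.conj(z1) * z2
            norm = abs(z1) ** 2 + abs(z2) ** 2
            # Hopf map: the last component is proportional to the triangle's signed area,
            # so it vanishes exactly for collinear triangles (the equator) and is largest
            # in magnitude for equilateral ones (the poles).
            return np.array([2 * w.real, abs(z1) ** 2 - abs(z2) ** 2, 2 * w.imag]) / norm

        print(shape_sphere_point(1, np.exp(2j * np.pi / 3), np.exp(4j * np.pi / 3)))  # a pole
        print(shape_sphere_point(0, 1, 2))                                            # on the equator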

    After several false starts, Montgomery, together with Alain Chenciner of the Bureau des Longitudes in Paris, found a way to do it. Assuming that the three bodies all have equal mass, the orbit is symmetrical enough that the entire orbit can be derived from a small piece of itself—the first twelfth. To derive the first piece of the orbit, Montgomery and Chenciner used a version of the classical “principle of least action,” which essentially reduced to finding the shortest path between two points in configuration space.

    Although Montgomery's path in configuration space was relatively complicated, Chenciner soon realized that the individual stars in real space traced out simple figure eights. The two announced their find at a December conference in honor of the 60th birthday of Donald Saari, a leading expert on celestial mechanics. But shortly before the conference, they received a “birthday present” of their own, when Carles Simó of the University of Barcelona drew the first accurate computer rendition of the orbit and showed that it remained stable even if the bodies changed their mass or their initial position slightly. This was quite unexpected, because the original argument required equal masses. Moreover, it means that the solution could conceivably be observed in the universe. But the window of stability is very small. “It becomes unstable if the mass of any one of the bodies differs by more than one part in 100,000,” says Joseph Gerver, a mathematician at Rutgers University in Camden, New Jersey. “Hence it seems unlikely that any real stars follow such an orbit. On the other hand, the universe is a big place, so who knows?”
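
    A curious reader can reproduce Simó's picture numerically. The sketch below integrates Newton's equations for three unit masses (with G = 1) starting from often-quoted approximate initial conditions for the figure-eight orbit; the numbers are illustrative values, not taken from the article itself.

        import numpy as np

        # Commonly quoted approximate initial conditions for the equal-mass figure eight
        # (three unit masses, G = 1); the period is roughly 6.33 time units.
        x = np.array([[-0.97000436,  0.24308753],
                      [ 0.97000436, -0.24308753],
                      [ 0.0,         0.0       ]])
        v = np.array([[ 0.46620369,  0.43236573],
                      [ 0.46620369,  0.43236573],
                      [-0.93240737, -0.86473146]])

        def accel(pos):
            a = np.zeros_like(pos)
            for i in range(3):
                for j in range(3):
                    if i != j:
                        r = pos[j] - pos[i]
                        a[i] += r / np.linalg.norm(r) ** 3
            return a

        dt, steps = 1.0e-4, 63300          # about one full period
        a = accel(x)
        for _ in range(steps):             # leapfrog (velocity Verlet) integration
            v += 0.5 * dt * a
            x += dt * v
            a = accel(x)
            v += 0.5 * dt * a
        print(x)                           # each body returns close to where it started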

  14. MATHEMATICS

    Random Packing Puts Mathematics in a Box

    1. Charles Seife

    Anyone who's been on a crowded subway has unwillingly experienced random close packing. Mathematicians and physicists, on the other hand, relish the subject. For decades, they have been arguing about a simple version of the crammed subway car: How closely can you pack randomly arranged spheres into a box? Now a team of engineers appears to have settled the debate with a surprising answer: There is no single answer.

    Visit any supermarket, and you'll see that the grocer already knows how to pack oranges or grapefruit—or any other uniformly sized spherical object—in the most efficient way possible. The little pyramids of oranges are packed in the so-called face-centered cubic configuration, in which only about 26% of the pile is empty space. In 1611, Johannes Kepler wrote a booklet called The Six-Cornered Snowflake, in which he guessed that this was the tightest packed configuration possible. Two years ago, Michigan mathematician Thomas Hales proved Kepler's conjecture: It's impossible to pack spheres so that the “packing fraction” is more than about 74% (Science, 28 August 1998, p. 1267).
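
    The 74% figure follows from simple geometry. In the face-centered cubic arrangement, a cubic cell whose face diagonal is two sphere diameters long contains the equivalent of four spheres, and the short calculation below confirms the packing fraction.

        import math

        r = 1.0                                              # sphere radius
        a = 2 * math.sqrt(2) * r                             # cell edge: face diagonal = 4 * r
        fraction = 4 * (4 / 3) * math.pi * r ** 3 / a ** 3   # four spheres per cubic cell
        print(fraction)                                      # ~0.7405, i.e., about 26% empty space
        print(math.pi / (3 * math.sqrt(2)))                  # the closed form agrees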

    Kepler could rest easy, but mathematicians and physicists kept arguing about a related problem: How tightly can you pack spheres if you dump them randomly into a box? Beginning in the 1960s, experimenters put ball bearings and other spheres in rubber balloons, shook them into boxes, and simulated them on computers. Their conclusion: The maximum packing fraction was about 64%. This “maximally packed” state was dubbed random close packing. Yet scientists couldn't agree on exactly what that state was. “If you look in the literature, people ended up getting different values,” says Salvatore Torquato, a materials scientist at Princeton University. Most recently, in 1997, researchers at the École Polytechnique in France showed that they could get packings as high as 67% by shaking their apparatus in different ways. However, despite these differences, most people in the field still assumed that there was a universal constant, a maximum random close packing fraction.

    Using computer simulations of spheres being compressed in a box at different speeds, Torquato and his colleagues show that there is no such constant. “What we found was that you can go way beyond what we thought was the maximum,” says team member Pablo Debenedetti, a chemical engineer at Princeton. In the experiment, described in Physical Review Letters, the team got higher and higher packing fractions by compressing the spheres ever more gently, finally approaching the ultimate limit set by Kepler.

    “What we conclude is that you can always pack things more and more densely, but you get more and more order,” says Debenedetti. That is, “random” and “close packed” are not independent concepts; looking for the maximally close-packed random collection makes no more sense than searching for the tallest short guy in the world. “The fact that there's a maximal value turns out to be ridiculous,” says Torquato. “It's not mathematically well defined.”

    “The assumption had been that there was a unique random closest packing number, but I think Torquato and his collaborators have unequivocally demonstrated that this is not the case,” says Frank Stillinger, an engineer at Lucent Technologies in New Jersey. Even though the lab experiments and simulations got values of roughly 64%, it was due to the laboratory conditions rather than to any universal rule—which explained why the experimenters never could quite agree on the true value.

    Torquato and colleagues suggest a more precise way of approaching the problem. Instead of looking at “close packing,” they investigate “jammed” states, where no spheres are free to rattle around if you shake the box they are in. Not only might there be a jammed state that is maximally random—the analogous, but more precise, concept to a random closest packed state—but there might also be some jammed structures that have a very low packing fraction. “They would be jammed but have an enormous amount of open space,” says Stillinger. Straphangers, take heart.

  15. GEOLOGIC MODELING

    Seeing a World in Grains of Sand

    1. Erik Stokstad

    Sophisticated physical models of how sediment flows through rivers into the sea are offering high-tech views into the genesis of complex stratigraphy

    Housed in a cavernous laboratory on the banks of the Mississippi is one of the most expensive sandboxes in the world. The rectangular tank is half the size of a tennis court, can hold some 200 tons of sand, and cost about half a million dollars to build. Dubbed “Jurassic Tank,” the apparatus is on the leading edge of a new generation of physical models that can simulate the rise and fall of sea level, the effects of swings in climate, and the sinking of tectonic plates.

    In initial runs with this new device, which has just been completed, sedimentary geologist Chris Paola, civil engineer Gary Parker, and their team at St. Anthony Falls Laboratory in Minneapolis are creating scaled-down versions of complex geology to figure out how intricate patterns of sediment layers are deposited by rivers and ocean currents. “You could think of the stratigraphic record as an old violin: You can examine it, take it apart, analyze it, model it, but even with all that, you still aren't entirely sure how it was made,” Paola says. “Now, just imagine you could watch Stradivari at work.”

    Although by no means a perfect simulation of the real world, the sandboxes are the first attempt to reproduce entire sedimentary basins in a quantitative way. The effort is decidedly high-tech. The Minnesota scientists map the evolving landscape with a laser and, as they painstakingly dissect the fresh terrain with a meter-long blade akin to a giant cheese slicer, capture digital images that can be reassembled into a three-dimensional (3D) computer mock-up. Researchers can then fly through the virtual underground strata and examine any type of sediment layer, such as potentially oil-rich sands.

    What makes Jurassic Tank truly unique, though, is a computer-controlled floor that sinks or deforms with a precision of 100 micrometers. By warping this flexible surface, the team can reproduce the effects of rising plumes of subsurface salt, a gently subsiding continental margin, or almost any other sort of tectonic activity. This ability will help researchers figure out how sea level and basin tectonics conspire to build different kinds of stratigraphy. “That's a major question,” says Rudy Slingerland, a sedimentary geologist at Pennsylvania State University, University Park. “This is a perfect methodology to answer it.”

    That sinking feeling

    Jurassic Tank has a long pedigree. Generations of geologists have studied faults by squeezing boxes of sand and clay, recreated colliding plates with wax-filled trays, modeled subduction zones by pouring together fluids of various densities, and built miniature river systems to shed light on the secrets of meanders and flooding.

    Yet stratigraphic modeling lacked a key factor: subsidence. For sediment to accumulate without being eroded by waves or wind, either the basin bottom must sink or the water level rise. High sea levels tend to fall again, though, and then erosion wipes away marine deposits. The most reliable way to keep sediment intact for millions of years is for crustal plates to sink. “Subsidence is the absolute sine qua non for creating and preserving stratigraphy,” Slingerland says.

    Geologists can glimpse how subsidence operated in the field, and they can mimic it with computers. Most computer models, however, offer only cross-sectional views into stratigraphy. Physical models provide a 3D perspective of an entire basin. And compared with algorithms, they are comfortably solid, as wet and dirty as the real world they seek to simulate.

    A sandbox renaissance began in the early 1990s. For example, when Shell Research & Technology Services of Rijswijk, the Netherlands, wanted to verify and calibrate a 2D mathematical model of sediment transport, the company approached George Postma, a sedimentary geologist at Utrecht University, and asked him to build a tank that would track sand migration as sea level fluctuated. The 8-meter-by-4-meter basin was kept top secret for 2 years. A team at Colorado State University, Fort Collins, was already doing experiments on the effects of changing sea level on valley formation, while researchers at the University of Leeds, United Kingdom, also ran experiments simulating the deposits of rivers.

    In 1994, Paola was developing computer models of stratigraphy and experimenting with river deposits. Working at St. Anthony Falls Laboratory, an arm of the University of Minnesota that specializes in physical experiments about hydraulics, he and Parker had an idea: build a large basin whose bottom could drop, mimicking subsidence and creating stratigraphy. “We sat around for 3 to 4 months and brainstormed about how to do it,” Paola recalls.

    Then a team member from the Midwest, Jim Mullin, was inspired by agricultural silos. Taking a cue from the way a pile of grain collapses in a hopper as seeds fall out from below, he developed a honeycomb of gravel-filled funnels. A computer controls the flow out of each funnel, and the dimples from adjacent funnels merge to make any desired landform. A year later, they had a prototype with 10 subsidence funnels that could evolve a natural-looking basin floor.
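
    The bookkeeping behind that control is straightforward in principle. The sketch below is purely hypothetical (the footprint value and function name are invented, and nothing here comes from the St. Anthony Falls software): to lower the floor above one funnel by a chosen increment, the computer releases a gravel volume equal to that drop times the funnel cell's footprint.

        # Hypothetical illustration of per-funnel control, not the lab's actual code.
        cell_area_m2 = 0.09   # invented footprint for one funnel cell (30 cm by 30 cm)

        def gravel_to_release(target_drop_m):
            # Volume of gravel to let out so the floor above this funnel
            # subsides by target_drop_m during the next time step.
            return target_drop_m * cell_area_m2

        print(gravel_to_release(0.001))   # a 1-millimeter drop needs 9e-05 cubic meters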

    Where's the beach?

    In two preliminary runs, the tank has shown that it can realistically reproduce a number of geologic processes. Within its geological microcosm, networks of so-called braided streams whisk sand to deltas that mirror those found in the Gulf of Mexico and elsewhere. Shorelines and river systems shift and swerve in ways that geologists in the field would recognize—if they could watch landscapes evolving over millions of years.

    The first experiment, in fact, gave field geologists some welcome reassurance about the interplay between shoreline location and sea level. Shorelines are a key feature of paleogeography, and they yield clues about a smorgasbord of geologic questions, including the volume of oceanic spreading centers, the ebb and flow of ice sheets, and the size and quality of potential petroleum reservoirs.

    Ancient shorelines can be readily though crudely tracked in outcrops. A cliff where deposits of beach sand are topped by deep-sea clays tells a geologist that water got deeper, either because sea level rose or because the land sank. Whichever it was, the shoreline must have moved inland. If ancient shorelines in basins around the world show the same sorts of changes at the same time, geologists can conclude that they have found a global signal—a literal sea change.

    Or can they? In 1978, Walter Pitman of the Lamont-Doherty Earth Observatory in Palisades, New York, proposed that the commonsense scenario might be wrong: Over the eons the rise and fall of the oceans might redistribute sediment on the coast so that shorelines wind up misbehaving, even moving seaward when oceans rise. Long-held interpretations of sea level might be all wet.

    Field geologists were stymied. Pitman's theory was a thorn in their side, but they could not remove it. They could not date deposits precisely enough to tell whether changes in the shoreline were subtly out of synch with sea level. Nor was anyone sure exactly how long, for typical continental margins, the oddball behavior would take to appear—whether tens of thousands or millions of years. In some cases, the knowledge of ancient sea levels was shaky, too.

    The St. Anthony Falls team set out to test Pitman's idea. For about a year, they honed the subsidence mechanism, wrote software to deform the floor precisely, and built a prototype basin. Then it took just a few days to carefully load the funnels with gravel, smooth out the floor, and fill hoppers with sand. After the 54-hour run was finished and the tank drained, researchers spent about 2 months slicing the 1.3-meter-long stack of sand layers like salami to produce a “Visible Basin” akin to the online “Visible Human Project.”

    The results, which will appear next month in Sedimentology, should reassure geologists trying to figure out how global sea levels changed in the past, Paola says. Over simulated eons, as the modeled ocean rose and fell, the shoreline stayed in synch. But some Pitman-like oddities did crop up. When rapidly rising sea level began to slow down, for example, the shoreline stopped moving inland and began to migrate toward the sea. According to observations of the tank as well as a mathematical model developed by team members John Swenson and Vaughan Voller, the decelerating sea level upset the subtle balance among subsidence, sea-level rise, and sediment supply that maintains the shoreline. Once the rate of sea-level rise had slowed down enough, sand pouring out of the rivers was able to force the beach outward. For the most part, however, the shoreline behaved just as common sense said it should.
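
    The balance the team describes can be caricatured in a few lines. The toy model below is a deliberately crude one-dimensional cartoon with invented numbers, not the Swenson-Voller model: sediment supply stays constant while a fast sea-level rise decelerates, and the shoreline first steps landward, then migrates back toward the sea once the rise has slowed enough.

        import math

        supply = 1.0            # sediment delivered per time step (arbitrary units)
        subsidence = 0.005      # background subsidence rate
        shoreline = 100.0       # distance from the sediment source to the shore

        for t in range(400):
            sea_level_rise = 0.02 * math.exp(-t / 100.0)   # fast rise that decelerates
            # space created landward of the shore that the rivers must fill
            accommodation = (subsidence + sea_level_rise) * shoreline
            shoreline += supply - accommodation            # a surplus pushes the shore seaward
            if t % 50 == 0:
                print(t, round(shoreline, 1))              # retreat at first, then advance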

    The postmortem showed real-world features underground as well. Once the stratigraphy was sectioned, the team noticed a series of dramatic faults. “It was completely unexpected,” Paola says. “For most of the experiment there was no indication whatsoever on the sediment surface of the presence of the faults.” These so-called growth faults represent slow, continuous subsurface deformation as sediment piles up. Not only are growth faults a hazard for cities like Houston, which is built on a thick accumulation of sediments, but they also strongly influence the distribution of sand reservoirs that host hydrocarbons. Oil companies would like to know how the faults might shift sediment layers, perhaps isolating sand bodies.

    During the shakedown run, the team used the 10-funnel baby Jurassic Tank. Last April, the team finished constructing a full-size basin, which is now 6.5 meters by 13 meters and has 432 subsidence funnels. In the first test of the completed tank, the researchers turned from shorelines to rivers. They created a 4-meter-long river system in a valley that was sinking faster downstream, like a hinged trap door. The shifts in terrain were subtle. Even at the fast end, the basin was dropping only about a centimeter per hour, too slow for anyone to notice. Instead, most observers were mesmerized by the constant shifting of the river channels. The spell was broken every minute or so as gravel was shot out of the funnels to lower the river valley. “It sounds like money falling out of a slot machine,” Paola says. The 218-hour experiment featured a light show too: A laser beam scanned the surface every 40 minutes to map the topography.

    Team members are still slicing the deposits. They hope to find new ways in which stream-borne sands that harden into rock can help them figure out how a basin has subsided over time. “If there was a way you could walk up to an outcrop and say it was a period of high or low subsidence, that would be awfully convenient,” Paola says. So far, no clear-cut stratigraphic giveaways have turned up. However, the model has supported one well-established pattern—but with a twist.

    River channels should be drawn to the area of highest subsidence. But according to initial results to be presented next month at the annual meeting of the American Association of Petroleum Geologists by team members Ben Sheets and Tom Hickson, channel sands seem most concentrated somewhere else—a region between the center of the basin and the axis of greatest subsidence. Apparently the rivers are torn between attraction to the fastest subsidence and the urge to shoot straight down the center of the basin, the shortest route to sea. More findings should appear soon.

    Get real

    Compelling as it may be to watch sand grains bouncing down a Lilliputian river, Paola and his colleagues acknowledge that Jurassic Tank will never be anything but a simplification of the real world. The tank contains no clay (it would just float through without settling), and so the sediment can't stick together. Bays and groundwater can't precipitate carbonate sediments. No plants can sprout and stabilize the model riverbanks. As a result, no meandering rivers can form, only braided streams. There is no single scale for translating model time into geologic time. This means experimental models are a kind of lens, Paola says. “They give us a glimpse—distorted and imperfect, certainly, but nonetheless formed of real grains and real water—of processes that in nature occur on time scales as far removed from human experience as the depths of space.”

    But the tank is getting more realistic. The St. Anthony Falls team will add waves and currents this summer, to better simulate the continental shelf. The group is also trying to figure out how to make a kind of undersea avalanche called a turbidity current form spontaneously in the tank. And Utrecht's Postma has just received funding to build a new 6.5-meter-by-14-meter tank that will be able to simulate subsidence patterns.

    Whatever features may be added in the future, for geologists the biggest attraction will always be what they cannot get in the field: comprehensive knowledge and absolute control. “One of the beauties of this experiment is that you have a chance to see the entire stratigraphy. With rocks, it's more like seeing the trailer of a movie—you get only glimpses,” Paola says.

    That kind of certainty could be a big relief for oil company geologists, who have to predict the caliber of potential reservoirs, often based on fuzzy seismic images and a few core samples. As drilling wells becomes increasingly expensive the farther companies prospect offshore, questions about stratigraphy turn into high-stakes gambles. A physical model would provide welcome confirmation of a proposed geologic scenario. “When you're talking about a billion dollars … [Jurassic Tank] is a cheap test of your ideas,” Slingerland says.

    Companies are already using initial findings to teach young geologists about petroleum reservoirs. Computer images from the experiments reveal the geometry of sand deposits, while sediment cross sections offer vistas unmatchable in the field. In the university classroom, instructors ask students to interpret the geologic forces that created them—and then reveal the right answer. The exercises shed some interesting light on how geologists think, Paola says: Even a professional, interpreting the evidence, will almost always make the geologic history of a model out to be much more complex than it really was. “If there's one broad theme, it's a better appreciation of how a simple cause can create complicated stratigraphy.”

  16. When Pharma Merges, R&D Is the Dowry

    1. Bruce Agnew*
    1. Bruce Agnew lives in Bethesda, Maryland.

    Pharmaceutical companies' continuing quest to get bigger is not just megalomania. Staying on top in the global drug market requires doing more and better research. And unit costs keep rising

    Around the time that Britain's giant Glaxo Wellcome PLC first began talking with the equally huge SmithKline Beecham PLC about a merger that would create the largest pharmaceutical research and development machine in the world, Glaxo chair Sir Richard Sykes declared the death of the traditional approach to drug discovery—screening thousands of chemical compounds to find one that works. “The future,” Sykes said, “is in molecular genetics, cell biology, and the modern sciences.” It would take more than 2 years for senior management from both companies to agree on who would call the shots in the new enterprise and to proclaim, on 17 January of this year, that they had achieved a “merger of equals.” But nobody disputed the unspoken message behind Sykes's vision: The pharmaceutical company in the best position to exploit nascent discoveries would be the one with the fattest research budget.

    The idea that bigger is better in R&D has become an article of faith in the pharmaceutical industry. Just 3 weeks after the Glaxo SmithKline announcement, Pfizer Inc. and Warner-Lambert Co. linked hands to create a behemoth with an even bigger R&D platform. And Pharmacia & Upjohn Inc.—itself the product of a 1995 merger—beat them both to the altar by announcing in December that it would merge with Monsanto Co. to create “a first-tier pharma company with a first-tier growth rate.”

    The forces driving this mania for mergers—there have been 30 in the past 15 years—are straightforward. Although no company has even a double-digit slice of the global prescription drug market, each must have enough market share and product lines to maintain “a significant presence” in the United States, Europe, and Japan, the three major world markets, says Sergio Traversa of Mehta Partners, a pharmaceutical and biotech investment firm in New York City. With nearly 7% of the world pharmaceutical market each, Glaxo SmithKline and the merged Pfizer may be big enough for the time being, says Traversa, but others will try to catch up.

    Other forces behind the recent mergers stem from the need for ever-larger R&D establishments, both to take advantage of new opportunities and to cover the continuing rise in the cost of developing new drugs. The Human Genome Project—the effort to identify all 3 billion nucleotide bases in human DNA—is yielding clues to thousands of new research targets for drug development. But following up on those clues will be hugely expensive and will require a supersized scale of operation. The easiest way to achieve that magnitude quickly is to merge.

    Rising costs

    Already, the pharmaceutical industry is spending a hefty sum on R&D. The Washington, D.C.-based Pharmaceutical Research and Manufacturers of America (PhRMA), the industry's U.S. trade association, estimates that U.S.-based research by its members topped $21 billion last year; worldwide, it exceeded $24 billion. That's nearly three times the 1990 level and a robust 21% of sales, up from 16% in 1990. The Centre for Medicines Research International in the United Kingdom, which surveys more companies, pegs the global figure at $43 billion.

    This growth mirrors the steady rise in the cost of bringing a drug to market. Although exact numbers remain elusive, the most widely quoted figure, from a 1993 study by the Boston Consulting Group, is about $500 million in R&D for each new drug that makes it to market. Others place the cost lower, at $300 million—still a hefty amount. These estimates, of course, include projects that are canceled along the way.

    The cost per drug is likely to go higher, say industry R&D chiefs, for several reasons. One is the slew of new drug targets emerging from the genomics revolution, which is dramatically widening the playing field for companies. “We've gone from a period, not long ago, where pharmaceutical companies all were working on the same targets,” says Göran Ando, Pharmacia's executive vice president and president of R&D, who will oversee a pharmaceutical R&D budget that may reach $2.4 billion this year. “Now we are into an abundance of novel targets.”

    “Abundance” may be an understatement. The full human genome sequence will contain “all of the targets for drug intervention in mankind” except for antimicrobials, notes Tadataka Yamada, who will be the R&D chair in the new Glaxo SmithKline. “The territory has been staked out. Now, people have to go out and identify the opportunities for pharmaceutical intervention within that territory. It's going to be a very competitive effort,” says Yamada, who will have $4 billion a year to spend on that task. Industry analyst Hemant Shah of HKS and Co. in Warren, New Jersey, agrees: “Look at the kind of research targets these companies are going after. They are far more difficult, far more complex, and far more uncertain.”

    The cost of identifying the potential drug targets from among an estimated 80,000 to 100,000 human genes is one of the reasons why Pfizer Inc. fought so hard to corral Warner-Lambert. The merged company will boast the biggest R&D budget in the industry, an expected $4.7 billion this year. Pfizer chair and CEO William Steere Jr. is betting that amount will help make the new entity “the fastest growing major pharmaceutical company in the world.” And pharmaceutical companies will need all the muscle they can muster.

    Although the full human genome sequence will contain a wealth of potential drug targets—PhRMA guesses 3000 to 10,000 targets, compared with 500 that the drug industry has so far exploited—the trick will be to recognize and then validate them. By itself, Ando observes, the raw genome sequence “is sort of like getting the Manhattan phone book with only the numbers, not the names.”

    Both Ando and Yamada expect that some of the nameless gene sequences will stand out as more interesting than others. For instance, genes coding for a family of receptors known as G protein-linked receptors—which are key interchanges in cell signaling—will be recognizable because of sequence similarity to known receptors. So if a gene sequence seems likely to encode such a receptor, “that's a clue that the gene encodes a potential pharmaceutical target,” Yamada says. The crucial tip-offs will be “relationships, similarity to other structures, clues derived from computer algorithms. This is what bioinformatics is all about.”

    As industry researchers try to determine where a protein fits in a biochemical pathway and what happens if it's eliminated or augmented, the industry will be drawn more deeply into functional genomics. Ando notes that pharmaceutical labs have been doing some of this work all along. But the current era requires a stronger commitment, which translates into greater resources spread over a wider area. “Researchers may find themselves studying a receptor and find a compound that inhibits it without [having] any idea what disease the compound might be most useful against,” says Yamada. “The receptor may be important in cardiac function, or bladder function, or bowel function.” In cases like this, he adds, only a company that is active in all those therapeutic areas can truly capitalize on that discovery.

    Raising the bar

    It's not just the genomics revolution that is pushing up the costs of R&D. For a variety of reasons, drug development has become more costly and more complicated. In a major switch from just 20 years ago, a growing proportion of drugs is now prescribed for chronic conditions rather than as short-term antibiotics or pain-killers. This change has forced government regulators to look more closely at each drug's long-range effects. “We keep raising the bar on what our performance should be,” says Roger Perlmutter, executive vice president for basic research at Merck Research Laboratories in Rahway, New Jersey.

    Therapies aimed at conditions such as hypertension, high cholesterol, diabetes, or neuropsychiatric diseases, he notes, “are in principle going to be taken day in, day out for years. And they have to have a squeaky-clean safety profile.” In addition, many of these drugs are being taken by older patients who also are on other medications. Eliminating the potential negative side effects of such drug interactions is “paramount,” says Perlmutter, adding that a drug that might have been marketable 20 years ago is no longer acceptable if it shows signs of inhibiting the body's metabolism of other drugs.

    On the other hand, companies can't always pass along to consumers the full cost of drug R&D. In a Catch-22, drugs already proven safe, tolerable, and efficacious must also be “cost effective,” says Yamada, if they are to succeed in the current era of managed care. Without such proof, managed care operators and national governments—which, in most of the world, are at least partial payers for prescription drugs—will not include a drug among the medications they will pay for. “These [outcomes] studies are very expensive, and they extend the length of time that research must be done on a drug,” says Yamada.

    View this table:

    Companies are also striving to shorten the time from discovery to marketing—to get the most out of the finite length of a patent and, often, to beat a competitor to market. But this competition raises costs by forcing companies to juggle more balls. “You need to run more things in parallel, rather than sequentially,” says Ando. As a result, “you do a few more things than you would have.”

    The cost of clinical trials makes up a huge share of pharmaceutical companies' R&D budgets, nearly 40% by one independent estimate, and both per-patient costs and the costs of sophisticated tests seem likely to keep rising. Another 10% goes toward developing processes to synthesize compounds on a huge scale while controlling their production so precisely that, as one official puts it, “the billionth pill is identical to the first.” In addition, a substantial fraction of R&D money goes to screen compounds that might produce a new or improved drug—such as the search for a more effective inhibitor of the COX-2 enzyme, which promotes inflammation, pain, and fever, to develop a better arthritis therapy.

    Although the proportion of industrial budgets taken up by basic exploratory research is quite low, says Perlmutter, it is not irrelevant to the overall biomedical research enterprise. For example, Merck researchers were the first to determine the three-dimensional structure of the HIV-1 protease enzyme in 1989 (see p. 1954), “and we published that structure so that everybody else could work on it, too,” Perlmutter notes. Earlier, Merck researchers had performed the mutagenesis experiments demonstrating that disabling the protease enzyme halted HIV-1 propagation.

    For all those reasons, being the biggest kid on the block has become the hottest game in town. Unless government antitrust agencies step in, several of the recent proposed mergers are expected to be finalized later this year. The newly created Glaxo SmithKline and the new Pfizer certainly seem big enough to tackle all these challenges. Pharmacia, with a relatively large R&D budget in proportion to overall drug sales, is getting there, and it may have a new growth surge before the year is out—industry analysts believe it is eyeing another potential partner, American Home Products Corp. Whether or not these deals go through, Traversa of Mehta Partners predicts that such corporate ballroom dancing will continue: “You don't want to be the last one left without a partner.”

    • *(12 months ending 9/30/99) Excludes mail order and sales in the Netherlands.

    • †1998 Includes $600 million in agricultural R&D.

  17. Structural Genomics Offers High-Speed Look at Proteins

    1. Robert F. Service

    With the human genome nearly sequenced, the next frontier is understanding the shape and function of the encoded proteins

    The research was only a tiny victory in the ongoing war against AIDS. But it showed the power of a new technique that's likely to have a major impact on the world of drug discovery.

    In the late 1980s, researchers at the pharmaceutical giant Merck and elsewhere managed to snap a series of pictures of the HIV protease, the protein that helps the deadly virus replicate. The pictures, three-dimensional (3D) close-ups, revealed the precise locations of all of the protein's component atoms. The scientists used those maps to design drugs that would interact with the protease and gum up its works. Their research has helped to cut drastically the number of AIDS deaths in developed countries in the last few years.

    Such “rationally designed” drugs are a step up from the trial-and-error process that has produced most drugs on the market today. One limiting factor has been the lack of detailed structural maps of the target proteins. But the current number of 1500 unique proteins for which these maps exist—about 1% of the estimated total in humans—is set to rise dramatically as powerful computers, robots, and other high-tech tools are enabling researchers to obtain structures for hundreds of proteins in the time it used to take to get one. Says Aled Edwards, a biochemist at the University of Toronto in Canada: “Structural biology has turned the corner.”

    The new direction is a souped-up version of the field called structural genomics. It applies high-speed techniques to make a systematic survey of all protein structures, cataloging the common ways in which proteins fold. That information, in turn, could eventually lead to computer programs capable of predicting the shape and function of any protein from the simple linear sequence of A's, G's, C's, and T's in genes. Now, with this newfound efficiency in sight, researchers have begun to think audaciously about surveying the complete spectrum of protein structures in the same way that geneticists have used DNA sequencing machines to decode entire genomes. The approach is expected to extend the genomics revolution from a catalog of genes to a catalog of the 3D shapes of the proteins for which the genes code.

    This untold wealth of biochemical information at the atomic level is also likely to provide drug designers with both new targets for drugs and hints for making them successful, as it did with HIV protease inhibitors. But some researchers see a darker side, too. They worry that the nascent field will spawn a high-stakes race between publicly funded academic researchers and private companies, similar to the competition in the genomics community over the last decade.

    Taking a fresh look

    Historically, 3D protein structures have been determined with the help of techniques such as x-ray crystallography. But the process is slow. To obtain a crystal structure, for example, structural biologists must first isolate proteins of interest, purify them, and coax them to form crystals. Then they must subject them to beams of x-ray photons, measure the way those photons ricochet off the crystal, and work out the configuration of atoms that must be present in the crystal to produce the diffraction pattern. For many proteins, this process has taken months or even years. As a result, structural biologists have tended to seek out crystal structures for proteins for which the biochemical function was already known in order to better understand how the molecules accomplished their tasks.

    All that is beginning to change. Today, virtually every aspect in the structure-determining process is being automated and scaled up. Researchers are now creating high-throughput systems for cloning genes, expressing proteins, growing crystals, and collecting x-ray data (Science, 27 August 1999, p. 1342). More powerful x-ray beams at synchrotrons around the world have improved the quality of diffraction data, and better computers and software have made interpreting results both easier and faster. “All these things have suddenly made high-throughput crystallography possible,” says David Eisenberg, a structural biologist at the University of California, Los Angeles. Similar advances are propelling work in nuclear magnetic resonance (NMR) spectroscopy, a related approach to determining structures. The upshot, says Steven Almo, an x-ray crystallographer at the Albert Einstein College of Medicine in New York City, is that “we can do structural genomics on the same scale that people do genome sequencing.”

    Genomics is also bringing a sea change in structural biology. “The genome project has changed our attitude completely,” says Tom Terwilliger, a crystallographer at the Department of Energy's Los Alamos National Laboratory in New Mexico. Gene sequencing, he explains, has revealed the linear sequence of amino acids for thousands of proteins for which no function has yet been discovered. To determine their functions, researchers are now looking to structural genomics for help.

    The idea is to take closeup photos of each protein and study them for clues to what that protein might be doing. By reversing the starting and ending points, the approach turns structural biology “on its head,” says Mark Swindells, scientific director of Inpharmatica. This U.K.-based start-up is looking to provide pharmaceutical companies with computer predictions of important protein structures. It's akin to asking a car mechanic to troubleshoot an unfamiliar engine using engineering diagrams of key components: A component that looks like a fuel valve is not likely to work as a spark plug.

    A handful of preliminary research projects are already beginning to show that 3D crystal structures can provide important insights into the role of proteins inside cells (see sidebar). “There's a widespread feeling that if more structures were available, cell biology and medicine would move more rapidly,” says Eisenberg.

    Indeed, governments and industry are ramping up efforts to search for such structures en masse. Although most efforts include both experimental work and computer modeling techniques, it's the verifiable 3D maps that provide the most reliable information. This fall the U.S. National Institutes of Health (NIH) will announce up to six winners of a $20 million competition for structural genomics pilot centers. If fruitful, the investment is seen as a likely first step toward a larger program. Last month, the Ontario provincial government announced plans to spend $23 million over 3 years on a structural genomics center at the University of Toronto. And three more countries—Japan, Germany, and the United Kingdom—either have funding in place or are considering plans for similar investments.

    For industry, structural genomics promises not only a wealth of new drug targets but also help in eliminating those not likely to be useful. The Genomics Institute of the Novartis Research Foundation in La Jolla, California, is keen to improve its high-throughput approach to x-ray crystallography, says director Peter Schultz. Researchers there are working to automate virtually every aspect of the process, from finding proteins to solving their structures. And a company spin-off, Syrrx, is looking to commercialize the approach.

    Syrrx is one of a handful of biotech start-ups in the field. A few, including Structural Genomix in San Diego, intend to resolve the structures of thousands of unique proteins over the next several years, providing information that could be useful to clinical researchers in many fields. If they succeed, their efforts would roughly equal the number of protein structures resolved over the last 4 decades.

    View this table:
    View this table:

    Hard choices

    Even at this relatively blistering pace, however, 3D structure determination and related structural genomics projects remain too slow and costly to provide an atomic map for all human proteins, at least anytime soon. “Structural genomics is a lot harder than sequencing,” says Terwilliger. Whereas the tools used in sequencing can decode virtually any gene, proteins are more finicky. They require more attention to find the right conditions under which they will be expressed, purified, and coaxed into forming a crystal.

    That complexity also makes the work expensive—it costs roughly $100,000 to resolve a single protein structure. At that price, researchers may need to be content in the near term with structures for fewer than 10% of human proteins, forcing experts to choose from among their preferred targets. NIH officials say one desired outcome from the pilot centers is technology that would drastically lower the cost of resolving new structures.

    So far there is no consensus in academia on which proteins to pursue. One camp, says Terwilliger, aims to resolve the structures of specific unknown proteins to gain clues about their biochemical function. A second emphasizes the need to survey general shapes of proteins by lumping those thought to be similar into families and obtaining the structure of at least one member of each. A third approach seeks structures for as many proteins of a given organism as possible, such as pathogenic bacteria. And a fourth—the approach most similar to conventional structural biology—homes in on proteins thought to be important for understanding diseases and basic biochemistry.

    No matter what the strategy, one concern for academics is that few research groups have the expertise to combine all the pieces of a structural genomics effort. “A single investigator really can't do much on his or her own,” says Emil Pai, a crystallographer at the University of Toronto. One result is that academic groups that produce proteins are teaming up with others that specialize in crystallography and bioinformatics. One early pilot project that's looking at proteins from a heat-loving archaeon, for example, involves 17 labs at seven institutions, including the University of California, Los Angeles, Los Alamos National Laboratory in New Mexico, and the University of Auckland in New Zealand.

    The situation is a bit better in industry because it's often easier for companies to marshal both money and expertise to tackle specific problems. “We have the ability to build [high-throughput] tools that are difficult for the university sector to [build],” says Schultz of Novartis, who also retains an academic appointment at The Scripps Research Institute in La Jolla, California. Another reason, say Schultz and others, is that industrial researchers are more narrowly focused on short-term goals. “We're not looking to define protein fold space,” says Tim Harris, president and CEO of Structural Genomix. “We will work on protein families we know are of interest to industry and [on those] we predict they will be interested in.” One such family, says Harris, is the protein kinases, which regulate information traffic inside cells and have been implicated in a wide variety of diseases, including leukemia.

    One big unknown is whether public and private efforts will dovetail or if, as happened with sequencing, the sectors will race each other to bank their data. Harris says he believes the focused industrial work “will complement what the public domain is doing”—namely, going after the big picture of how proteins fold. Others aren't so sure. “We may see a replay of the situation in genome sequencing,” says Chris Sander of the Massachusetts Institute of Technology's Center for Genome Research and of Millennium Pharmaceuticals in Cambridge, Massachusetts.

    In the genome arena, says Sander, industry has competed with the publicly funded genome project in a scramble for new medicines, diagnostics, and agricultural products. Although Sander believes that this duplication has accelerated the pace of genome research, Terwilliger sees a downside for structural genomics. If industry keeps half of the data secret, he says, “there will be an awful lot of waste and duplication.” Asked whether academic groups will have access to his company's structural data, Harris is noncommittal: “We anticipate that at least some of the structures will be available to the public domain.”

    Next month NIH is sponsoring a meeting at the Wellcome Trust's Sanger Centre for genome sequencing near Cambridge, U.K., to coordinate international financing of structural genomics as well as to address issues surrounding intellectual property and access to data. If all goes as planned, the result will be an agreement akin to one that grew out of a 1994 meeting in Bermuda that helped promote the coordinated international effort to sequence the human genome, says John Norvell, who is heading up the structural genomics program at the National Institute of General Medical Sciences on the NIH campus in Bethesda, Maryland. But even such an agreement is unlikely to give researchers all the help they will need as they travel along the new road of structural genomics.

  18. Early Successes Hint at Big Payoff, But the Road to New Drugs Is Long

    1. Robert F. Service

    With researchers racing to set up structural genomics efforts in both academia and industry, two unanswered questions loom over the field. How well will this highly experimental approach work? And will it lead to the development of new drugs? “It's early days in the field,” says John Moult, who heads a budding structural genomics group at the University of Maryland's Biotechnology Institute in Rockville. “But we are very encouraged.”

    Moult's team is attempting to glean structural information from dozens of proteins in Haemophilus influenzae, a microbe that causes bacterial meningitis, among other diseases. The bug's complete genome sequence was worked out in 1995 by researchers at The Institute for Genomic Research in Rockville, Maryland. Working from those genes, Moult's group cloned 55 of them into Escherichia coli, coaxed the bacterial hosts to overexpress them, purified the proteins, crystallized them, and used x-ray crystallography to resolve their structures.

    Moult and colleagues have run just a few proteins through this gauntlet, but the results have come quickly. One newly identified protein folds in a manner similar to the anticancer protein endostatin. And its shape suggests that it spends its days binding to either DNA or RNA, project member Osnat Herzberg reported last summer at the American Crystallographic Association meeting in Buffalo, New York. Other protein structures that the team has resolved but not yet reported have been even more revealing of function, she says.

    That structural knowledge clearly speeds the process of understanding how the protein functions inside the cell, asserts Herzberg. Without structural information, she says, “if you don't know anything about the function, you have to do tens of thousands of assays to understand the biochemistry. That's not practical.”

    The Maryland team is not the first to tease out structural information from unfamiliar proteins using this approach. In the 22 December 1998 issue of the Proceedings of the National Academy of Sciences, Sung-Hou Kim and his colleagues at the University of California, Berkeley, described a protein from a microbe called Methanococcus jannaschii, which lives in the extreme conditions found near volcanic vents. From the structure, they deduced that the protein was either an energy-producing enzyme or a molecular on-off switch. The group performed a couple of quick biochemical assays and found that it is an on-off switch. “The conclusion is that structural information is powerful for inferring functional information of unknown proteins,” says Kim. Meanwhile, other structural genomics centers around the globe are beginning to turn up equally encouraging results.

    Although the structural genomics approach promises important information on a protein's basic biology, it's not clear whether such information would benefit companies searching for new pharmaceutical and agricultural compounds. True, the three-dimensional structure of the HIV protease made it possible for researchers to design the hugely successful AIDS drugs called protease inhibitors. But that may be an exception, some caution. One concern is that the bulk of the high-speed structure-determining techniques may work only on proteins that make the least interesting targets for new drugs. Most drugs are designed to target proteins that straddle cell membranes and help to control the molecular traffic of cells. But membrane proteins are exceedingly difficult to crystallize even one at a time and thus aren't ready for these new high-speed techniques. As a result, researchers using these techniques will focus initially on nonmembrane proteins to hone their skills and hope to generate enough good science to justify the effort.

    Although most drugs are still discovered through painstaking trial and error rather than design, Chris Sander, the chief science information officer at Millennium Pharmaceuticals in Cambridge, Massachusetts, is convinced that such techniques are rapidly becoming obsolete. “The fraction [discovered by design] will increase steadily but surely,” he predicts. If he's right, the race to resolve protein structures could get very hot, very soon.

  19. Malaria Researchers Wait for Industry to Join Fight

    1. Martin Enserink

    A huge toll of illness and death, a dire need for new treatments, and rapid scientific progress are inspiring researchers to battle malaria. But most drug companies balk at investing in what they see as a Third World disease

    By most indicators, the time is right for the pharmaceutical industry to make a major push against malaria. Existing treatments are failing in the face of rising resistance from both the parasite and its mosquito vectors. At the same time, the first results of a major effort to sequence the genome of the parasite have identified some chinks in its armor that could lead to promising new drugs. And there's no shortage of patients: Each year brings as many as a half-billion new cases, including more than 1 million deaths, most among children.

    But there's a problem: The drug industry is not interested. Companies need to make money to satisfy shareholders, and as staggering as malaria's toll may be, medicines to combat or prevent malaria do not set an investor's heart beating faster. The obvious target populations live in sub-Saharan Africa and South Asia, where most patients can't afford expensive new drugs. Wealthy tourists from industrialized countries offer a market, but, at $100 million to $250 million annually, it is modest in pharmaceutical terms. “The commercial return is negligible,” says Simon Campbell, a retired senior vice president and head of drug discovery at Pfizer Inc., “so it really is not cost efficient for pharmaceutical companies to undertake malaria research on their own.” Another major factor keeping companies out of malaria research is insufficient patent protection in the developing world, says Bert Spilker, senior vice president of the Pharmaceutical Research and Manufacturers of America (PhRMA) in Washington, D.C. “It is difficult for companies when countries pirate drugs,” he says.

    Add to those obstacles the declining potency of current treatments and the paucity of drugs in the pipeline, some of which may not pan out, and the result is that some parts of the world may soon have nothing with which to counter the scourge. “It's a paradoxical situation,” says Dyann Wirth, director of the Harvard Malaria Initiative. “It's a very exciting time from a scientific point of view, and yet we don't get enough new drugs to the market.”

    Resisting capture

    A half-century ago, the world seemed poised to deliver malaria a knockout blow. New synthetic drugs such as chloroquine killed Plasmodium falciparum—the most deadly of the four malaria parasite species—with few side effects. And insecticides, notably DDT, slashed populations of the Anopheles mosquitoes that transmit the disease. But times have changed. Resistance to both drugs and insecticides is rampant in large swaths of the world, bringing malaria back with a vengeance. Indeed, the effectiveness of atovaquone, the latest drug in the arsenal, began to wane even during the course of its first human trials. Meanwhile, attempts to create a vaccine have proved vexingly difficult, and another hopeful approach—replacing natural mosquito populations with new ones unable to transmit the disease—is still a long way off.

    One factor behind widespread drug resistance is that doctors typically treat malaria with one drug only, making it relatively easy for the parasite to adapt. To combat resistance, researchers say they will have to rely more on drug combinations, much like the cocktails that hold down HIV. For instance, Glaxo Wellcome Inc., which is marketing atovaquone, has combined it with an older drug, proguanil, to increase its life-span. (The combination, given the brand name Malarone, has hit the market in 30 countries.) “Perhaps we wouldn't be in such a state today if we had started with combination therapies [earlier],” says Daniel Goldberg, a malaria researcher at Washington University in St. Louis.

    Until now, discovering new malaria drugs has been a fairly haphazard process. Many of the drugs available today were found by producing and testing countless chemical variations of quinine, a compound from Cinchona tree bark that has been known for centuries as an effective antimalarial. Members of this group, including mainstays like chloroquine and mefloquine, all have a chemical structure called quinoline, or one that's closely related. Most are thought to kill the malaria parasite as it breaks down and gobbles up hemoglobin—a protein complex that transports oxygen—inside red blood cells. Chloroquine and quinine, for instance, block the removal of heme, a byproduct of hemoglobin degradation that is toxic to the parasite. As a result, heme heaps up, poisoning the organism in its own waste.

    Researchers are still adding members to the quinine-based drug family. For instance, the Walter Reed Army Institute of Research in Silver Spring, Maryland, the source of most recently developed antimalarial drugs, has produced a compound called tafenoquine, which effectively kills parasites that have become resistant to multiple other drugs. Tafenoquine is about to enter phase III trials in a collaboration with SmithKline Beecham. But scientists don't expect to discover many more related compounds. “The quinoline structure is pretty much exploited,” says Colonel Wilbur Milhous, director of Walter Reed's division of experimental therapeutics.

    Over the past decade, some researchers have turned to another naturally occurring compound. In 1972, Chinese scientists isolated the active ingredient (called artemisinin) from Artemisia annua, or sweet wormwood, a plant that had been used locally to treat malaria for more than a century. Since then, Artemisia has been cultivated on a large scale in Southeast Asia, where resistance problems are the most pressing, and patients there are now routinely treated with artemisinin. The exact mechanism is unknown, but researchers think artemisinin may kill Plasmodium because it produces free radicals when it binds to heme; these may damage the parasite's proteins and the lipids in its membrane. As with quinine, researchers are trying to exploit this trait further by synthesizing a host of similar compounds in the lab and testing their antimalarial activity. The Walter Reed Institute, for instance, has patented a stable, water-soluble derivative called artelinic acid and is testing its efficacy in animals.

    A higher gear

    Other researchers are following a different path. Instead of building on compounds known to work, even if the mechanism is unclear, they are searching for enzymes that are unique to the parasite and then producing compounds tailored to thwart them. For instance, Goldberg's group at Washington University has taken a closer look at the breakdown of hemoglobin, a process that requires the activity of protein-chopping enzymes called proteases. The group has identified and determined the three-dimensional structure of three such proteases—called plasmepsins—each of which seems to have a specific role in bringing the giant hemoglobin complex to its knees.

    To find inhibitors for the plasmepsins, which would effectively starve Plasmodium rather than poison it, Goldberg has teamed up with Jonathan Ellman, a chemist at the University of California, Berkeley. Ellman specializes in combinatorial chemistry, a technique in which researchers use robots to produce tens of thousands of new chemical variations on a theme—in this case, a small molecule that has about the right size and shape to block plasmepsins. Each of the new molecules is then screened for its ability to bind to a plasmepsin. The most promising candidates have been tested in a mouse version of malaria “with some hopeful results,” says Goldberg.

    Recently, the search for proteases has shifted into higher gear, thanks to the Malaria Genome Project. Sequencing machines at the Sanger Centre in the United Kingdom, The Institute for Genomic Research in Rockville, Maryland, and Stanford University in Palo Alto, California, have been working for 3 years to crack the entire 30-million-base genome of Plasmodium falciparum. A rough draft is expected by the end of this year. In sifting through the data released so far, Goldberg says he has already found what look like the genes for several other proteases. “We're eagerly pursuing them,” he says.

    He's not the only one keeping an eye on the daily data flow. Last year, Ewald Beck and his colleagues at Justus Liebig University in Giessen, Germany, found that Plasmodium has two enzymes belonging to a pathway that plants and bacteria use to produce isoprenoids, a group of compounds that includes cholesterol. Because this biochemical route, called the DOXP pathway, doesn't occur in humans, the researchers realized that the enzymes made interesting drug targets. While looking for a drug that might block one of the enzymes, the group hit upon fosmidomycin, an antibiotic developed by a Japanese company in the 1970s that never made it to the market. A recent report (Science, 3 September 1999, p. 1573) found that fosmidomycin was effective in mice; clinical trials in Africa are next, says Beck. Meanwhile, the group is trying to find even more enzymes along the DOXP pathway.

    Discoveries such as Beck's, which would have been impossible without sequence data, are exactly what Malaria Genome Project researchers were hoping for. “That paper in Science warmed my heart,” says Richard Hyman, who leads the sequencing effort at Stanford. Adds Goldberg: “All of a sudden we have a wealth of targets that we can choose from. It will be an enormous boon to drug development.”

    Reducing the risk

    But new drugs won't help much if they aren't economically viable. One way to hold down costs, researchers say, is to hitch a ride on work under way to combat other, more profitable, diseases. Atovaquone, for example, was developed as a cure for Pneumocystis lung infections in AIDS patients. Indeed, “if an old drug is found to have some activity in malaria, I can personally guarantee the companies would fight each other to license it,” says PhRMA's Spilker. Similarly, the development of drugs based on the DOXP pathway may be helped by the fact that the same pathway occurs in microbes that plague residents of industrialized countries, such as Escherichia coli and Helicobacter pylori. In fact, that prospect helped lure investors to a company Beck and co-workers started to further develop fosmidomycin and other drugs. “Our strategy,” Beck says, “was to tell them we will find a lot of new antibiotics against very critical bacteria.”

    Another strategy is to prepare the ground for industry. That means identifying interesting leads and turning them into candidate drugs through animal studies, human safety trials, and small-scale efficacy tests. The goal is to minimize risk for any company interested in capitalizing on the results. “You have to have [the drug] ready to go without any problems,” says Walter Reed's Milhous, “before a company will jump in.” For many years Walter Reed handed out such compounds for free, as the government had no procedures for sharing royalties; Hoffmann-La Roche, for instance, didn't pay for the rights to mefloquine in the 1980s. Today, says Milhous, the institute asks for a percentage of the proceeds of newly developed drugs.

    Enticing industry to step in by curbing its financial risks is also the philosophy behind the Medicines for Malaria Venture (MMV), a foundation sponsored by the World Health Organization and several governments and nongovernmental organizations. MMV funds academic researchers with promising drug-discovery projects and tries to hook them up to companies. The companies, in turn, help out with support in kind, such as advice and access to chemical libraries. Founded last year, MMV has an annual budget of about $4 million and is currently supporting three research collaborations in which academics work with Glaxo Wellcome, SmithKline Beecham, and Hoffmann-La Roche. Its goal is a budget of $30 million, says MMV director Robert Ridley, enough to generate one new malaria drug every 5 years.

    Although industry may be reluctant to undertake malaria research on its own, initiatives like this help to get it on board, says former Pfizer executive Campbell, who volunteers as chair of MMV's scientific advisory panel. “The industry will do everything possible to help in kind, to offer advice, et cetera,” he says. “They're delighted to hear about this initiative. Whenever I talk about it, people's ears prick up.”

    Some, such as Harvard's Wirth, think it's time for the government to step in, perhaps with tax breaks and longer patents for certain drugs on the condition that the revenue be used for new antimalarials. “The industry recognizes this [disease] as a problem, but it's not going to move without some sort of incentive,” says Wirth. “And with so many great drug targets on the horizon, it's critical that we somehow come to grips with this.”

  20. U.S., Europe, Japan Look to Speed Up Drug Reviews

    1. Dennis Normile,
    2. Eliot Marshall

    Drug companies are hoping to save time and money by submitting the same safety and efficacy data to regulators in the three leading global markets

    TOKYO—Why did the Japanese government take more than 30 years to approve “the pill” and only 6 months to OK Viagra, a drug for male impotence? Some blame cultural attitudes toward women for the snail's-pace acceptance of the oral contraceptive and decry a double standard. Others say that the answer is not sexism but rather the first fruits of a little-known international effort to streamline the drug review process. This fall, if all goes well, officials from the United States, Europe, and Japan will agree on a common application form in hope of speeding the delivery of other new treatments and, perhaps, lowering the cost to consumers.

    The expected agreement is part of a 10-year-old reform project in the three regions that is known as the International Conference on Harmonization of Technical Requirements for the Registration of Pharmaceuticals for Human Use (ICH). “Harmonization” makes it possible for regulators in one country to speed their review of safety and efficacy data already submitted to another country. Although the ICH has drawn little attention, it could have a big impact on consumers. “Drugs should be able to reach larger populations faster because a company can do studies and meet the requirements of all three regions,” says Sharon Smith Holston, deputy commissioner of the U.S. Food and Drug Administration (FDA). The streamlined format, she says, should also result in “lower costs, because [companies] are not doing multiple [repetitive] studies.”

    Since 1990, a select group of bureaucrats, drug executives, and academics from the United States, Europe, and Japan have gathered at ICH conferences every other year to identify differences in regulatory approaches and hammer out common standards. They have produced and adopted, at last count, 37 guidelines covering almost every conceivable detail of pharmaceutical development, from the design of clinical trials to the interpretation of statistics. The guidelines have been introduced one by one and are already having the desired effect of standardizing and simplifying procedures, says Rolf Bass, an official with the European Agency for the Evaluation of Medicinal Products. But to fully realize the promised benefits of common standards, says Bass, who has been involved with ICH since it began, the parties must agree on a “common technical document” that, in theory, will enable manufacturers to compile a single dossier on a proposed drug and submit it for review in all three ICH regions.

    The common application form, which participants hope to approve at a November meeting in San Diego, would be a “mega breakthrough,” says Bert Spilker, a member of the ICH steering committee and senior vice president of scientific and regulatory affairs for the Pharmaceutical Research and Manufacturers of America in Washington, D.C. Bass says that the parties have already settled on the scientific principles but that altering practices developed over decades is a major undertaking. “It's not easy to say, ‘OK, let's start over and do it better,’” he says. Still, he and others are optimistic about reaching an agreement this fall.

    However, sometimes the differences among national standards go far deeper than procedures for clinical trials and paperwork. Traditionally, drugmakers that wanted to sell globally had to repeat time-consuming and costly clinical trials in each major market. This has been a particularly slow process in Japan. Although ICH is starting to improve the situation, drugs often reach the Japanese market 3 to 5 years after being introduced in the United States or Europe. The reason for the delay, say Japanese health officials, is data showing that Asians respond differently to certain drugs and, therefore, may require different dosages.

    To address this issue within ICH, Japanese officials offered what came to be known as the Ethnic Factors Guideline. It spells out criteria to determine whether a drug is likely to have variable effects in different populations. Drugs determined to be “ethnically insensitive,” based on historical experience with similar types of drugs and other factors, may be approved with no additional testing. Others may require a small clinical trial, called a “bridging study,” to verify that the dose-response, safety, and efficacy data from the original clinical trials can be extrapolated to the target population.

    Conferees see these guidelines as one of their key accomplishments, paving the way to a system in which clinical trials need not be repeated. “This was a major step forward,” says Spilker. The system of adjusting for ethnic factors is “now being discussed with other countries in Asia,” he adds, “and the industry is very pleased with the progress.”

    The ICH adopted the guideline in February 1998, and Viagra was the first major drug to take advantage of it. The drug went on sale in the United States in March 1998, and by the summer its manufacturer, Pfizer Inc., had submitted it to Japan's Ministry of Health and Welfare. Relying on ICH guidelines and a bridging study, Japanese officials approved it in January 1999. Viagra also benefited from new fast-track procedures under which the ministry gives preferential treatment to drugs backed by high-quality data. Not all drugs will go through so rapidly, although Osamu Doi, councilor for pharmaceutical and medical safety at Japan's Ministry of Health and Welfare, says the goal is to review every drug within 1 year of submission.

    So far the ICH process has attracted little criticism. However, a Ralph Nader-led U.S. group has expressed concern that the drive to reach a global common denominator could lower U.S. drug safety or efficacy standards. In particular, Peter Lurie, a physician for Public Citizen in Washington, D.C., complains that ICH documents push for the use of placebos in clinical trials whenever possible. Lurie believes that placebos are often unnecessary, and that it's unethical to use them when a viable therapy is already available. But many ICH participants, including the FDA's Smith Holston, say that the emphasis on placebos favors the best research, as placebo-controlled trials often produce a more dramatic difference between the test group and the control group, yielding stronger statistical results. She argues that, rather than opting for the lowest common denominator, “we're agreeing on the highest standards.”

    But reaching a consensus on how data are to be submitted doesn't mean that ICH participants will necessarily agree on how to evaluate them. Each regulatory authority will continue to exercise its judgment under local laws. And in Japan, at least, clearing the scientific and medical hurdles may not always be straightforward, as the experience with birth control shows.

    The rapid approval of Viagra incensed advocates of the birth control pill, who had been trying for decades to win over the government. Doi says that officials were initially concerned about side effects. In 1990, after a new, low-dose version of the drug was submitted for review, opponents changed their tune and began arguing that it would discourage the use of condoms, leading to a rise in AIDS. Last spring, after newspaper editorials and women's groups attacked the ministry's stance, the government finally approved sale of the contraceptive. “The ministry felt that the situation regarding public knowledge of AIDS had changed,” says Doi, and that the risk of spreading AIDS had declined.

    Doi and others see the next frontier for ICH as developing standards for evaluating emerging technologies, such as gene therapy. “The challenge in the 20th century was harmonizing the regulations that had been put in place in each country,” he says. The next century, he predicts, must find ways “to jointly develop regulations from the start.”
