News this Week

Science  23 Jul 2004:
Vol. 305, Issue 5683, pp. 458
  1. SCIENTIFIC PUBLISHING

    U.K. Lawmakers Urge Prompt Access to Published Papers ...

    1. Daniel Clery

    CAMBRIDGE, U.K.—The nascent “open-access” publishing movement got a qualified, but high-profile, endorsement this week. After a 7-month investigation, the House of Commons Science and Technology Committee concluded that “the current system [of scientific publishing] is not providing the access needed for the progress of science,” says committee chair Ian Gibson. In a 114-page report,* the committee recommends that papers produced by publicly funded research be put in repositories at universities and research institutions, where they would be open to all, free of charge, soon after publication. It also says that online journals that have authors pay the cost of publication and that allow free access to all “could be viable,” and it calls for a major study of the idea.

    Information gap.

    Committee chair Ian Gibson (Labour, Norwich North) says the system is not meeting needs.

    The committee has no legislative authority; its recommendations are purely advisory. But open-access publishers see the report—along with a call to make many U.S. publications freely available (see box below)—as a major victory. It “marks the beginning of a new era,” says Vitek Tracz, chair of Current Science Group, a collection of publishing companies including U.K.-based open-access publisher BioMed Central that has more than 100 journals. Traditional for-profit publishers and scientific societies are, however, less enthusiastic. Eric Merkel-Sobotta, director of corporate relations for the Anglo-Dutch publishing giant Elsevier, says he thinks some of the committee's concerns are “overstated” but adds, “there's nothing revolutionary” in the report.

    The committee says that shrinking library budgets combined with ever-rising subscription prices mean that many libraries can no longer afford a full range of journals. It questions the high prices some publishers charge, accusing them of maintaining excessive profit margins, and berates the British government for lacking a coherent strategy on research journals, considering how much it invests in science.

    As a first step, the committee recommends that all U.K. universities and research centers set up institutional repositories for preprints and papers their scientists publish. The government should help fund these repositories and ensure that they are linked, and the research councils and other government funding agencies should require grantees to send papers to them “within one month of publication or a reasonable period to be agreed following publication.”

    CREDIT: EPS LTD., JUNE 2004

    The committee stops short of giving all-out support for open-access publishing, however, saying such an endorsement would be “simplistic.” Instead it calls for increased experimentation with author-pays models and says the research councils should subsidize these experiments by establishing a fund to which their grantees can apply for publication charges. “This is clearly the most important recommendation for open-access journals,” says Peter Newmark, editorial director at BioMed Central.

    While this experimentation is going on, the committee says the government should commission a comprehensive, independent study of the costs of author-pays publishing. The study should include the potential impact on scientific societies, which often publish their own journals and plow any profits back into other society activities. “Any system of publishing must avoid financially damaging these organizations,” says John Enderby, vice president of Britain's Royal Society, which publishes several journals.

    Another issue the committee raises is that businesses, particularly pharmaceutical companies, may get a “free ride” from open-access publishing. They are major buyers of journals—for which they pay top-rate subscription fees—but produce very few papers themselves. So in an author-pays world, they would make little financial contribution to scientific publishing.

    The report now goes to the Labour government, and the committee says it will also push for a network of publication repositories around the world.

  2. SCIENTIFIC PUBLISHING

    ... Congress Puts Similar Heat on NIH

    1. Jocelyn Kaiser

    In a surprise move, a U.S. House committee last week recommended that the National Institutes of Health (NIH) post its grantees' papers on a free Internet site when they have been published by a journal. Scientific societies and for-profit publishers were stunned by the language, which they say would drive traditional journals out of business. As Science went to press, House staff, besieged with complaints, were considering softening the directive.

    The language, authored by Representative Ernest J. Istook Jr. (R-OK), is part of a report accompanying the Labor/Department of Health and Human Services (HHS) appropriations bill passed last week by the House Appropriations Committee. It “recommends” that when a journal accepts a paper generated with NIH support, a copy of the manuscript be sent to PubMed Central, the NIH National Library of Medicine's (NLM's) archive of full-text articles. NLM would post the manuscript 6 months after the paper is published, unless NIH funds are used to pay publication costs, such as page charges or costs for color figures. In that case, the manuscript would be posted immediately on publication, the report says. It instructs NIH to submit a report by 1 December 2004 on how to do this.

    A coalition of libraries and open-access publishers that pushed for the language says it does not require scientists to publish in open-access journals—just that their final manuscripts be made public. “This isn't about the journals, it's about taxpayer access” to the data, says Richard Johnson, director of the Washington, D.C.-based Scholarly Publishing and Academic Resources Coalition.

    But scientific societies—many of which charge page fees—say subscriptions would dry up if essentially the same material were available immediately for free on the Web. And NIH-funded authors may not be able to publish in certain journals that retain copyright to papers, notes Paul Kincade, president of the Federation of American Societies for Experimental Biology (FASEB). “I think scientists should be pretty disturbed about this. It's taking away their choice,” says Kincade. FASEB points out that many journals already make full-text research articles freely available within 6 months or a year (the policy of Science).

    As the appropriations bill headed for a possible House vote earlier this week, Istook and House Labor/HHS subcommittee chair Ralph Regula (R-OH) were preparing to issue a statement on the House floor that would modify the directive. It would say that the intent is for NIH to “bring all the stakeholders to the table to come up with a model” to improve public access, said Istook's spokesperson. That is exactly what NIH Director Elias Zerhouni intends to do “with input from all parties,” says NIH spokesperson John Burklow. Publishers hope so: “Open access is such a radical change, it should be debated in public,” says Barbara Meredith, a vice president of the Association of American Publishers.

  3. PHYSICS

    Energy Curve Confirms Paired-Up Fermi Condensate

    1. Charles Seife

    Birds do it, bees do it. Fermionic particles, however, tend not to get cozy with one another. Getting these otherwise unfriendly atoms to form a special type of association known as a “Cooper pair” would open up a new way of studying the physics of superconductors and superfluids. Several groups have been racing to create such a state of matter (Science, 8 August 2003, p. 750).

    Over the past few months, they have had a remarkable string of successes. A German and Austrian team reports online in Science this week (www.sciencemag.org/cgi/content/abstract/1100818) the latest evidence for the formation of Cooper pairs of fermions in the laboratory. “This is great work,” says Wolfgang Ketterle, a physicist at the Massachusetts Institute of Technology who won a Nobel Prize for his work with Bose-Einstein condensates (BECs).

    BECs are a state of matter in which a handful of bosons—particles whose “spin” is 1 or 2 or 3 or another integer value—all inhabit the same quantum state, causing the whole bunch of particles to behave like a single big particle. But that isn't so easy with fermions, which have “spins” of 1/2 or 3/2 or 5/2 or other half-integers. Unlike bosons, no two fermions can be in the same quantum state, so, on the face of it, fermions should never be able to condense as bosons can.

    There is a way out, however. Take two fermions and bind them so that their spins effectively become an integer, and you get a boson. Recently, scientists across the world have been exploiting this loophole with gusto. After cooling a bunch of fermions such as lithium-6 using lasers and other techniques, they use magnetic fields to force the fermions to condense. Tune the magnetic fields in the correct way, and, in theory, you can get the fermions to create Cooper pairs, duos of fermions that affect each other's motion. Cooper pairs of fermions are thought to be responsible for the bizarre properties of superconductors (such as their total lack of resistance) and of superfluids (such as lack of viscosity and a strange, quantized flow within vortices). Achieving Cooper pairing in fermion condensates would enable scientists to examine the properties of Cooper pairs with unprecedented flexibility and would help unravel the enduring mysteries of the physics behind superconductors.
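
    In textbook terms, the loophole is just the rule for adding angular momenta; as a sketch in standard spin notation (nothing here is specific to these particular experiments),

    \[
    \tfrac{1}{2} \otimes \tfrac{1}{2} = 0 \oplus 1,
    \]

    that is, two bound spin-1/2 fermions carry a total spin of either 0 or 1. Both values are integers, so the composite obeys Bose statistics and can pile into a single quantum state like any other boson.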

    Chamber of secrets.

    Magnetic coils outside this bowling ball-sized atom trap coaxed atoms into Cooper pairs.

    CREDIT: INSTITUTE OF EXPERIMENTAL PHYSICS/UNIVERSITY OF INNSBRUCK

    In January, Deborah Jin and colleagues at JILA in Boulder, Colorado, got fermions to condense in a way that might well have created Cooper pairs (Science, 6 February, p. 741). Although Jin's team didn't claim to have formed Cooper pairs, physicists hoped their new “fermionic condensate” would eventually exhibit Cooper-pairing-type properties, such as superfluidity. At the same time, several other teams reported similar results.

    Shortly thereafter, Rudolf Grimm of the University of Innsbruck, Austria, and his colleagues, as well as another group at Duke University, demonstrated that if you thump these fermionic condensates by momentarily turning off the atom trap, they keep vibrating for an enormously long time. “We see damping rates so low that they are no longer compatible with normal hydrodynamic theories,” says Grimm. The damping is more compatible with theories of superfluids, evidence that the fermions are indeed forming Cooper pairs.

    Now, Grimm's group has announced another line of evidence for Cooper pairing. The researchers irradiate the condensate with radio waves and, by observing which frequencies are absorbed, determine how many atoms are bound together and how tightly they are bound. With this technique, says Grimm, “you show how the binding energy changes with temperature.” The change matches what you'd expect with Cooper pairs, he says. As the researchers lower the temperature of the condensate, he adds, “all the completely unpaired particles disappear,” implying that the condensate should be “deeply in the superfluid regime” at cold temperatures.

    Ketterle praises the work but cautions that “this experiment is not proof of superfluidity. It's an important piece of the puzzle, but there are still other pieces missing.” He adds that researchers should have a smoking gun—such as observing quantized vortices—before declaring victory in creating Cooper pairs from fermion condensates. Only then will scientists know that each fermion has found its perfect match.

  4. GLYCOBIOLOGY

    Synthetic Vaccine Is a Sweet Victory for Cuban Science

    1. Jocelyn Kaiser

    Sin azúcar no hay país—no sugar, no country. That Cuban saying reflects the country's historic dependence on producing sugar, an industry hit hard in recent years by falling sugar prices. But some Cuban researchers now see economic—and medical—promise in another type of sugar, the kind found on the surfaces of microbes.

    On page 522, a Cuban-Canadian team reports the first large-scale production and clinical testing of a synthetic polysaccharide vaccine, one that targets the bacterium Haemophilus influenzae type B, or Hib, a major cause of meningitis in young children. Although commercial Hib vaccines already exist, the synthetic vaccine “has advantages for production: It's higher quality and purer,” says study co-author Violeta Fernández-Santana, a University of Havana chemist.

    Currently, companies producing carbohydrate-based vaccines resort to growing the targeted microbes and collecting their surface sugars. But manufacturing such vaccines by fermenting pathogenic bacteria in giant vats is messy, expensive, and inexact. Better, purer vaccines could be made by fashioning the sugars from scratch. But because this kind of chemistry is so dauntingly complex, nobody had developed and clinically tested a synthetic carbohydrate-based vaccine until now.

    Besides leading to a cheaper, safer Hib vaccine, this work “is going to pave the way for a new generation of vaccines” against other pathogens, says immunologist John Robbins of the U.S. National Institute of Child Health and Human Development, who co-developed the first Hib vaccine. “It's a pivotal step.”

    The achievement is also a giant step for Cuba, which has built up a substantial biotechnology research program despite an economy crippled by the U.S. trade embargo and the country's socialist system. Cuban scientists are celebrating another milestone this month as well: the first U.S. license for several promising Cuban cancer drugs.

    Sugar shot.

    Simpler synthesis of carbohydrates has led to a new vaccine for Haemophilus influenzae type B.

    CREDIT: CIGB

    Until Hib vaccines were introduced in the 1990s, the bacterium was a leading cause of meningitis and pneumonia in children under 5. Few infections now occur in the industrialized world, but Hib still kills 600,000 children each year in developing countries. Producing the Hib vaccine by fermentation isn't ideal—it's hard to control the size and configuration of the sugars and costly to purify the product.

    Chemists led by Vicente Vérez Bencomo at the University of Havana began working on a synthetic Hib vaccine in 1989. After meeting at a conference, Vérez Bencomo's group teamed up with chemist René Roy of the University of Quebec in Canada and spent 2 years streamlining the synthesis of Hib sugars—for example, making an eight-unit oligomer in a single reaction rather than in 16 steps.

    The chemists ultimately coupled a sugar from Hib to a tetanus toxoid protein, which stimulates a strong and long-lasting immune response. Working with four other Cuban institutes, including the Center for Genetic Engineering and Biotechnology, they tested this compound in animals, then adults, and finally children in Cuba. The synthetic vaccine generated an antibody response comparable to that of existing vaccines. The potentially cheaper Cuban vaccine could help the World Health Organization reach its goal of vaccinating all children against Hib, notes Roy.

    Cuba is working on other synthetic vaccines, including one against the pneumococcus bacteria that cause pneumonia, Fernández-Santana says. Indeed, the Hib example will spur “a major move in the entire area of carbohydrate-conjugate vaccines” for diseases ranging from staph infections to malaria and AIDS, predicts chemist Peter Seeberger of the Swiss Federal Institute of Technology in Zürich.

    The success of the synthetic vaccine attests to the growing strength of Cuban biotechnology, an industry in which President Fidel Castro has invested more than $1 billion since the 1980s (Science, 27 November 1998, p. 1626). Cuban scientists have pushed ahead despite obstacles such as U.S. Treasury rules that ban companies with U.S. branches from licensing Cuban technologies.

    That makes a Treasury Department decision last week to allow a California company called CancerVax to license three Cuban cancer vaccines “a huge breakthrough,” says David Allan, CEO of YM BioSciences Inc. in Toronto, the company that initially licensed the technology from Cuba's Center of Molecular Immunology. The United States approved a license for Cuba's meningitis B vaccine in 1999, but this is the first license for a biological product, according to Allan.

    Biotech experts say they expect much more from Cuba in the coming years. “Their pipeline is very, very deep now,” says James Larrick, a biotechnology entrepreneur in Palo Alto, California. “It's gone into an adolescence and it's looking pretty good.”

  5. STEM CELL RESEARCH

    Advocates Keep Pot Boiling As Bush Plans New Centers

    1. Constance Holden

    Stem cell research was back in the news with a vengeance last week. The Bush Administration announced plans to speed up work on approved cell lines, while Democrats revealed that Ron Reagan Jr. will address their national convention on the promise of stem cell research (see p. 473). A conservative senator held a hearing to showcase the virtues of adult stem cells, which are ethically less troublesome than embryonic cells. All this activity suggests that the issue will be visible in the fall election despite the lack of any planned votes in Congress.

    The first move came on 14 July with a letter from Health and Human Services Secretary Tommy Thompson to House Speaker Dennis Hastert (R-IL). Thompson announced plans for a “National Embryonic Stem Cell Bank” and three National Institutes of Health (NIH)-funded “centers of excellence” on “translational” stem cell research. The new centers will supplement three university-based centers focusing on basic stem cell research. “Before anyone can successfully argue that the stem cell policy should be broadened, we must first exhaust the potential” of currently available lines, Thompson wrote, referring to the 9 August 2001 decision by President George W. Bush that only lines derived by that date would be eligible for federal funding.

    Making a statement.

    Irving Weissman, left, waits to testify as some audience members wear their views on stem cell research.

    CREDITS: C. HOLDEN

    The organization that wins the contract to run the bank—which will cost an estimated $1 million a year—will characterize cell lines, ensure their viability and quality, and distribute them at low cost to researchers. A central repository can also serve as a “help desk for researchers,” says NIH Director Elias Zerhouni. NIH is already negotiating with WiCell, the company that distributes lines developed at the University of Wisconsin, in hopes of lowering the current $5000 price tag to several hundred dollars, says James Battey, chair of the NIH Stem Cell Task Force. “Most providers seem willing to make lines available,” adds Battey, although the bank may not be able to stock all 21 lines currently available. Officials hope to start the bank within a year.

    Research advocates welcome the initiatives but say they don't go far enough. Representatives Michael N. Castle (R-DE) and Diana DeGette (D-CO), who want to give government scientists access to additional lines, said in a statement that the bank won't solve the basic problem: “The federal government is still banned from supporting the development of disease-specific stem cell lines and … from granting scientists access to new lines.” NIH has given infrastructure grants to institutions holding 23 lines. Battey admits that the fate of other pre-9 August 2001 lines “is entirely unclear.”

    The Senate hearing, chaired by Sam Brownback (R-KS), focused on adult stem cells. The implicit message: There's no need to expand work on human embryonic cells. But many attendees seemed to disagree. Some sported red T-shirts and caps proclaiming their support for ESCs (embryonic stem cells) and SCNT (somatic cell nuclear transfer, or research cloning). And the three other committee members who attended the hearing, all Democrats, tried to counter Brownback's message, invoking Nancy Reagan and pointing out that even the NIH Web site stresses the limitations of adult stem cells.

    The key scientific witness, Stanford University stem cell researcher Irving Weissman, wasn't too helpful, either, arguing for research cloning so scientists can study hereditary diseases in vitro. Brownback has proposed banning such work, as well as reproductive cloning. “Whoever of you acts to ban this research is responsible for the lives it could save,” said Weissman.

    Meanwhile, state-based initiatives are moving ahead. Last month New York legislators introduced the Ronald Reagan Memorial Stem Cell Research Act, and a bill in Illinois has been renamed the Ronald Reagan Biomedical Research Act.

  6. RESEARCH MANAGEMENT

    Security, Safety Probes Shut Down Los Alamos National Lab

    1. David Malakoff

    Scientists at Los Alamos National Laboratory received an unwelcome break from work this week, along with a fiery warning from their boss. Director George P. (Pete) Nanos indefinitely suspended nearly all activity at the sprawling New Mexico facility after employees apparently violated security and safety rules. The retired Navy admiral also vowed to find and fire any miscreants.

    “This willful flouting of the rules must stop, and I don't care how many people I have to fire to make it stop,” Nanos wrote in a 16 July memo to employees. “If you think the rules are silly, if you think compliance is a joke, please resign now and save me the trouble.”

    Nanos, who replaced John Browne in January 2003 in the midst of an investigation into accounting and security breaches, may also be worrying about his own job. The contract to manage the 10,000-employee, $2.2-billion-a-year laboratory is up for competition next year, and “these repeated incidences certainly do not help” the University of California (UC), which has run the lab since its founding in 1943, says Senator Pete Domenici (R-NM), a longtime lab backer.

    No joke.

    Los Alamos Director Pete Nanos says he will fire those who don't follow the rules.

    CREDIT: LANL

    Nanos's harsh communiqué had its origins in a 7 July inventory that concluded that two computer storage disks holding classified data were missing from the lab's Weapons Physics Directorate. “Once again, the failure … to follow [security rules] has brought disrepute to Los Alamos,” Nanos said 2 days later, announcing the suspension of some classified research while investigators looked into the losses, which officials now say may involve more disks. Last week, amid growing criticism from lawmakers, UC officials, and Department of Energy (DOE) leaders, Nanos expanded the suspension to the rest of the laboratory, which also conducts a range of nonclassified work. Contributing to that decision, lab officials said, were several safety lapses, including a recent case in which an intern was injured by a research laser that was being used in an experiment. The work stoppage was needed, Nanos wrote, to review rules and make employees “aware [of] how serious this situation is.”

    That message wasn't lost on several Los Alamos scientists, who described a “somber” and “shell-shocked” atmosphere in laboratories conducting nonclassified studies. “Some people are frustrated by the delay, but they also realize the university's fate may be decided by this,” said one, who asked to remain anonymous. Although there was no official word, some scientists were hoping they could be back to work within days.

    Meanwhile, senior DOE officials and the House Committee on Energy and Commerce, which oversees the lab, are conducting their own probes. “Frankly, nobody understands how we have gotten ourselves into this mess,” Nanos wrote. UC President Robert C. Dynes and UC Regents Chair Gerald L. Parsky also plan to visit. “Their message could not be more clear,” Nanos said in his statement. “The culture at [Los Alamos] must change and it must change now if UC is to continue as lab manager.”

  7. SCIENTIFIC MEETINGS

    NIH Scientists in a Spin Over Foreign Travel

    1. Jocelyn Kaiser

    Alternating yes-no directives have confused biomedical staff members at U.S. agencies about whether they will be permitted to attend certain international scientific meetings. The muddle arises from an effort by the Department of Health and Human Services (HHS) to set limits on foreign travel, a move that has met resistance at the National Institutes of Health (NIH).

    In the past month, HHS issued last-minute orders for 34 NIH staffers to cancel plans to attend two scientific meetings in Canada. Then in mid-July, according to staffers, HHS lifted the limits on travel to another Canadian meeting. What happens next is not clear: “It's a complex situation and we're working with the department to smooth things out,” says Michael Gottesman, NIH director of intramural research.

    HHS's Office of Global Health Affairs (OGHA), headed by William Steiger, has been concerned about international travel costs for 2 years. NIH responded by keeping to a total annual travel budget, says an NIH official speaking on background. Last summer, HHS also issued a policy requiring agencies to list foreign meetings with more than 20 expected attendees before each fiscal year began. Then this spring, Steiger restricted the number of U.S.-based HHS staff attending the XV International AIDS Conference in Bangkok to 50 (20 from NIH)—compared with 236 at the last global AIDS gathering.

    Since then, HHS has moved to cut participation at more meetings. In early June, OGHA ordered NIH to tell 13 scientists planning to attend a brain-mapping meeting in Hungary to stay home, then at the last minute allowed them to travel. Three didn't get the message in time and missed the meeting (Science, 9 July, p. 162). Then, just before a 26 to 30 June meeting of the Research Society on Alcoholism in Vancouver, OGHA told NIH to select 18 staffers to stay home out of 75 registered to go. The office then approved only 57 of 73 staff members planning to attend the American Society of Virology meeting from 10 to 14 July in Montreal. Responding to OGHA orders, NIH officials issued e-mails canceling trips 2 days before these meetings.

    Some staffers who were kept home had not planned to give talks or presentations. But alcoholism researcher Daniel Hommer says, “I felt terrible” that a student of his who intended to present a poster had to stay home. NIH was anticipating another big cut before this week's International Congress of Immunology in Montreal. But in the end, Steiger's office allowed all 101 who planned to go to attend.

    HHS spokesperson William Pierce—observing that NIH scientists are “whiney”—says the cuts reflect the fact that, while it's not written down, the HHS policy issued last summer limits attendance at foreign meetings to 40 people, with occasional “exceptions.” NIH erred, he says, by not including some of these meetings on its list last year and informing HHS “at the last minute.”

    The NIH official acknowledges that some meetings weren't on the list but says NIH had not heard of the 40-person limit. NIH spokesperson John Burklow says that HHS officials have now made clear “that 40 is about the number that should be going.”

  8. SPACE ASTRONOMY

    U.S. Academy Panel Urges NASA to Upgrade Hubble Telescope

    1. Andrew Lawler

    Hubble's devoted fans won a victory last week when a National Research Council (NRC) panel recommended that NASA upgrade—and not just service—the aging space telescope. The panel's interim report urges NASA to take a year to figure out whether that upgrade should be done by a robot or by sending the shuttle to fix Hubble. Both options face big problems: The panel acknowledges that technology capable of pulling off a robotic mission doesn't yet exist, and NASA Administrator Sean O'Keefe has previously rejected a shuttle mission.

    The National Academies convened the panel in March at NASA's request after a public outcry forced O'Keefe to backtrack on a January decision to cancel all further missions to Hubble. Astronauts have visited the 13-year-old telescope four times for repairs and upgrades, and NASA had originally planned a fifth shuttle mission for next year. But it scrapped that plan after the Columbia disaster last year put all shuttle flights on hold. Calling Hubble “arguably the most important telescope in history,” the 20-member panel, led by Louis Lanzerotti of Bell Labs and the New Jersey Institute of Technology in Newark, came down hard in favor of upgrading it. The panel told NASA to keep both the robotic and shuttle options open for the coming year.

    Politicians echoed the panel's enthusiastic support for Hubble. “We will continue to work with NASA to see that the agency keeps all its options open concerning a Hubble mission and sets aside the funding needed to carry out any mission, whether manned or robotic,” says Representative Sherwood Boehlert (R-NY), who chairs the House Science Committee. Senator Barbara Mikulski (D-MD), who sits on the panel that controls NASA's budget, called the findings “enormously encouraging.” Astronomers were ecstatic. “I'm delighted,” said John Bahcall, a Princeton University astronomer. “But will NASA listen?”

    Well armed.

    A technician demonstrates control of a robotic arm being developed at Goddard Space Flight Center.

    CREDIT: CHRIS GUNN/GODDARD SPACE FLIGHT CENTER

    NASA spokesperson Glenn Mahone says that a Hubble servicing mission by the shuttle “should not be precluded.” But the academy's advice runs counter to the direction the space agency has been moving. Last month, the agency asked industry to propose how to bring the telescope down robotically without necessarily extending its life.

    A robotic upgrade could take one of two forms: a flight to renew fading systems such as the batteries and gyroscopes, or a more complex effort that would also replace two of Hubble's instruments. How much either would cost at this point is unknown. Meeting last month with panel members, O'Keefe warned that any robotic mission “is going to be pretty tough.” But not as tough, he added, as returning astronauts to the Hubble without the safe haven of the international space station, which flies in a different orbit. A second shuttle would have to be readied in case of trouble, and a rescue attempt might require a walk in open space.

    Lanzerotti's panel agreed that any robotic mission would be difficult. Engineers would need to figure out how to get a long robotic arm to handle heavy payloads and carry out delicate motions while coping with a 2-second time delay. For that reason, the panel suggested that NASA talk with the Defense Advanced Research Projects Agency, which is aiming to launch a test mission in 2006, called Orbital Express, that would lead to the automatic servicing of spacecraft. The $100 million project is being led by Boeing Corp. of Chicago, also the lead contractor on the space station. NASA should begin “an immediate active partnership” with the military, says the panel, to ensure access to the most sophisticated technologies.

    The NRC panel plans to submit a final report later this summer after visiting NASA's Johnson Space Center in Houston and boning up on the state of robotic technologies. In the meantime, Lanzerotti says, NASA should dust off its plans for a shuttle servicing mission and get serious about a robotic repair mission. “We're saying, ‘Proceed vigorously,’” he says. Time is of the essence: Hubble's batteries and gyroscopes may fail by 2007.

  9. THEORETICAL PHYSICS

    Physics Enters the Twilight Zone

    1. Charles Seife

    Parallel universes have launched a thousand bad science-fiction plots. Now mainstream physicists and cosmologists are arguing that they are both useful and, in an infinite cosmos, inevitable

    As is your habit, you are reading Science at breakfast (today's treat: an omelet made with dodo eggs). But as soon as you finish this paragraph, a carnivorous wombat crashes through the door into your apartment and chomps angrily on your prehensile tail. Right … now.

    Ridiculous? Certainly—here. But it's true somewhere in the universe, according to many scientists. An increasing number of mainstream physicists have espoused an almost unspeakably bizarre picture of the cosmos, one filled with mirror worlds and parallel universes, with doppelgängers and alternate histories. In many of these parallel universes—countless ones—an exact duplicate of you is doing exactly what you're doing: reading this article in Science magazine. In others, you exist with subtle (and not-so-subtle) changes from your present-day life—you sport horns or speak in Latin or make a living by juggling hedgehogs at cocktail parties.

    This picture of parallel universes may seem like science fiction or a cosmologist's playful mind game. But multiple, independent lines of argument support it. Even among skeptics, most experts tend to accept two basic and uncontroversial premises about the nature of the universe—premises that, followed to their logical conclusion, imply the existence of infinite mirror worlds and infinite identical copies of you inhabiting many of those worlds. And there are other theoretical reasons to believe in parallel universes as well.

    Talk of more than one universe sets some physicists' teeth on edge. “There's way too many worlds,” says David Wineland, an experimental quantum physicist at the National Institute of Standards and Technology in Boulder, Colorado. “Life has to be simpler.” But proponents say the skeptics just lack the courage of their own perfectly orthodox convictions. “They backslide,” says David Deutsch, a physicist at the University of Oxford, U.K. That's to be expected, he says. “When you hear an argument that makes sense logically but seems wrong, you don't accept it right away. You flag it as a problem, as a weirdness.” If Deutsch and like-minded colleagues are right, though, weirdness is the most natural thing there is.

    Reasons to believe

    The basic argument for parallel universes goes like this: Space is infinite. Within any finite volume of space, however, matter and energy can be arranged in only a finite number of ways. So if you carve space into enough same-sized regions, sooner or later they will start repeating themselves.

    Most cosmologists accept the first premise. “If you ask people to wager, they think that an infinite universe is the best bet,” says Max Tegmark, a physicist who is shortly to join the faculty at the Massachusetts Institute of Technology in Cambridge. One reason is that the data seem to point that way.

    The evidence comes from subtle ripples in the cosmic microwave background—the fossil light left over from an era less than 400,000 years after the big bang. When sensitive microwave telescopes such as BOOMERANG, DASI, and WMAP spotted fine fluctuations in the past few years, the data matched cosmologists' predictions quite well. The largest were 1° wide, and the abundances of the ripples of different sizes were pretty much as expected. Not only was this result a ringing victory for the reigning theory of how the universe was born, but it also showed that the cosmos was very, very flat.

    Albert Einstein's general theory of relativity describes spacetime—the fabric of the cosmos—in the mathematical language that geometers use to describe a curved surface. In the simplest scenario, there are only three possible kinds of surface that spacetime could be: curved like a giant ball, warped like an enormous saddle, or slate flat.
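
    Cosmologists compress those three options into a single number. In the standard notation (a sketch of the textbook relation, not a calculation from this article), the total density parameter

    \[
    \Omega_{\mathrm{tot}} \equiv \frac{\rho_{\mathrm{tot}}}{\rho_{\mathrm{crit}}}, \qquad \rho_{\mathrm{crit}} = \frac{3H_0^2}{8\pi G},
    \]

    compares the universe's actual energy density with a critical value set by the expansion rate: a value greater than 1 gives the closed, ball-like geometry; less than 1, the saddle; exactly 1, the flat case.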

    The cosmic microwave background all but eliminated the first two possibilities. Both the ball-shaped and saddle-shaped geometries distort the apparent sizes of distant objects. Because the ripples in the cosmic microwave background, which come from a surface billions of light-years away, showed no such distortion, the universe appeared to be flat (Science, 28 April 2000, p. 595). Other data, such as those from distant supernovae and from galaxy clusters, also support the idea.

    A flat universe is an infinite universe. Unlike a ball-shaped universe, which is “closed” and has a finite volume, a flat universe goes on forever unless some sort of exotic warping happens billions and billions of light-years away. Scientists are looking for signs of such weird geometries. Apart from a few quirks, however, all the evidence so far points toward infinity.

    Even without the new data, there would be a reason to believe in an infinite universe. The theory of inflation—which underpins modern cosmological thought—seems to imply that the universe is much, much bigger than what we can see in the sky. According to the theory, in a billionth of a billionth of a billionth of a second after the big bang, the cosmos ballooned from the size of an atom to the current size of the visible universe. This “inflation” phase explains not only the size and abundances of the ripples in the cosmic background radiation but also several other properties of the universe. The theory also implies that the fabric of the cosmos is constantly stretching in regions well beyond the edge of the visible universe. That makes the cosmos unfathomably huge. “Inflation generically predicts infinite space,” says Tegmark. “Not just big, but infinite.” Hard as that may be to visualize, cosmologists tend to believe it, or at least to be agnostic about it.

    The second premise in the argument for parallel universes—the one about the arrangements of matter and energy—is more subtle. It came out of trying to figure out what happens to matter or energy when it falls into a black hole.

    Black holes are the ultimate devouring machines. If you are unlucky enough to fall into one, you are expunged from the universe; all the matter and energy contained within your body is entirely lost to the outside world. It is as if every single atom in your body, as well as all the information stored upon those atoms, has been removed from the cosmos. All that remains is a faint remnant of your mass. As the black hole adds your mass to its own, it becomes a little bigger. Its event horizon, the roughly spherical surface that defines the boundary of the black hole, grows a tiny bit. All that's left of you, or any other matter or energy that falls into the black hole, is a span of surface area on the black hole's horizon.

    In the early 1990s, physicists such as Gerard 't Hooft of Utrecht University in the Netherlands and Leonard Susskind of Stanford University realized that this relation among matter and energy and surface area implies something very odd about the nature of matter and energy. If the second law of thermodynamics applies to a black hole (as most physicists believe), then any chunk of matter and energy enclosed in a finite ball can be arranged in only a finite number of ways. You can turn it into a Ford Pinto or a brace of Chihuahuas or a flying purple monkey with lightning bolts coming out of its ears—there are lots of possibilities. Indeed, the possibilities seem endless, but there are only a finite number of them. This restriction is known as the “holographic bound.”

    “If you believe in the second law of thermodynamics” and that it applies to black holes, says Jacob Bekenstein, a physicist at the Hebrew University of Jerusalem, “you can derive the holographic bound very easily.” This, too, is a relatively well-accepted principle in physics.
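
    Stated quantitatively, in its conventional form (a textbook statement of the bound, not a calculation from this article), the holographic bound ties the maximum entropy of a region, and hence its number of distinguishable configurations, to the area A of the surface enclosing it:

    \[
    S_{\max} = \frac{k_B A}{4\,\ell_P^2}, \qquad \ell_P = \sqrt{\frac{G\hbar}{c^3}} \approx 1.6\times10^{-35}\ \mathrm{m}, \qquad N_{\mathrm{configs}} \lesssim e^{S_{\max}/k_B}.
    \]

    For any astrophysical region the exponent is gigantic, but it is finite, which is all the argument that follows needs.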

    You'll never walk alone

    The infinite universe and the holographic bound are relatively uncontroversial on their own. But if you combine the two, all hell breaks loose. You get parallel universes. The argument is surprisingly simple.

    Imagine a sphere that encompasses the entire visible universe—a giant ball 100 billion light-years across would do just fine. Vast as it is, this sphere is finite, so by the holographic bound the matter and energy inside can take only a finite number of configurations. That huge number of configurations—call it a “gazillion”—includes the arrangement that makes up our universe.

    Our bubble is 100 billion light-years across, but if the universe is infinite, then you can draw a 100-billion-light-year sphere right beyond it. And one beyond that. And one beyond that. And one beyond that. In an infinite universe, you can draw a gazillion and one spheres, each of which is 100 billion light-years across. But there are only a gazillion possible configurations of matter and energy inside those spheres. Thus, at least two of those spheres must share the same configuration of matter and energy. These two are identical in every way—down to the last atom. But why stop here?

    A gazillion and two … a gazillion and three … two gazillion … 10 gazillion. … Out of 100 gazillion spheres, 99 gazillion must be exact duplicates. If the matter and energy in the universe are created by random quantum fluctuations as inflation dictates, then, on average, there will be 100 copies of every possible configuration of matter and energy for a 100-billion-light-year sphere. Including ours.

    In a collection of 100 gazillion spheres, on average, 100 will be identical to our visible universe. In each of those spheres, there will be an exact copy of our Earth and our sun and our galaxy, and of you and me. “It would be impossible to tell which is which,” says Tegmark. And in each universe, a copy of you is reading Science right now.

    There's no reason to stop at 100 gazillion spheres. In an infinite universe, if the holographic principle is correct, there are infinite copies of our universe, infinite copies of you and me, floating about in space. According to Tegmark's calculations, on average, the nearest of these identical copies of our visible universe is about 10 to the 10^100 meters away—colossally farther than light has been able to travel since the universe was born, but that identical universe is there nonetheless. So is every possible variation on it. Not only are there infinite copies of you, there are infinite copies of you with a tail (if that is an allowable configuration of matter and energy) and infinite copies of you getting eaten by a purple alligator (if this, too, is possible).
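
    The arithmetic behind that estimate can be sketched schematically (the scaling is the standard pigeonhole argument; the symbols here are illustrative, not figures taken from the article):

    \[
    d_{\mathrm{copy}} \sim N^{1/3} D, \qquad N \lesssim e^{S_{\max}/k_B},
    \]

    where D, roughly 10^27 meters, is the diameter of a sphere the size of the visible universe and N is its maximum number of configurations under the holographic bound. Because N is a double exponential, taking its cube root and multiplying by D barely dents it, which is why the expected distance to the nearest duplicate comes out as a tower of exponents rather than an ordinarily large number.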

    Accept the idea of inflation and the holographic bound, says Tegmark, and you're forced to conclude that the universe is populated by infinite copies of you—and infinite copies of you, altered in bizarre and disturbing ways.

    Worlds without end

    Even if the universe is finite or the holographic bound is wrong, there is still reason to believe in parallel universes. By accepting their existence, physicists can explain all the weirdnesses of quantum theory in an elegant way—a way that makes almost a century's worth of troublesome paradoxes suddenly disappear.

    In the 1920s, physicists Werner Heisenberg and Erwin Schrödinger formulated a revolutionary theory that explained the world of atoms and electrons and very small objects with great precision. The only problem was that although the mathematics behind quantum theory was relatively easy to handle, the physical interpretation—what the mathematical equations really mean—made no sense whatsoever.

    For example, the equations of quantum theory allow a quantum object such as an electron to be in two places—on the left and on the right—at the same time. But when you try to measure where the electron is, it instantly appears to “choose” one or the other alternative. Other properties, such as the up-or-down character of a particle's magnetic “spin,” behave in the same way.

    To understand how such seeming violations of common sense could happen, many physicists embraced the so-called Copenhagen interpretation of quantum mechanics. A Copenhagenist, generally, thinks of particles as wave functions: wavelike mathematical objects that can spread out and interfere and be in several places at the same time. When someone tries to measure the spreading wave function, it “collapses” into a tight packet whose location is “chosen” probabilistically. This, at least, was something that might happen in the real world.

    But the Copenhagen interpretation had its own problems. Einstein came up with a thought experiment that underscored one of them: the so-called EPR paradox. In an EPR scenario, two particles are “entangled” so that if one particle is spin up, the other must be spin down, and vice versa. But the laws of quantum theory say that both particles can counterintuitively be spin up and spin down at the same time—until you measure one of the particles. At that moment, the wave function collapses, and the particle “chooses” to be spin up or spin down. This means that the other particle, at the same moment, must “choose” the opposite spin—even if you never measure it, and even if it is halfway across the universe. It's as if the two particles somehow “communicate” faster than light, a “spooky action at a distance” that Einstein could not abide (Science, 17 March 2000, p. 1909).
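
    The entangled pair at the heart of the paradox is conventionally written as a two-particle superposition (standard textbook notation, not drawn from this article):

    \[
    |\psi\rangle = \frac{1}{\sqrt{2}}\Big( |{\uparrow}\rangle_A |{\downarrow}\rangle_B - |{\downarrow}\rangle_A |{\uparrow}\rangle_B \Big),
    \]

    in which neither particle has a definite spin of its own, yet the two outcomes are perfectly anticorrelated: finding A spin up guarantees that B is spin down, and vice versa, however far apart the particles are. A measurement on either one, in the Copenhagen reading, collapses this entire state at once.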

    Copenhagen, however, is not the only interpretation in town. In the late 1950s, Princeton University graduate student Hugh Everett III proposed a way to banish quantum weirdness without invoking ugly ideas like wave-function collapse and spooky action. The cost of his “Many Worlds” interpretation is that you have to accept the idea of parallel universes—to many, an equally spooky notion.

    Imagine that our universe is a transparent sheet. You and I and everything else in the universe are like cartoon characters embedded in that sheet. A particle in superposition, such as an electron with spin up and spin down simultaneously, looks like two distinct spots on that sheet—but when an observer tries to measure where that particle is, something odd happens. The universe splits in two. Suddenly there are two sheets, one on which the particle is spin up and another on which it is spin down. Gathering information about the particle's state doesn't cause a wave function to collapse and the particle to “choose” its fate; instead, the information reveals that what appeared to be a single sheet was, in fact, two sheets stuck together, with two observers looking at two particles (see figure).

    Split decision.

    In the “Many Worlds” interpretation of quantum mechanics, the both-up-and-down spins of entangled particles reflect conditions in parallel universes, which peel apart when observers measure the spins and communicate the results.

    CREDIT: P. HUEY/SCIENCE

    A godlike creature would see the cosmos as a multifoliate object, made of sheets that are constantly splitting (and occasionally fusing) at various places as information about quantum objects travels from point to point. This incredibly complex object is the “multiverse,” and unlike the previous version of parallel universes, these world sheets can interact with one another.

    An observer embedded on the sheet—in one universe within this multiverse—doesn't have such a broad view. She, too, splits in two when the sheets separate. Each observer's view is limited to a single sheet, one universe. One of these twofold observers sees the particle as spin up; the other sees it as spin down. Each concludes that the particle has “chosen” and its wave function has collapsed, unaware of the doppelgänger in the sheet that just peeled away.

    The EPR paradox makes sense in the multiverse—no longer is there a need for faster-than-light “spooky action.” The pairs of particles inhabit two sheets that are stuck together. On one sheet, particle A has spin up and B has spin down, and vice versa for the second sheet. When someone makes a measurement of one of the particles, the world sheets divide. The particles needn't “choose” spins and conspire to have opposite spins upon measurement; instead, the measurements simply reveal spins that were already there, embedded in their now-separated sheets.

    Although quantum physicists agree that Many Worlds gives a consistent explanation for the weirdness of quantum theory, many recoil from the idea of a multiverse. “Many Worlds is a self-consistent way to describe our nature,” but it is still “unsatisfying,” says Wineland. None of the interpretations of quantum mechanics has the ring of truth, but “experimentalists can avert our eyes to the [interpretation] problem,” he says. Bekenstein, a theorist whose work on black holes helped lay the groundwork for the holographic bound, agrees. Speculating about the nature of reality, he warns, can go too far. “I know the rules for quantum mechanics, and they are accurate and very uncontroversial,” he says. “Other people worry why these are the rules. That's how you get into the morass of the Everett interpretation and the Copenhagen interpretation. I don't find that this is very fruitful.”

    Deutsch, on the other hand, argues that physicists must traffic in explanations as well as equations. “Both are necessary conditions for something to be good science,” he says. And Tegmark, who believes that cosmological data provide solid, if indirect, support for the existence of the holographic-type parallel universes, says that physicists are now entitled to speculate about such things. “The borderline between physics and philosophy has shifted quite dramatically in the last century,” he says. “I think it's quite clear that parallel universes are now absorbed by that moving boundary. It's included within physics rather than metaphysics.” At least it is in this universe.

  10. SPACE SCIENCE

    NASA Reining in PI-Led Planetary Missions

    1. Richard A. Kerr

    After NASA PI-led missions to the planets busted budgets and one ended catastrophically in space, the agency is grappling with containing costs while maximizing scientific return

    When a spacecraft called MESSENGER blasts off early next month from Cape Canaveral bound for Mercury, many planetary scientists will be holding their breath. MESSENGER is the seventh in NASA's innovative Discovery Program of planetary missions—a popular program that is now under intense review. A spectacular success, or failure, could have weighty repercussions for the entire program.

    NASA launched the Discovery Program a decade ago with an appealingly simple concept. As a NASA associate administrator put it at the time: “We're asking for PIs to come in with a whole mission. If we like it … we'll buy it, pay you, and you do it.” Of course, running a third-of-a-billion-dollar mission to study an asteroid, comet, or star dust was never quite that hands-off; NASA always exercised periodic oversight. Still, relatively freewheeling missions led by principal investigators have come to permeate NASA's planetary exploration programs.

    But troubles in the program have led NASA to clamp down on these missions. “We allowed the pendulum to go way too far in some cases,” says NASA solar system exploration division director Orlando Figueroa. Failures in a variety of NASA missions, as well as a wave of cost overruns in supposedly cost-capped Discovery projects, have made NASA wary. “We're looking for a happy medium” between heavy-handed oversight and runaway costs, says Figueroa. Patches are in place, but a long-term fix may rest with the National Research Council (NRC); at NASA's request, it has just launched a study of PI-led missions.

    The origins of NASA's sensitization to risk are clear enough. Mission after mission has ended catastrophically in recent years. In 1999, the agency lost two spacecraft at Mars, one to a silly confusion of English and metric units. The space shuttle Columbia broke up on reentry because a potential hazard was overlooked. And Discovery's own CONTOUR comet mission blew itself up while leaving Earth orbit because its rocket motor was set too deeply within the spacecraft (Science, 24 October 2003, p. 546). “When the Discovery Program started, one could have so-called ‘acceptable risk,’” says Stamatios Krimigis of the Applied Physics Laboratory (APL) in Laurel, Maryland, where he has been involved in three Discovery missions. “As time went on, in particular after the Columbia accident, risk was thought to be totally unacceptable.” The Mars program independent assessment—which looked at all of NASA's “faster, cheaper, better” missions like Discovery—pointed to meager funding of mission development as a root cause of failures.

    Scorched.

    The ambitious MESSENGER mission suffered cost woes on its way toward Mercury.

    CREDIT: NASA/JOHNS HOPKINS UNIVERSITY APPLIED PHYSICS LABORATORY/CARNEGIE INSTITUTION OF WASHINGTON

    As fear of failure grew, Discovery missions still in development began running into serious budget problems. But the problems had little in common. On MESSENGER, a deputy project manager walked off the job without warning just weeks before he was to replace the retiring project manager; a dozen layered circuit boards built by APL began delaminating; and a supplier was 4 months late delivering the spacecraft's skeletal structure, among other problems. On Deep Impact—the mission to study a comet's interior by blasting it with a giant bullet—a number of computer problems required expensive and time-consuming software fixes or modifications to circuit boards.

    Last summer, NASA's associate administrator for space science, Edward Weiler, responded to the mounting Discovery troubles by demanding higher reserves in budgets of missions still in development. PIs had been free to decide how much of their budgets to set aside for unforeseen costs; some reserved as little as 10% to 15% at the development stage before serious construction began. Weiler laid down a minimum of 25% reserves at that stage, helping trigger budget slashing in Dawn—an ion-propelled mission to two great asteroids—and Kepler—a sun-orbiting spacecraft in search of extrasolar planets. NASA actually cancelled Dawn last Christmas Eve, but it was resurrected by a contractor's offer to forgo its fee.

    After attracting NASA's intensive scrutiny by busting the $299 million Discovery budget cap, both MESSENGER and Deep Impact had launch delays imposed and had brushes with cancellation. Every mission not yet in space suffered “descopings,” self-imposed trimming of nonessential science. Perhaps the most disheartening of these came when the Dawn PI deleted his own magnetometer from the spacecraft, losing the chance to search for a subsurface ocean within the asteroid Ceres.

    Although most observers consider the upping of cost reserves to be prudent, many see it as a temporary fix. In the long run, the question is, “How do we assure with a high level of probability that we're not going to hit a cost crunch?,” says Noel Hinners of the University of Colorado, Boulder, who has held a number of high-level positions in NASA and the aerospace industry. Part of the problem, says Hinners, is that the Discovery process is so competitive that scientists tend to overestimate how much science they can get in under the cost cap.

    In addition, “the easy missions have been done,” says planetary geophysicist Maria Zuber of the Massachusetts Institute of Technology. “The recent proposals have been more challenging, and the budget has not gone up in proportion to the degree of difficulty.” The cost cap has now been raised to $360 million, but both MESSENGER and Deep Impact started out at or near the cost cap even though they had to design complex spacecraft and instruments from scratch.

    In the coming months, two groups will be grappling with how best to contain these cost pressures without descending into the onerous NASA oversight triggered by the recent cost overruns. An NRC study committee chaired by space physicist Janet Luhmann of the University of California, Berkeley, plans to deliver a report by next May on lessons learned from all of NASA's PI-led space science missions. That will be just in time to offer some guidance to another group: evaluators picking a winner from the latest crop of about 20 Discovery proposals submitted by the 16 July deadline. May the most science bang for the buck win.

  11. PSYCHOPHARMACOLOGY

    Volatile Chemistry: Children and Antidepressants

    1. Jennifer Couzin

    More than a decade after doctors began prescribing SSRIs for young people, investigators are trying to interpret ambiguous data about their side effects and efficacy

    Even in the earliest studies, there were troubling hints: Some children seemed to fare poorly on a type of drug now widely used to treat depression. Take the case of F., a chronically anxious 12-year-old boy with obsessive-compulsive disorder (OCD). More than 13 years ago, F. received Prozac—which boosts levels of serotonin, a chemical messenger in the brain—as part of a study at Yale University. In the weeks that followed, he told doctors he was wracked by nightmares of killing his classmates and himself. He went off the drug and spent a month in a psychiatric unit before recovering.

    F. wasn't the only one. Five other young people among 42 receiving Prozac for OCD at Yale experienced similar symptoms, according to a published account. The drug, a selective serotonin reuptake inhibitor (SSRI), made them feel “like they were jumping out of their skin,” recalls Yale child psychiatrist Robert King, who helped run the study and write the report.

    Today, King looks back on those six patients, three girls and three boys, as harbingers of what has become a badly muddled debate in psychiatric medicine. After 16 years during which SSRIs have been viewed as lifesaving and have been widely prescribed for depression, experts face an unsettling possibility—that in a small number of young people, some of these drugs may trigger suicidal thinking or behavior. Recently released clinical trial data, moreover, reveal flaws in the evidence that SSRIs ease depression in children and adolescents.

    Although doctors have been giving SSRIs to children for more than a decade, the controversy over risks and benefits erupted a year ago, after a manufacturer sought approval for use of one such drug, Paxil, in children. Not only was the request denied, but it triggered new inquiries into an alleged association between SSRIs and suicidal thinking or behavior. The U.S. Food and Drug Administration (FDA) has commissioned an outside review (Science, 6 February, p. 745) and aims to issue a report by September.

    CREDIT: GETTY IMAGES

    In contrast, many psychiatric researchers credit SSRIs for a drop in youth suicides in the last decade. They doubt that the FDA review will settle the matter, partly because it lacks reliable information. Only a handful of large studies have been conducted on SSRI use in youngsters, and none was designed to assess suicidality.

    Nor is mining data likely to address another troubling issue, psychiatrists say: If SSRIs can induce suicidality, who is at risk, and why? Scientists have suggested that SSRIs may unleash dangerous behaviors in patients with bipolar disorder, and that some individuals may be acutely sensitive to these drugs.

    Companies that make SSRIs and the doctors who prescribe them are now on the defensive. In June, New York State Attorney General Eliot Spitzer accused U.K.-based GlaxoSmithKline (GSK) of concealing negative data on Paxil (Science, 11 June, p. 1576), and Congress is planning an inquiry. Psychiatrists worry, meanwhile, that the studies needed to resolve this issue may never be done. “Something is occurring with the medications,” says Timothy Wilens, a child psychiatrist and pharmacologist at Massachusetts General Hospital in Boston. But researchers say they don't know enough about SSRIs to explain why they work—or fail—in young patients.

    Guilt by association?

    In his Riverside Drive office overlooking the Hudson River, Donald Klein, director of the New York State Psychiatric Institute, dismisses the SSRI fuss as “a tempest in a teapot.” Like most others in his field, Klein says that SSRIs transformed the treatment of depression. An earlier type of antidepressant, the tricyclics, sometimes caused cardiac arrhythmias or fatal overdoses—a serious concern for potentially suicidal patients. And these drugs were never shown to work in children.

    Prozac was the first SSRI to hit the market, in 1988, and child psychiatrists soon began prescribing it to young patients. The first large study of Prozac in children appeared in 1997: Ninety-six volunteers received the drug or a placebo for 8 weeks. Fifty-six percent of those on Prozac improved, compared with 33% of those on placebo.

    “Quickly, practice patterns began to shift across the country,” recalls David Fassler, a child and adolescent psychiatrist in Burlington, Vermont. The use of SSRIs in young people took off, he says, “despite the fact that we had limited research.”

    Drug companies followed up with a handful of published SSRI studies in youngsters. Although few drugs on the market have been tested in children, physicians may prescribe them to youngsters “off label.” But companies were drawn to pediatric testing by the 1997 FDA Modernization Act, which promised a 6-month patent extension for any drug tested in children. For Prozac, whose annual sales reached $2.5 billion at the time, an extra 6 months of patent protection was potentially worth more than $1 billion in sales. For some companies, the carrot was irresistible.

    In 2002, GSK submitted data to FDA on trials of Paxil in children. An FDA reviewer asked for more information about patients who suffered from what the company called “emotional lability.” Many of these cases, GSK explained, involved self-harm or thoughts of self-harm that might be suicide-related. In one pediatric Paxil study, six depressed youngsters on the drug exhibited “possibly suicide-related events” compared to one on placebo, and five attempted suicide, compared to none on placebo. Two other pediatric Paxil studies of depression, though, reported no difference in suicidality between drug and placebo patients.

    Reaction was swift. Both the U.K.'s drug regulatory arm, the Medicines and Healthcare Products Regulatory Agency, and FDA issued warnings against Paxil's use in children. Both agencies also launched reviews of other antidepressants to determine whether they, too, might cause suicidal thinking or behavior.

    Although such effects have also been reported in adults taking SSRIs, it is unclear whether they occur more frequently in children, and no comparable review of adult use is under way—in part because many clinical trials have found that SSRIs work in adults, shifting the risk-benefit calculus. The ongoing pediatric reviews include Prozac, the only SSRI approved for use in depressed youngsters.

    Many observers, meanwhile, hope that a $17 million National Institutes of Health study, the Treatment for Adolescents With Depression Study, will provide insight. It compares the effectiveness of Prozac and psychotherapy in depressed teenagers over 9 months and assesses suicidality. Preliminary results are expected later this summer.

    Unbottled emotion

    Psychiatrists have long spoken of “overresponders,” children whose thoughts or behavior seem to surge in strange directions on SSRIs. Child psychiatrist David Shaffer of Columbia University in New York City says he encountered one this summer. He was treating a 15-year-old girl who had been profoundly depressed since the death of a friend. Several days after starting on a low dose of Prozac, she burst into Shaffer's office for an appointment, announcing that the hatred she'd long bottled up was coming out and that she'd been mean to her mother. “She had overresponded,” says Shaffer, who quickly halved her dose. The symptoms soon receded.

    In a study Mass General's Wilens published last year, one in five children on an SSRI experienced an adverse event such as disinhibition, agitation, or sleep disturbance. But Wilens and his colleagues didn't see any hints of suicidality.

    Only a handful of published reports have. The first account of such effects in adults appeared in 1990; a year later, King and his colleagues detailed cases among children taking Prozac for OCD. The adult finding prompted FDA hearings in 1991, at which Prozac's maker—Eli Lilly of Indianapolis, Indiana—presented data that convinced the agency and its advisory panel that Prozac was safe in adults. (The company hadn't yet tested the drug in children.)

    But the concerns never completely disappeared. “Things can go very sourly wrong in the first few weeks” on an SSRI, says Martin Teicher, director of the developmental biopsychiatry research program at McLean Hospital in Boston and the lead author of the 1990 study in adults.

    Even the staunchest advocates of SSRIs agree that a small minority of pediatric patients do poorly on them. “Everybody, myself included, said it's clear there are patients who are put on these meds who get worse,” says John March, chief of child and adolescent psychiatry at Duke University Medical Center. How many—if any—of those patients become suicidal as a result of taking the drugs isn't known. It's possible, March and others say, that many of these patients are already suicidal; the SSRIs may supply them with the impulsivity to act.

    King doubts that this explanation covers all cases, however. In youngsters with OCD, suicidality is not pervasive. In his study, “the notion that these were just depressed kids” who were disinhibited by the drug “doesn't make a lot of sense,” he says.

    Drug benefit?

    Teen suicide rates for males (top) and females declined after the introduction of Prozac.

    SOURCE: CDC

    Researchers have sought a biological explanation for the adverse effects in children, but so far with little success. SSRIs boost serotonin levels, but how this affects the developing brain isn't well understood. Serotonin facilitates nerve signal transmission, modulating mood, sleep, appetite, and a variety of other brain functions. Depression may occur when too little serotonin is available in nerve synapses. SSRIs block the “reuptake,” or reabsorption, of serotonin after it has been released, leaving more of it available in the synapse.

    In chemical structure, SSRIs differ subtly, but the clinical implications are poorly understood. Paxil has a short half-life in the body, for example, just 8 or 9 hours compared with 2 to 7 days for Prozac. It's not clear how that affects young patients, says Wilens, or whether SSRIs differ in the regions of the brain they affect: “We're just now developing the technology to look at that.” David Rosenberg, chief of child psychiatry at Wayne State University in Detroit, Michigan, is conducting some of the first imaging studies to see how Paxil, Prozac, and Zoloft each affect certain brain structures in children and adolescents—and whether youngsters who start out with particular brain patterns fare better on the drugs.

    The biological picture is even murkier when it comes to explaining why SSRIs might cause suicidality. Teicher believes that the doses used in some studies—including the pediatric studies of Paxil that have come under scrutiny—were too high and may, paradoxically, have had the opposite of their intended effect. High doses, he suggests, may desensitize serotonin receptors, diminishing serotonin neurotransmission and worsening depression. Teicher and others also think that many children who react badly to SSRIs are suffering from bipolar disorder rather than depression. In patients like these—who are notoriously hard to diagnose because they often experience several depressive episodes before a manic one—SSRIs can precipitate mania. And this may lead to suicidal behavior, says Vermont's Fassler. Many child psychiatrists therefore favor evaluating patients at length before prescribing an SSRI, to make sure they are not bipolar, and starting treatment at very low doses.

    “Do we really know how to diagnose depression in kids?” asks Bill Potter, vice president of clinical neuroscience at Merck Research Labs outside Philadelphia. “The truth is, no. We don't even know that the people we are treating with childhood depression have the same illness as the people we are treating with adult depression.”

    Balancing risks

    Psychiatrists say they must weigh the risks and benefits of SSRIs in treating very sick children and adolescents and choose the course that seems most likely to help. The threat of suicide is ever-present: Two thousand teenagers take their lives every year in the United States, and 2 million make attempts. Many psychiatrists are convinced that drug therapy offers the best hope of reducing that toll, despite a small associated risk that some patients may respond poorly. If this were childhood cancer, says John Mann, a psychiatrist at Columbia University, the public wouldn't think twice about giving a drug that may have a small chance of causing suicidal thinking and behavior.

    But as the SSRI controversy has ballooned over the last year, the risk-benefit calculus has become knottier. First, unpublished studies of Paxil released by GSK suggest that this therapy didn't consistently help depressed children any more than a placebo did. The one published pediatric study of Paxil had shown the opposite.

    The concerns have also prompted regulators to reanalyze unpublished data for other SSRIs. In April, a panel of experts commissioned to advise the U.K. government found that unpublished data suggested that four of five SSRIs were unlikely to benefit children—sometimes in contrast to what published data implied. The British team expects to publish its final recommendations in May 2005.

    FDA, meanwhile, has asked a team at Columbia to reclassify more than 400 adverse events from 25 pediatric trials of nine different drugs. Columbia received a mishmash of reports describing children who slapped themselves, stabbed themselves with pencils, held pocketknives to their necks, or otherwise harmed or threatened themselves. The team is charged with assessing which events were suicidal as opposed to merely self-destructive.

    Shaffer, who recused himself from the review effort in February after being criticized at an FDA hearing for his support of antidepressants, is pessimistic: “I don't know that [the review] will clarify anything,” he says.

    Indeed, this may be the one point on which SSRI supporters and antagonists agree. “The material that's gone to Columbia is worthless,” says David Healy, a psychiatrist at the University of Wales College of Medicine in the U.K., who has long warned about the suicide risk he believes is associated with SSRIs. FDA officials, however, say they're convinced that the reclassification will help settle the issue once and for all.

  12. HIV/AIDS

    International AIDS Meeting Finds Global Commitment Lacking

    1. Jon Cohen

    “Access for All” was the theme when nearly 20,000 people met in Bangkok for the biennial stocktaking on the global epidemic, but that goal is still remote

    BANGKOK—The XV International AIDS Conference got off to a rocky start here on 11 July. After a series of uplifting speeches about the need for bold leadership to confront a disease that is killing 8000 people a day, the vast arena went dark, thousands of delegates turned on flashlights distributed to them, and United Nations Secretary-General Kofi Annan and Thai Prime Minister Thaksin Shinawatra lit a ceremonial candle to commemorate those who have died from the disease. It was a moving reminder of the ultimate focus of this weeklong event. But when the lights came back on, droves of dignitaries and celebrities left the arena—and thousands of delegates followed them. The problem was, the ceremony wasn't over. Two epidemiologists scheduled to speak faced rows of empty seats. They ceded their time to Paisan (Tan-Ud) Suwannawong, an HIV-infected representative from the Thai Drug Users' Network, who, choking back tears, spoke to the few hundred people who remained.

    When the biennial meeting got down to business the next day, however, it quickly found its footing. Once again, it lived up to its reputation as a potent brew of basic and clinical researchers, policymakers, health care workers, drugmakers, community representatives, activists, and journalists. The first AIDS conference hosted by a developing country in Asia, it drew the largest crowd ever, with nearly 20,000 participants. “It went beyond our expectations,” said Sombat Thanprasertsuk, who heads the AIDS program for the Thai Ministry of Public Health. David Cooper, an Australian AIDS researcher who co-chaired the conference's scientific committee, said the location was “incredibly important.” Many Asian scientists and clinicians who attended can translate what they learn into efforts that could have far-reaching effects, Cooper said. “The infrastructure is better than in Africa,” he noted, “and doing prevention science and clinical science here will help fuel advances elsewhere.”

    In keeping with tradition, politics overshadowed science, and protests—directed largely at the Bush Administration's policies and its decision to strictly limit the number of government scientists who could attend the meeting—flared all week long. “Access for All,” the conference theme, was a particularly contentious topic: The delivery of effective treatment and prevention to the people most in need has moved at a snail's pace since the last international AIDS conference, held in Barcelona 2 years ago. In one of several apologies voiced by public health officials, Jim Yong Kim, head of the HIV/AIDS department at the World Health Organization (WHO), said, “We have failed miserably to do enough in the precious time that has passed since Barcelona.”

    Spreading the message. The XV International AIDS Conference was the first of these gatherings to be held in a developing Asian country.

    CREDIT: J. COHEN/SCIENCE

    A WHO report released at the conference estimated that a mere 440,000 of the 5.5 million people in developing countries who most need anti-HIV drugs now receive them. WHO also contributed to a second report, by the Futures Group, an organization headquartered in Washington, D.C., which said that few people in developing countries who face the greatest risk of HIV infection receive key prevention services: Only 3.6% of injecting drug users (IDUs) had access to “harm-reduction” strategies such as needle exchange and methadone; 9 billion more condoms are needed each year to curb sexual transmission; and only 3% of HIV-infected pregnant women received drugs to prevent transmission to their infants.

    Funding remains a major roadblock. The Joint United Nations Programme on HIV/AIDS (UNAIDS), in yet another new report, estimated that the world needs $12 billion next year to comprehensively address the global epidemic—but commitments are well short of that amount (see graph). Just as loudly as the delegates urged wealthy countries to invest more money, they debated how best to distribute the funds that do exist.

    For many delegates, the 2-year-old Global Fund to Fight AIDS, Tuberculosis, and Malaria—which makes money available to any country in need—emerged as the favored mechanism. Many praised its transparency and grassroots approach, which requires countries to organize various stakeholders and spell out their needs and treatment and prevention plans. “I would put my money in the Global Fund,” said Dutch Princess Mabel van Oranje, an official with the Open Society Institute, who presented a critical review of various large funders.

    At the meeting, the European Union announced that it will commit another $52 million to the Global Fund, and the Bill & Melinda Gates Foundation added another $50 million. The fund's war chest now stands at nearly $5.5 billion, with the United States' $1 billion contribution the largest in total dollars (although not in terms of the share of gross national product).

    Promises, promises. Treatment remains a distant dream for most of the world's poor.

    CREDIT: J. COHEN/SCIENCE

    In spite of that contribution, the Bush Administration came under heavy fire for favoring its own bilateral program, the President's Emergency Plan for AIDS Relief (PEPFAR). PEPFAR plans to distribute $15 billion over 5 years to 15 countries—15 times the U.S. commitment to the Global Fund. Critics note that PEPFAR will benefit fewer countries, and many worry that, as van Oranje gently put it, “it seems slightly driven by ideology rather than the reality on the ground.”

    In particular, anti-HIV drugs purchased with PEPFAR money must be approved by the U.S. Food and Drug Administration; currently, none of the cheap, generic drugs that meet WHO's standards has received FDA approval. Critics charge that this means money will be wasted on brand-name drugs. They also complain that PEPFAR places limitations on condom distribution programs and doesn't support needle exchange, which is particularly relevant in Vietnam, a PEPFAR recipient whose epidemic is primarily driven by IDUs. “There is room for PEPFAR,” said Stephen Lewis, the U.N. special envoy on HIV/AIDS in Africa. “But I do have a deep concern when people fail to see the Global Fund is the centerpiece.”

    The PEPFAR versus Global Fund fracas came to a full boil at a talk by Ambassador Randall Tobias, a former CEO of Eli Lilly who now serves as the Bush Administration's global AIDS coordinator. A small group of AIDS activists, holding signs saying “He's Lying,” staged a noisy demonstration when Tobias took the podium. They attempted to hand him an oversized check made out to the big pharmaceutical companies they contend the Administration is attempting to protect. Tobias returned to his seat, and for several tense minutes, the activists yelled at him and people in the audience yelled at them. In the end, the co-chairs of the session, Joep Lange of the University of Amsterdam and Helene Gayle of the Gates Foundation, persuaded the activists to sit down quietly in front of the stage, and Tobias gave his talk.

    In a speech larded with Reaganesque references to suffering HIV-infected people he had met, Tobias stressed that the Administration supported ABC: a popular prevention acronym for abstinence, be faithful, and condoms. “Condoms work,” said Tobias. “I want to get something straight about the U.S. position on prevention, because there seems to be a lot of confusion and misinformation. Preventing AIDS is not a multiple-choice test; there is no one right answer to preventing the spread of this epidemic.” He went on to single out PEPFAR plans in Vietnam but sidestepped the issue of needle exchange, only saying that the Administration will be “exploring means to support drug abuse prevention and treatment.” He also apologized for the past behavior of the developed world—explicitly including the United States. “In the past, we in the developed world displayed ignorance, or even apathy, about the global dimensions and intricacies of the AIDS crisis,” he said. “Over time, I believe awareness grew and apathy turned to empathy.”

    Mind the gap. By 2007, UNAIDS estimates a 50% shortfall in the resources needed to provide treatment and prevention worldwide.

    SOURCE: UNAIDS

    AIDS researcher Anthony Fauci, director of the U.S. National Institute of Allergy and Infectious Diseases, later said he was puzzled by the attacks on the Bush Administration. “I am astounded,” he said. “All of a sudden, $15 billion over 5 years is not a good thing. I think it's a tremendous thing. It's going to save a lot of lives.” PEPFAR, he argued, represents billions of new dollars that Congress might not otherwise allocate. He also stressed that the Administration is encouraging makers of generic drugs to apply for licenses and will put the approval process on a fast track.

    The closing ceremony initially threatened to compound the problems of the opening session. A stage show that made repeated religious references—and even had nurses pushing supposed AIDS patients in wheelchairs amid dry-ice smoke—drew boos and cries of “shame.” But what could have been another disaster was averted. Thai Minister of Public Health Sudarat Keyuraphan apologized to the speakers who had been slighted at the tail end of the opening ceremony, and Paisan was invited to speak again. Former South African president Nelson Mandela, who took the stage to a standing ovation, made an impassioned plea for “renewed commitment of leaders.” His wife, Graça Machel, the former first lady of Mozambique, apologized for the failure of leadership over the last 20 years of the epidemic. “Bangkok has to be the end of promises made and promises broken,” she said.

    Peter Piot of UNAIDS closed the ceremony with a plea for an end to the “fragmentation” of the effort to treat and prevent HIV infection. “It is now our collective responsibility to make the money work for people,” said Piot. “The way we use this opportunity has tremendous implications for the future. We will not get this chance again.”

    Toronto, Canada, will host the next international meeting in 2006. At current rates, HIV will infect 10 million more people and another 6 million will die of AIDS in the intervening 2 years.
