News this Week

Science  06 Feb 2004:
Vol. 303, Issue 5659, pp. 740

    New Chemical Elements Probe the Shoals of Stability

    1. Adrian Cho*
    1. With additional reporting by Daniel Clery.

    Nuclear scientists from Russia and the United States have penciled in two new entries on the periodic table. The new superheavy elements disintegrate within a second or so, but their fleeting existence suggests that an “island” of relatively stable nuclei lurks just beyond experimenters' grasp. “Our interest is not just to find one more element,” says Yuri Oganessian of the Joint Institute for Nuclear Research (JINR) in Dubna, Russia. “The key point is to conclude that the island of stability exists.”

    Researchers have long known that some atomic nuclei hold together better than others. The protons and neutrons within a nucleus nestle into a series of quantum-mechanical shells, and certain “magic numbers” of protons and neutrons fill the shells and confer additional stability. For example, 82 is a magic number for protons, as is 126 for neutrons. Bismuth, whose nuclei contain 83 protons and 126 neutrons, is the heaviest element that doesn't undergo radioactive decay.

    Theorists suspect that the next magic number for neutrons is 184 and the next magic number for protons may be 114. So superheavy nuclei containing nearly 114 protons and 184 neutrons should be relatively stable. Instead of instantly falling to pieces in a process called spontaneous fission, they ought to disintegrate more slowly by spitting out alpha particles, which consist of two neutrons and two protons. If dry land signifies absolute stability, nuclei near the jumbo magic numbers are still a bit wet, says Kenton Moody, a nuclear chemist at Lawrence Livermore National Laboratory in California. “The undersea mountain of stability might be a more appropriate name,” Moody says.

    Island or seamount, in recent years nuclear physicists and chemists have been wading toward their goal by smashing together heavy nuclei. Using JINR's ion accelerator, the Livermore and JINR scientists produced 114- and 116-proton elements that lasted many times longer than their slightly lighter cousins (Science, 22 January 1999, p. 474). Now the team has created four nuclei of element 115 by blasting calcium-48 ions, which have 20 protons and 28 neutrons, into atoms of americium-243, which have 95 protons and 148 neutrons, the team reported this week in Physical Review C. In a fraction of a second, the nuclei decayed into another new element, number 113, which appears to last as long as a second.
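    A quick bit of bookkeeping, not spelled out in the report itself, shows where those numbers come from: fusing the two nuclei simply adds their protons and neutrons.

    \[
    {}^{48}_{20}\mathrm{Ca} \;+\; {}^{243}_{95}\mathrm{Am} \;\longrightarrow\; {}^{291}_{115}\mathrm{X}^{*},
    \qquad 20 + 95 = 115 \text{ protons}, \qquad 28 + 148 = 176 \text{ neutrons}.
    \]

    A few neutrons presumably boil off before the new nucleus is spotted, a standard feature of such fusion reactions, which is why the surviving element 115 nuclei still sit roughly a dozen neutrons below the magic 184, as noted below.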

    Such superheavy nuclei should help researchers develop a more unified theory of the nucleus, says Richard Casten, a nuclear physicist at Yale University in New Haven, Connecticut. The nuclei still fall about a dozen neutrons short of the magic 184. To reach that number and the center of the island, researchers need a facility that can accelerate rare unstable nuclei, such as the proposed Rare Isotope Accelerator (RIA), says Konrad Gelbke, director of the National Superconducting Cyclotron Laboratory at Michigan State University. “If you will ever do it,” Gelbke says, “RIA is your only shot.”

    In the meantime, the team from Dubna and Livermore will continue to feel around the edges of the island of stability, says Mark Stoyer, a nuclear chemist at Livermore, to see if it's “a tiny Gilligan's Island or as large as Cuba.” Either way, all agree the water is getting shallower.


    Researchers Issue an Ultimatum

    1. Barbara Casassus*
    1. Barbara Casassus is a writer in Paris.

    PARIS—The directors of hundreds of French research centers say they will stop performing administrative duties on 9 March if the government does not meet demands to reverse budget cuts and restore jobs. Researchers in France's government-funded research agencies have signed a petition, launched on the Internet on 7 January, that already boasts more than 37,500 signatures. The petition calls for the government to pay the agencies what it still owes them for 2002, step up recruitment of young researchers, and hold a conference on the future of French research.

    Relations between researchers and the government have been deteriorating since deep budget cuts were imposed last year. Biologist Alain Trautmann of the Cochin Institute in Paris, a spokesperson for the petitioners, says: “I feel as though we are standing on the edge of an abyss. If we don't change direction completely, we will fall in.”

    The protesters are demanding reinstatement of some 550 permanent posts for young researchers and engineers that were converted into 3- to 5-year contracts this year, and reversal of the government's decision to create no new posts for university lecturers or professors. If the directors carry out their threat on 9 March, labs could close, because staff members are not covered by accident insurance if they have no boss.

    President Jacques Chirac and his government have tried to placate the researchers. The government acknowledges that it owes research agencies $240 million for 2002 and has promised to come up with the money by next year. It has also promised legislation to reform the research system, an audit by 20 February to determine why labs and the government disagree on budget figures, and no spending cuts this year. But researchers are not satisfied. Last week up to 10,000 marched through Paris in protest. Then in a further concession late last week, French Academy of Sciences President Étienne-Émile Baulieu told Science that research minister Claudie Haigneré had endorsed the idea of a conference and agreed that it be organized by researchers.


    Sex Studies 'Properly' Approved

    1. Jocelyn Kaiser

    There is nothing improper in the National Institutes of Health's support for studies of human sexuality, NIH Director Elias Zerhouni has informed Congress. In a 26 January letter to congressional leaders, Zerhouni rejected criticisms from an independent conservative group, which challenged NIH grants last year and asked Congress to investigate. Many scientific societies saw this as an ideological attack on peer-reviewed science and objected; last week, several said they were reassured by Zerhouni's response.

    NIH began reviewing its portfolio after the House nearly eliminated four sex-related research grants in July 2003. Several lawmakers raised questions about more grants at a 2 October hearing. A staffer at the House Energy and Commerce Committee then forwarded to NIH a list of 198 studies compiled by the Traditional Values Coalition, a conservative advocacy group (Science, 31 October 2003, p. 758). Coalition director Andrea Lafferty called the research, on topics ranging from AIDS risk among drug users to teen-pregnancy prevention, “smarmy” and a waste of taxpayers' money.

    Not so, concludes Elias Zerhouni in a two-page letter sent to Commerce Committee chair Billy Tauzin (R-LA) and seven other House and Senate members. The “peer-review process … worked properly,” and “I fully support NIH's continued investment in research on human sexuality,” Zerhouni wrote in a letter he drafted himself, according to an NIH spokesperson.

    Solid science.

    Elias Zerhouni has defended NIH's research on human sexuality.


    An attached six-page summary explains three widely reported grants. A study of prostitutes and truck drivers may help prevent heterosexual transmission of HIV, the letter says. A conference on sexual functioning could shed light on dysfunction, “a major cause of divorce.” And understanding the sexual behavior of older men “has important implications for families.” The summary does not explicitly discuss other controversial studies—for example, on transgendered Native Americans and Asian prostitutes in California—but says, for example, that sex worker studies “are certainly not promoting an illegal activity”; they are “trying to stop the devastation it can unleash” by spreading disease.

    “Some of this research has unseemly titles because, frankly, the research involves looking at difficult, albeit real, components of the human condition,” the summary concludes. It also explains the peer-review process, saying that all the grants are “scientifically justified,” received fundable quality scores, and are “connected to clear public health priorities.”

    Advocates of biomedical research praise the letter. “We are pleased that NIH has reiterated its support for sound science” involving “significant” public health issues, says Karen Studwell, a policy analyst for the American Psychological Association. (APA, AAAS—publisher of Science—and other groups issued statements defending NIH's peer-review process.) But the conservative critics aren't satisfied. Zerhouni's letter “was unconvincing,” says the coalition's Lafferty. “This is just an attempt to disguise all of this as science.”

    Some lawmakers may not be placated, either. Although Tauzin's spokesperson declined to comment, Commerce Committee member Representative Mark Souder (R-IN), who questioned a study of women's reaction to pornography, called the letter “an unbelievable rationalization.” Observers expect the sex grants to come up later this year during congressional hearings on NIH's 2005 budget appropriation.


    New-Style Matter Opens Cool Middle Ground

    1. Charles Seife

    Once more, scientists have ventured into the frontier of very cold matter. The creation of a new state of matter, known as a Fermi condensate, promises an exciting new way to understand the mysterious physics of high-temperature superconductors.

    Last November, two teams of researchers announced that they had created a type of matter known as a Bose-Einstein condensate (BEC)—a cluster of particles that acts like a single, enormous quantum-mechanical object—out of atoms belonging to a class of particles known as fermions (Science, 14 November 2003, p. 1129). A fermion, by nature, cannot act just like its neighbors. So it was a major achievement to get fermions to pair up, condense into a BEC, and march in quantum-mechanical lockstep.

    Last week one of the two groups, led by Deborah Jin of the Joint Institute for Laboratory Astrophysics (JILA) in Boulder, Colorado, announced at a press conference that it has taken the next step. As in the earlier experiment, the team used lasers and magnetic fields to cool a potassium gas to a few hundred millionths of a degree above absolute zero. An additional magnetic field adjusted how much the potassium atoms attracted one another.

    Big chill.

    Deborah Jin (left), Markus Greiner, and Cindy Regal coaxed supercold atoms to condense.


    In the BEC experiment, Jin and her colleagues used these supplementary magnetic fields to bind the fermions into loose “molecules” that could condense into a BEC. In the present experiment, however, Jin's group tuned the fields so that the potassium atoms repel one another slightly. “The binding cannot occur,” says Jin. Nevertheless, the fermions still pair up in a sense, each partner affecting its counterpart's motion enough that the fermions can still condense.

    This condensation is more similar to what happens in a superconductor, where repulsive electrons form “Cooper pairs,” than what happens in a BEC. Although BECs and superconductors are related, their behavior is explained by theories based on different assumptions. The region where they overlap—precisely where this experiment probes—is murky. “It's a very new regime; it's someplace where the theory isn't clear,” says Jin.

    Eric Cornell, a leader of a different JILA cold-temperature physics group, says that scientists used to think that a condensate was either BEC-like or purely superconductor-like. “There was no compromise,” he says. But Jin's work proves this idea false, he says. Indeed, Jin's preliminary data seem to show that there's a gradual change from the BEC to the superconducting regime. “We don't see a sudden change in the behavior of the system,” she says.

    Jin believes that the work may illuminate the physics behind high-temperature superconductors: materials that conduct electricity with no resistance at liquid-nitrogen temperatures rather than the much colder liquid-helium temperatures of traditional superconductors. That understanding, she adds, could someday lead to practical applications. In the meantime, says Cornell, the experiment is “a technological and scientific tour de force.”


    India, Pakistan Hold First Science Talks

    1. Pallava Bagla

    NEW DELHI—The science ministers of India and Pakistan met last week for the first time in the two countries' 57-year history. The meeting anticipated a “composite dialogue” later this month between the leaders of the two nuclear powers, which have fought three wars and threaten each other along a tense border.

    Researchers in areas such as biotechnology, nanotechnology, natural product chemistry, and information technology are likely to be the first to feel a warming in relations, as the two sides agreed to set up expert panels to explore these areas. “I am here to build bridges” and encourage scientists to join hands, Pakistan's federal minister for science, technology, and higher education, Atta-ur-Rahman, told Science. India's Murli Manohar Joshi, minister for human resource development, science and technology, and ocean development, was said to be more circumspect about the 45-minute meeting, although a government statement later pointed to “opportunities for expanding bilateral scientific cooperation.”

    Building bridges.

    Pakistan's Atta-ur-Rahman (left) and India's Murli Manohar Joshi after their historic meeting.


    In addition to opening doors for working scientists, such moves could also improve the professional status of many who live on both sides of the border, such as Shazia Jamshed, a pharmacologist at the University of Karachi who is married to Indian organic chemist M. J. Siddiqui. Jamshed came to India in 1999 and was accepted into a doctoral chemistry program at the University of Delhi, but the foreign office refused to grant her permission to enroll. Now working as a medical transcriptionist, Jamshed hopes that the fruit of ongoing scientific talks might “change the suspicious mindset of the bureaucracy” and allow her to pursue her career.

    The director of India's well-regarded Central Drug Research Institute in Lucknow is hoping for a more immediate payoff. The institute has been pursuing high-gum-yielding varieties of a wild plant, Commiphora mukul, that yields the popular lipid- and cholesterol-lowering drug gugulip. “But the best plants grow in Pakistan,” says institute director Chhitar Mal Gupta. In the past, he notes, the institute has worked jointly with the HEJ Research Institute of Chemistry in Karachi on research involving medicinal plants and natural products chemistry.


    Court Tells Nichia to Pay Blue LED Inventor $180 Million

    1. Dennis Normile

    TOKYO—Japanese courts last week delivered stunning monetary awards to two corporate researchers who claimed that they had received inadequate compensation for inventions produced for their employers. The plaintiffs hailed the victories as progress on the road to better treatment for corporate scientists. But others warned that such awards could put companies on the road to ruin by undermining the payoff from a breakthrough discovery.

    In the most eye-popping decision, handed down on 30 January, the Tokyo District Court awarded $180 million to materials scientist Shuji Nakamura for his development of a blue light-emitting diode (LED) while employed by Nichia Corp. of Anan, Tokushima Prefecture. That's a million times more than the $180 that the company originally paid Nakamura, now a professor at the University of California, Santa Barbara, for the rights to a key LED patent. In a separate case decided 1 day earlier, the Tokyo High Court ordered Hitachi Ltd. of Tokyo to pay $1.5 million to former company researcher Seiji Yonezawa for three key technologies that are at the heart of CD players and other optical disk devices.

    Money talks.

    Shuji Nakamura (left) and his lawyer speak to the media after a court awarded him $180 million.


    Katsuya Tamai, a professor of intellectual property law at the University of Tokyo, says the decisions reflect the ambiguity in existing patent laws. Currently, patents are given to individuals, who may cede rights to their employers in exchange for “reasonable” compensation. The two recent cases hinged on the definition of that term, which is not spelled out in the law. Although a growing number of researchers have won suits that allege they have been treated unfairly by their companies, until now the awards have been small.

    What may have tipped the scales in the Nakamura case is the fact that LEDs are a multibillion-dollar industry. Blue LEDs can be combined with previously developed red and green LEDs in giant outdoor displays and in white-light devices that could supplant conventional light bulbs. The court determined that Nichia had earned more than $1.1 billion in profits from the technology since it was commercialized in 1993. In the Hitachi case, the appellate court took the unusual step of quadrupling the award of a lower court, which had ruled that Yonezawa was entitled to a slice of the company's domestic licensing revenues but not its foreign activities.

    Nichia, in a statement posted on its Web page, criticized the court decision for an “excessive” interpretation of the provisions of the patent law. It has already appealed the ruling. (Hitachi is also planning to appeal to the country's highest court.) In the meantime, Japan's leading daily economics newspaper, Nihon Keizai Shimbun, called the rulings “out of touch with the realities [of business] in Japan.” It warned that a rash of such verdicts could “strip profits from many technology-oriented companies.” Tamai says he worries that more large awards could pressure corporations to move their laboratories offshore. What's needed, he says, is reform of the patent law.

    Not surprisingly, Nakamura offers a more positive take. Larger awards, he told a press conference, will create financial incentives for scientists “that will have everyone striving to make discoveries. … I think this will fuel the dreams of young people interested in science.”


    Development Gene May Give Nerve Cells a Sense of Identity

    1. Elizabeth Pennisi

    Under the microscope, it's hard to tell brain cells apart. But similarities can be deceiving: Neurons acquire unique identities during development, each finding its correct place in the brain and connecting with the appropriate neighbors. A new study suggests that, in fruit flies at least, a gene called Dscam has the flexibility to endow specific groups of neurons, even individual cells, with “Hello, my name is” tags. The gene comes in 38,000 flavors.

    Like many genes, Dscam, which stands for Down syndrome cell adhesion molecule, consists of protein-coding regions called exons interspersed with noncoding regions. It has more than 100 exons, some of which—or even fragments of which—can become active separately in a process called alternative splicing. Each combination of expressed gene segments creates a different protein. (In humans, Dscam lacks the variability seen in its insect versions.)


    Combinations of three Dscam exons in these neurons' photoreceptors (red) make possible 19,000 individual identities.


    Although immune system genes were known to mix and match their exons, researchers were initially surprised to find another gene in which so many combinations were possible, says evolutionary developmental biologist Andrew Chess of the Massachusetts Institute of Technology (MIT). To track the variations, the team used DNA microarrays, which reveal which genes are most active in a given sample. Chess, MIT's Guilherme Neves, and colleagues assessed the activity of about 19,000 Dscam variants possible from the three most readily divisible exons. They exposed the microarray to genetic material belonging to fruit fly embryos, larvae of different stages, and adults.
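    The arithmetic behind those headline numbers is easy to reconstruct, assuming the commonly cited cluster sizes for the fly gene (the text above does not give them): the three most variable exon clusters come in 12, 48, and 33 alternative versions, and a fourth comes in 2.

    \[
    12 \times 48 \times 33 = 19{,}008 \approx 19{,}000, \qquad 19{,}008 \times 2 = 38{,}016 \approx 38{,}000.
    \]

    The first product matches the roughly 19,000 variants assayed on the microarray; the second matches the 38,000 flavors quoted at the top of the story.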

    Different combinations of Dscam variants were active at different ages, the researchers report online this week in Nature Genetics. They also found variety in Dscam patterns among individual neurons. Further analyses showed that each variant formed independently of the others and that the patterns of gene expression were somewhat random. The variability “may help each cell know it's different from its neighbor,” Chess proposes.

    “This is the first evidence for the possible presence of distinct Dscam molecules in individual [nerve] cells,” says Tzumin Lee, a cell biologist at the University of Illinois, Urbana-Champaign. Next, the team hopes to demonstrate “whether and how the presence of distinct Dscam molecules helps provide for the huge diversity and specificity in the central nervous system.”


    FDA Weighs Suicide Risk in Children on Antidepressants

    1. Constance Holden

    Amid growing concerns that some antidepressants may make some children suicidal, the U.S. Food and Drug Administration (FDA) acknowledged a possible risk in a public hearing this week. Members of its drug advisory panel urged that, pending more information, stronger warnings should be put on labels for selective serotonin reuptake inhibitors (SSRIs). There's a “signal” in the data from some clinical trials of children taking SSRIs, officials said at the hearing in the crowded ballroom of a hotel in Bethesda, Maryland. The agency put forth plans for a thorough review of research to find out what, if anything, that signal means.

    SSRIs have been generally accepted as the only antidepressants safe and effective for children and adolescents. Gianna Rigoni of FDA said that in 2002, 10.8 million prescriptions were written for those under 18. Leading the pack are paroxetine (Paxil) and sertraline (Zoloft), which in recent years have surged ahead of pioneer fluoxetine (Prozac).

    But with popularity has come an increasing number of reports of adverse effects. Critics claim that SSRIs lead to extreme agitation, hostility, suicide, and homicide. Many of the speakers at the public meeting shared heart-stopping stories about young family members who had killed themselves.

    Defenders of the drugs counter that there's no good evidence that they are harmful. Last week, the American College of Neuropsychopharmacology announced the results of a review of 15 clinical trials of paroxetine and six other SSRIs covering more than 2000 youths. There were no suicides. And the difference between drug and placebo groups in “percent of youth with suicidal behavior or ideation” was not statistically significant in any trial. Finally, the review noted that since prescribing has become widespread, there has been a dramatic drop—averaging 33%—in rates of youth suicide in 15 countries.

    How to help?

    SSRI drugs may spur suicidal tendencies in a small fraction of children.


    Those challenging the drugs have different numbers. Foremost among SSRI critics is David Healy, director of the North Wales Department of Psychological Medicine in Bangor, U.K. He contends that companies have kept the lid on research that fails to show efficacy for their drugs. He says he has had access to the files of all the trials conducted on Paxil by GlaxoSmithKline and that the rate of suicidality among the SSRI takers is three times that for the placebo group.

    Healy estimated that SSRIs lead to 100 excess suicides each year. But Columbia University psychiatrist John Mann counters that far more lives would be saved with more SSRI use. The “vast majority” of the 4000 young people who kill themselves each year in the United States are not taking antidepressants, he says, and SSRIs might ease their misery.

    Last year, the United Kingdom's Medicines and Healthcare Products Regulatory Agency banned the prescribing of all SSRIs except fluoxetine to people under age 18. Some want FDA to follow suit. It has been gradually tightening surveillance of SSRIs since last spring, reported Thomas Laughren of FDA's Division of Neuropharmacological Drug Products. In October, it warned doctors that studies have not ruled out increases in suicidal thinking among youth given SSRIs and reminded them that only fluoxetine is approved for use on children.

    Some SSRI critics claim that the United Kingdom acted on the basis of data that have been withheld from public view. But Laughren told those at the hearing that everyone probably had the same information. “It doesn't appear that the U.K. did any analysis other than what the drug companies did,” he said. The agency is asking a group at Columbia University to come up with a detailed classification of suicidal and self-injurious behaviors that will help it reanalyze 25 clinical trials with SSRIs. The studies cover not only depression but also anxiety disorders, including obsessive-compulsive disorder, as well as attention deficit hyperactivity disorder. The agency will reassess the debate at another meeting in late summer.


    Hubble Huggers Get a Reprieve

    1. Andrew Lawler

    Under pressure from a senior U.S. senator, a bevy of astronomers, and hordes of interested amateurs, NASA is taking another look at its recent decision not to send another shuttle flight to service and upgrade the Hubble Space Telescope. Harold Gehman, the retired admiral who led the investigation last year into the Columbia shuttle disaster, will conduct his own review of the agency's decision to seal Hubble's fate.

    Last week's reversal marks a victory for Senator Barbara Mikulski (D-MD), who criticized NASA chief Sean O'Keefe for his abrupt decision. The ranking member of the panel that funds NASA, Mikulski received a standing ovation from staff at the Space Telescope Science Institute in her hometown of Baltimore when she visited it shortly after O'Keefe's announcement. But Mikulski warned the enthusiastic crowd not to assume that Hubble has been rescued.

    Not so fast.

    Senator Barbara Mikulski asked NASA to review its decision.


    O'Keefe has concluded that a servicing mission would put astronauts in jeopardy, because the telescope is in a different orbit from the international space station and there would be no safe haven in the event of trouble. In order to conduct such a mission safely, NASA officials say, a second shuttle would have to be readied on the pad and a second crew trained for a possible rescue. That would be a complicated and expensive venture, and NASA is loath to devote time to anything that would divert resources from the president's goal to complete the orbiting laboratory by 2010. A shuttle mission to Hubble was planned for as early as 2006.

    But Mikulski believes Hubble shouldn't be written off so readily. In a 21 January letter to O'Keefe, the veteran legislator asserted that Hubble “is the most successful NASA program since Apollo … and cannot be terminated prematurely with the stroke of a pen.” O'Keefe proposed to her that Gehman conduct an independent review of the matter, and the senator agreed. “She respects Admiral Gehman and will respect his opinion,” says her press spokesperson Amy Hagovsky. Gehman's report on the Columbia accident did not take a stand on whether a Hubble mission would be too risky.

    Gehman's review is expected to be completed within 2 months.


    Reports to U.N. Propose Bigger Role for Science

    1. Jeffrey Mervis

    Two reports delivered this week to United Nations Secretary-General Kofi Annan make a strong pitch for developing nations to build up their scientific institutions if they hope to improve conditions in their countries. The two reports, written independently, both underscore the importance of improving universities, funding the best science through peer review, and providing government leaders with impartial technical advice. Whereas those elements are woven into the fabric of scientifically advanced nations, the reports note, they are often lacking in the rest of the world.

    “Too many poor countries see science as a luxury, something that's only for the rich,” says U.S. National Academy of Sciences President Bruce Alberts. “Even worse, some of them see it as a nuisance because it requires truth-telling.” Alberts is co-chair of the InterAcademy Council (IAC)—a group of 90 national science academies formed in 2001 to provide advice to governments around the world—which produced one of the reports.* The other, still in draft form, was written by a task force formed to help implement the goals of the U.N.'s 2000 Millennium Summit.

    The two reports were compiled by scientists, educators, and policymakers from around the world, and both maintain that countries cannot overcome myriad food, health, environmental, and other problems without help from the scientific community. The U.N. task force, for example, advocates “entrepreneurial universities” that would encourage faculty members to tackle these pressing problems at the same time that they pursue excellence in teaching and research. “Higher education is important, but the goal is not just churning out more graduates,” says Harvard University's Calestous Juma, who co-chaired the panel. “Universities also need to serve as incubators for business and incorporate those skills into the curriculum.” Adds Mamphela Ramphele of the World Bank, a former vice chancellor of the University of Cape Town, South Africa, and a panelist for the IAC report, “You need a higher-education community that wakes up to its responsibility to be a champion of reform.”

    The two reports also put considerable weight on the need for scientists to advise their governments on how best to allocate limited resources. Juma cites the need to reach the most senior officials who determine national policy, and Alberts talks about the value of “merit-based institutions” for all sectors, from dispensing wisdom to hiring faculty and funding the best research proposals. “It's not easy to get rid of the corruption and the cronyism,” Alberts says. “But the cost of not moving forward is to be left further behind.”

    Systemic reform is needed, he adds, beginning with each country's own scientific infrastructure. “We decided to focus on institution building as the best way to make a difference,” he says about the council's first-ever report. “There's all this terrific science out there on the Internet, for example. But it's no good to a developing nation without the talent and the mechanisms to use it.”


    2005 Budget Makes Flat a Virtue

    1. David Malakoff*
    1. With reporting by Yudhijit Bhattacharjee, Jocelyn Kaiser, Jeffrey Mervis, Charles Seife, and Erik Stokstad.

    Federal science managers are facing tough choices as nondefense programs get little if any increase under President Bush's proposed budget for the upcoming year

    As a doctoral student in ecology 3 decades ago, Paul Gilman learned all about environmental stress. Now, as head of the Environmental Protection Agency's (EPA's) research office, he's trying to adapt to a stressful fiscal environment: Under the Bush Administration's 2005 budget proposal released this week, EPA's R&D spending would drop significantly, forcing Gilman to find new ways to stretch his dollars.

    It's a skill many scientists may have to learn next year. Facing an expensive war in Iraq and soaring federal budget deficits, President George W. Bush on 2 February unveiled a $2.4 trillion budget blueprint that would barely increase the budgets of most major nondefense science agencies. Overall, government spending on R&D would rise 4.7%, to $132 billion, for the 2005 fiscal year that begins 1 October. Nearly three-quarters of the new money would go to defense-related programs, however. The Pentagon's technology programs would grow 7%, to nearly $70 billion, and the Department of Homeland Security's applied science efforts would get a 15% boost, to $1.2 billion.

    Nonsecurity science, in contrast, would rise by just 2.5% overall. Leading the pack is NASA, with a 5.6% rise, to $16.2 billion. Its science budget, however, would grow by only 0.5%, to $6.6 billion. Following are the National Science Foundation (NSF), slated for a 3% boost, to $5.7 billion, and the National Institutes of Health (NIH), with a 2.7% increase, to $28.8 billion. The Department of Energy's (DOE's) Office of Science, meanwhile, would remain flat at $3.4 billion.

    Number, please.

    White House officials John Marburger (right) and Marcus Peacock present the president's 2005 budget.


    Bush Administration officials say those numbers look pretty good considering that the White House held the entire discretionary, nondefense budget—the $489 billion piece of the spending pie that pays for everything from food safety to environmental protection—to just a 0.5% increase. “This is an Administration that's been good to R&D,” says White House science adviser John Marburger, adding that research spending is at record highs by several measures.

    Science advocates of all stripes aren't impressed, however. “I am very disappointed. … We just have to find a way to do better,” said House Science Committee chair Sherwood Boehlert (R-NY). “The nation must pull itself out of our deficit spiral, but we cannot do so by shortchanging research,” added Nils Hasselmo, president of the Association of American Universities in Washington, D.C. “It's looking pretty ugly,” sums up Tom Jones, a co-chair of the Coalition for National Security Research, which monitors Pentagon research budgets.

    Even NSF Director Rita Colwell, who as a presidential appointee is duty-bound to defend White House budgets, appeared to struggle with the numbers her agency has been dealt. “We're pleased to receive a 3% increase when many agencies are facing cuts,” she began her public briefing. But “I have no doubts that NSF merits” a 5-year doubling trajectory, she said, to $9.8 billion by 2007, called for in a 2002 law that is not binding on Congress. “And the $19 billion budget recommended [last week] by the National Science Board is fully justified and, frankly, necessary,” she concluded.

    Now, science backers are looking to Congress—which began examining the White House plan this week—to ease their pain. They may not find much relief, however. “With a $500 billion deficit [this year], and the bill for Iraq still due, it's going to be hard to find much cash,” predicts a veteran budget aide.

    One major focus for research advocates will be NIH's budget, which enjoyed double-digit increases from 1998 to 2003. “There's no question that this [year's request] is a slowdown,” said Tommy Thompson, Secretary of the Department of Health and Human Services, NIH's parent agency. But given NIH's “huge” recent increases, “this is one area where we felt we could tighten the belt,” he said.

    The request includes a 7.5% increase for biodefense research, bringing the total to $1.7 billion. Some funds would go to building 20 new biosafety level 3 labs. There is also $47 million to develop treatments to counter a nuclear or radiological attack.

    NIH's new Roadmap—initiatives that cut across NIH's 27 institutes and centers—would get $237 million. And an obesity initiative would get a 10% boost, to $440 million. But funding some of these priorities would come at the expense of new grants (see sidebar).

    NSF's overall 3% increase, to $5.75 billion, papers over reductions in the agency's budget that will displease many of its constituents. For example, its bread-and-butter research account is projected to go up by 5%, or $200 million. But $80 million of that is the remnant of a much bigger math and science education partnership program that the White House wants to move to the Department of Education (Science, 16 January, p. 295). Colwell says the new accounting reflects the fact that the agency “is now focused on other [education] programs.” With another $50 million of the increase going to beef up NSF's share of a $1 billion governmentwide nanotechnology initiative, most of the foundation's research directorates would receive only a 2% increase.

    Thin slice.

    Research receives 13.5% of the discretionary budget, which is dwarfed by mandatory spending programs.


    NSF's education programs overall take an 18% hit. In addition to the loss of the partnership program, there are sharp drops in programs serving K-12 teachers, minority institutions, museums, and states with few research-intensive universities. But NSF plans a 10% boost in the number of graduate fellows supported through three popular programs. And the agency's major new research facilities account would continue to grow, with money for three new projects: a high-energy physics experiment, a national ecological observatory network, and an upgrade of an ocean-drilling vessel.

    In a very constrained budget, the biggest surprise is a $75 million management initiative. The money, which represents nearly half of NSF's overall $167 million increase, would be spent on an improved, more secure computer system; better training; and more staff. “This is NSF's single greatest need this year,” says Colwell.

    NASA's rising budget is tied to the beginnings of a major reorganization aimed at implementing the White House's new plan for returning to the moon and going to Mars (Science, 30 January, p. 610). As part of the shakeup, the agency plans to accelerate construction of the James Webb Space Telescope, but other projects would be delayed.

    At DOE, the biggest news was Secretary Spencer Abraham's plan to merge two of its national laboratories, the Idaho National Engineering and Environmental Laboratory and Argonne-West, both near Idaho Falls. The goal is to create “the world's premier nuclear energy technology center within a decade,” Abraham said. The laboratory will concentrate on next-generation nuclear power plants, research on space nuclear power, and a new fuel cycle for nuclear plants.

    A flat research budget, however, spells trouble for DOE's three fusion energy research facilities. The trio—in New Jersey, Massachusetts, and California—would lose 4 weeks of run time, to 14 weeks, under a $264 million fusion budget. And neither the Brookhaven nor Berkeley national lab will get funding for support buildings that Office of Science head Ray Orbach says are “desperately needed.”

    Core science programs at the National Institute of Standards and Technology would get a 20% boost to $482 million, including nearly $60 million to build and renovate laboratories in Colorado and Maryland. But the White House once again calls for abolishing the Advanced Technology Program, which funnels tens of millions of dollars a year to companies for early-stage research. Congress has repeatedly rejected the idea.

    Meanwhile, Gilman faces a 12% cut, to $572 million, in EPA's research account. Administration officials say most of the decrease stems from removing congressionally ordered “earmarks.” But the request also cuts the agency's Science to Achieve Results (STAR) grants program by 34%, to $66 million, and a related STAR fellowship program by 40%, to $5.9 million. Gilman says the cuts are the result “of having to come to grips with other priorities,” such as developing a new generation of clean school buses.

    Outsiders give Gilman credit for doing a good job under strained budget circumstances. But Granger Morgan of Carnegie Mellon University in Pittsburgh, Pennsylvania, says that the Administration shouldn't be forcing its science managers to be so creative. “You can't argue that the country needs science-based regulation and then not make the investment,” he says.


    Highlights From the Budget

    1. Jocelyn Kaiser,
    2. David Malakoff,
    3. Erik Stokstad,
    4. Charles Seife

    New Grants Are Sickly Measure for NIH

    The biomedical community has long measured the health of the National Institutes of Health by the annual number of new and competing grants it funds. By that measure, NIH is about to flat-line.

    The pressure on new NIH grants results from the modest 2.7% increase the president has proposed. The bulk of the $764 million is committed to priorities such as biodefense and the cross-NIH Roadmap, as well as to 4-year grants already under way. “This is a difficult budget, and we're doing everything we can [so] it doesn't damage … the most important aspects of the budget,” says NIH Director Elias Zerhouni. That means institutes are scrambling to find ways to preserve the investigator-initiated grants that make up about 54% of NIH's overall budget.

    But unless Congress comes to the rescue, some pain seems inevitable. Although the overall number of NIH grants is expected to hit an all-time high of 39,986, the 2005 budget request would support just 10,393 new and competing grants, the same figure as in 2003. And that comes after a dip of 258 in this year's total, the result of NIH funding more grants than it had planned to support in 2003 and getting less money than expected in 2004 for research. Moreover, to help shore up grant numbers, NIH is squeezing grant size: The new batch will be only 1% bigger than this year's (continuing grants will grow by 1.9%), well below the projected inflation rate for biomedical research costs of about 3.5%.
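    A rough calculation, using only the percentages quoted above, shows what that squeeze means in real terms: a new grant that grows 1% in nominal dollars while research costs rise about 3.5% loses purchasing power.

    \[
    \frac{1.01}{1.035} - 1 \approx -0.024, \qquad \text{i.e., a real-terms cut of roughly } 2.4\% \text{ per new grant}.
    \]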

    Not for granted.

    NIH lobbyists fret about halt in growth of new awards.


    The silver lining in the dark NIH cloud is that the research community had expected even worse. The Federation of American Societies for Experimental Biology (FASEB) last month predicted that new grants would plummet to 9925 in 2005. But 2005 “really is a halt to the growth” of the previous half-decade, says FASEB's Howard Garrison. And success rates—the portion of submitted grant proposals that are funded—will dip to 27%, the first time since 1996 that it's been below 30%, FASEB notes. “The message will come out loud and clear to people who depend on NIH for grant support: The situation is becoming more difficult as the opportunities in biomedical research are increasing,” says Garrison.

    Reviews Play a Bigger PART of Budgeting

    When White House officials rolled out the Environmental Protection Agency's (EPA's) budget this week, they revealed some bad news for ecosystems research: A $132 million EPA program that funds an array of in-house and competitive grants to monitor and model ecosystems would lose $20 million.

    The program is one of 58 government research efforts that the White House Office of Management and Budget (OMB) put under the microscope last year in a somewhat controversial process called PART (Program Assessment Rating Tool). The exercise judges programs on relevance to the agency's mission, quality, and performance in an attempt to give taxpayers the most bang for their buck. But some science advocates worry that the reviews are simply window-dressing for politically driven budget decisions.

    In EPA's case, OMB reviewers concluded that the ecosystems program suffered from poor organization, unclear goals, and inadequate oversight. “We found the research was not well connected to [EPA's] program offices; it needed refocusing,” says OMB official Marcus Peacock. So the White House has proposed stripping $20 million from the program and giving the money to states to improve their water quality monitoring. EPA could get the funds back if it puts its house in order, says OMB.

    Against the flow.

    EPA wants to shift funds to water quality monitoring.


    Other “PART-ed” programs judged lacking—and slated to lose funding—include several applied research programs at the Department of Energy focused on oil and gas development. But a complete list from the Administration suggests that a poor PART doesn't always have budget consequences: An EPA research program on particulate air pollutants that hasn't “demonstrated results” has retained its funding—at least for now. The same goes for the space station, which OMB also judged to be poorly managed. Critics say the EPA decision typifies the Administration's disregard for environmental protection, and they are vowing to reverse the cut. The outcome will be closely watched, for OMB is planning to subject scores of other R&D programs to PART this year.

    Moon Rocks and Hard Knocks

    Last week, at the Super Bowl, the National Football League had a fake astronaut plant a flag on an artificial moon in tribute to the fallen astronauts aboard Columbia. If only it were that simple. The reality of sending humans back to the moon and then to Mars will cost many billions of dollars and require NASA to cut back many other projects. That's life in a budget world where flat has become a four-letter word.

    One project potentially under siege is the Beyond Einstein initiative, which will use satellites to study fundamental questions about cosmology, black holes, and dark energy. That sort of exploration doesn't quite fit in with the Administration's new initiative, so it is taking a back seat to other projects. NASA's comptroller, Steven Isakowitz, says that missions like Beyond Einstein's x-ray observatory Constellation-X and laser interferometer LISA, as well as the unrelated earth-science satellite Global Precipitation Measurement mission, are going to be “deferred” by a year or more. “We will maintain funding at [previously projected] levels,” he says, but NASA will delay some of the planned spending on those projects. Nicholas White of NASA's Goddard Space Flight Center says that he is concerned, but the community can “learn to live with” a delay if the overall funding is maintained.

    Even as some astronomers and earth scientists wonder about their futures, some planetary scientists have cause to celebrate. Despite less money for exploring the outer solar system, NASA hopes to allocate $70 million as a down payment for lunar exploration. Mars missions are getting a 16% increase over last year's budget, with the intent to double spending on the Mars program by 2009. Those projects, says Isakowitz, might result in a mission that would return samples from Mars as early as 2013.


    Marine Geologist Hopes to Hear the Heartbeat of the Planet

    1. David Malakoff

    University of Washington marine geologist John Delaney extols the science—and the poetry—of building a network of observatories on the ocean floor

    Many scientists turn to poets for inspiration. But marine geologist John Delaney actually took one along for a voyage to the bottom of the sea.

    The 1991 submarine dive that sent Maryland poet laureate Michael Collier 2200 meters down to boiling volcanic vents off the Pacific coast typifies Delaney's expansive vision, say friends and colleagues. “John's a dreamer, an instigator. … He rejects limits,” says Margaret Tivey, a geochemist at the Woods Hole Oceanographic Institution in Massachusetts. And the University of Washington, Seattle, researcher knows how to make that vision appeal to others. “The first time I heard John give one of his talks, I felt like I was at a rock concert—I wanted to pull out a lighter and salute him,” says Oscar Schofield of Rutgers University in New Brunswick, New Jersey.

    Delaney calls himself “rather impractical.” Still, he's shown a pragmatic bent, from helping establish a long-running program to study underwater volcanism to leading a herculean expedition that hauled massive “black smoker” chimneys off the sea floor. Now the tall, 62-year-old researcher stands on the verge of realizing one of his wildest dreams: a $200 million plan to wire an entire tectonic plate off the Pacific Northwest with a spider web of sensors, pumping gigabytes of real-time data directly to scientists ashore. Dubbed NEPTUNE—for North-East Pacific Time-series Undersea Networked Experiments—the project is jockeying to become part of a broader National Science Foundation (NSF) plan to build a trio of ocean observatories that would enable scientists to keep a constant watch on the sea. “We're going to listen to the heartbeat of the planet,” Delaney says in his sonorous baritone, displaying the poetic turn of phrase that has become a hallmark of his public persona.

    But funding for the observatories isn't yet certain, and not all marine scientists are on board. Some fear that the program will siphon funds from other projects; others question the approach itself. Physical oceanographers, for instance, “would probably not go down this road first to solve their problems,” says Carl Wunsch of the Massachusetts Institute of Technology in Cambridge.

    Buying into geology

    Fittingly for a man captivated by volcanoes, Delaney made his debut in the afterglow of another kind of explosion. The son of a Navy engineer and his wife, he was born beside the U.S. Navy base in Pearl Harbor, Hawaii, on 8 December 1941, the morning after Japanese bombers had reduced much of the U.S. fleet to smoking hulks. Growing up in Charlotte, North Carolina, he developed basketball skills that won him a scholarship to Lehigh University in Bethlehem, Pennsylvania. Graduating with a geology degree, he turned down an offer to assist a hometown college basketball coach named Al McGuire, who went on to win a national championship, and headed to graduate school instead. “Al said I couldn't dribble, but I could think,” Delaney says.

    He ultimately enrolled in a doctoral program at the University of Arizona in Tucson, working as a prospector for mining firms on the side. But he didn't get “serious about school,” he says, until he was nearly trapped in an abandoned mine while sampling. A trip to the Galápagos, which included camping inside a recently active volcano, hooked him on studying volcanism. The journey almost didn't happen: Delaney's adviser “couldn't afford to take me,” he recalls. “So I put $2500 on his desk and said, ‘I'm going.’”

    Using samples donated by another researcher, Delaney ultimately wrote a thesis that examined how the volatile gases in sea-floor basalt—a volcanic rock—behave when bottled up by the sea's crushing pressure. “It was magic,” he says. “I was given this garbage bag full of basalt that came from the sea floor!”

    The work won him a temporary post at Washington, where he was assigned to teach oceanography—a course he'd never taken. Although the head of his hiring panel soon suggested that Delaney start looking for another job, it wasn't long before students began to praise their 36-year-old lecturer. Within a few years Delaney had won a top teaching prize and secured a permanent position.

    But Delaney still hadn't found his niche as a scientist. That occurred during a 1980 dive in the submersible Alvin. “It changed my life. I realized I wasn't a laboratory researcher.” His work increasingly revolved around understanding the dynamics of the nearby Juan de Fuca Plate, a relatively small and accessible chunk of the Pacific crust rife with earthquakes, volcanoes, and thriving chemosynthetic communities of tubeworms and bacterial snow.

    Hearing Neptune's call.

    John Delaney has become a forceful voice for ocean observatories, including his own proposed NEPTUNE network off the Pacific coast.


    Delaney was also honing his administrative skills. He helped organize the NSF-funded RIDGE program, a multidisciplinary assault on the midocean ridges where crustal plates creep apart. Within RIDGE, Delaney and others sparked controversy by proposing to divert already-planned cruises to undersea eruptions along the Juan de Fuca immediately after they had been pinpointed by newly available sensors. The rerouting paid off, however, giving researchers an unprecedented firsthand look at the almost apocalyptic events that shape the sea floor.

    Still, many researchers were frustrated by the limitations of traditional ship-based studies. In the early 1990s, Delaney, Alan Chave, a Woods Hole geophysicist, and others began to explore what it would take to install instruments that could keep a constant watch on the plate—and stream data back to land through a cable that could also provide the instruments with a steady source of desperately needed power.

    A passion for networks

    It wasn't a new idea. Marine scientists had been experimenting with cabled instruments for decades (see sidebar), and Japan had already instrumented several offshore sites. But Delaney's allies envisioned more: a sensor net that could dispatch robotic observers to fast-moving episodes—from eruptions to plankton blooms—that researchers often miss, and a communications grid that would offer anyone with a computer an instant window onto the sea. Thus was born NEPTUNE, which aims to link dozens of nodes bristling with physical, chemical, and biological sensors with more than 3000 kilometers of fiber-optic cable.

    The idea initially made little headway, but Delaney was “incredibly persistent,” says Kendra Daly, a biological oceanographer at the University of South Florida in St. Petersburg. “He kept going, cajoling, long after most people would have given up and gone away.” Adds Wunsch: “John may not be the world's greatest marine geologist, but he's got this spark and passion that we as a community sometimes lack.”

    The commitment has paid off. Four years ago, NSF formally endorsed the “regional observatory” concept, bundling it into a $245 million initiative that also includes coastal sensors and open-ocean buoys. About half the funds would go to the regional system, with NEPTUNE a leading candidate. The next step is up to Congress, which next year will be asked to open the spending spigot.

    Delaney's allies, however, didn't wait for Congress. Last October, the Canadian government gave the University of Victoria nearly $50 million for a northern leg of NEPTUNE, starting with a project off Vancouver Island dubbed VENUS. And Delaney's team has raised about $25 million for related work in the United States, including a second pilot project—MARS—set for California's Monterey Bay.

    MARS and VENUS will tackle what Delaney admits are a host of daunting technical issues, from building workable sensors to designing waterproof sockets—often an Achilles' heel for cabled instruments. Even if the pilots pan out, however, NEPTUNE still must overcome a flat NSF budget and concerns that it could overtax a thin research fleet. Another fear is that the observatories will become oceanography's version of the space station: a huge infrastructure that supports relatively little science.

    Delaney welcomes the debate, saying that NEPTUNE and its sister observatories “will only benefit from more discussion, more ideas.” But he fiercely challenges the notion that the projects will monopolize resources. “I want to find exciting, important ways to argue for decades of new funds, not get by on what we've got,” he says.

    Delaney drives home that message in the dozens of talks he gives each year before everyone from congressional aides to schoolteachers. The word is also getting out through the media: His work has been featured in books and documentaries, including a PBS NOVA show on a dramatic 1998 mission he led to recover several “black smokers”—chimneys that belch superhot water—from the Juan de Fuca Ridge. The mission recovered bacteria that thrive in the chimney's record-high temperatures, and several of the formations are now on display at the American Museum of Natural History in New York City. But Delaney doesn't want to become a similar kind of display: fascinating to outsiders but no longer useful to fellow scientists. “I'd like to reach a broader audience,” he says, but not at the cost of his credibility.

    Delaney's desire to communicate may also explain the poetry that leavens his technical talks and the shipboard “poetry nights” that have become a tradition on his cruises. His selections, often delivered from memory, range from the earthy rhythms of Robert Frost and Robert Service to the ethereal images of the Japanese haiku master Basho. And he is fond of T. S. Eliot's observation that “we shall not cease from exploration.”

    Indeed, Delaney says that if he were starting his career today, he'd probably want to work in planetary exploration. He's participated in NASA workshops on a probe to Europa, the jovian moon that some believe holds an ocean under its frozen surface. Submerged fires on Europa, he believes, could be fueling life beneath the ice—just as they did on Earth. “When it comes to life, it takes an ocean,” he jokes, borrowing from a slogan popularized by Hillary Clinton.

    Collier, the University of Maryland, College Park, poet and longtime friend who went down in Alvin more than a decade ago, believes poetry “is another way for John to articulate his wonder and his enthusiasm for science.” Delaney is “incredibly inclusive,” he says. “He wants to share.”


    A Cautionary Tale From Bermuda

    1. David Malakoff

    Fifty years ago, legendary oceanographer Henry Stommel of the Woods Hole Oceanographic Institution in Massachusetts set out to establish his own ocean observatory. Its fate offers both hope and caution to advocates of today's crop of underwater facilities, such as NEPTUNE (see main text).

    Like today's architects, Stommel designed a multipart observatory to collect a steady, long-term stream of data on ocean conditions, recalls physical oceanographer Carl Wunsch of the Massachusetts Institute of Technology in Cambridge, a former student of Stommel's. One element is now known as “Hydrostation S”: a spot 20 kilometers southeast of Bermuda, at a depth of 3000 meters, where scientists regularly measured temperature, salinity, and dissolved oxygen from the sea's surface to its floor. Another was a set of drifting buoys, fitted with radio transmitters, for tracking water movements. There was also a power cable connected to several instruments located thousands of meters off Bermuda. Then, as now, the cable was seen as a promising way to provide power and accurate timing and to move data ashore.

    Historical record.

    Sea-surface temperature and other basic ocean characteristics have been tracked for a half-century off Bermuda.


    But Stommel's dream turned into a nightmare, Wunsch says. The weather didn't cooperate, electrical connectors sprang leaks, instruments failed, and good help and steady funding proved hard to find. “Funders didn't want to commit to open-ended data collection,” he says. Within a few years, most of the station was abandoned—with one significant exception. Today, Hydrostation S is the source of one of the world's few long-term records of a changing ocean.

    Skeptics predict that the next generation of observatories will face similar crippling problems. But supporters take heart from the continuing stream of data from Hydrostation S. It is a model, they say, that new observatories can first replicate and then expand.


    Buried, Recovered, Lost Again? The Romanovs May Never Rest

    1. Richard Stone

    DNA studies in the 1990s appeared to prove that the remains of the last Russian tsar and his family had been found; a new analysis raises questions

    In the summer of 1991, the remains of nine people were unearthed from a shallow grave in central Russia. Forensic experts concluded that the skeletons likely were those of the last tsar, Nicholas II, the tsarina, and three of their five children, whose bodies disappeared after they were shot by the Bolsheviks in July 1918. DNA studies in the mid-1990s supported that claim, and in 1998, a special government panel affirmed the bones to be those of the Romanovs, Russia's ill-starred royal family, along with their doctor and three servants.

    A new study, however, challenges this verdict. In the current issue of the Annals of Human Biology,* a team led by molecular systematist Alec Knight of Stanford University resurrects questions about the discovery of the remains and mounts a blistering attack on the original DNA analysis, contending that the results were tainted. Knight's group also performed the first analysis of the remains of the Grand Duchess Elisabeth, the tsarina's sister. All told, says Knight, “the evidence does not support the claim that the remains are those of the Romanov family.”

    “That's nonsense,” fires back Pavel Ivanov, a molecular biologist at the Engelhardt Institute of Molecular Biology in Moscow who carried out the original DNA studies with Peter Gill of the U.K. government's Forensic Science Service and several colleagues. Ivanov and Gill contend that the new claims are flawed.

    The new findings “do not add to the slim doubts about the remains. But they do not take away from the doubts either,” says Evgeny Rogaev, a molecular geneticist at the Center of Mental Health in Moscow and the University of Massachusetts Medical School in Worcester, who was invited by Romanov descendants and the Russian government to conduct an independent examination.

    The debate could create a stir in Russia. The Russian Expert Commission Abroad, an expatriate group that doubts the authenticity of the remains, is calling for a new examination, claiming support from the Russian Orthodox Church. The plea, however, may fall on deaf ears: “The case is closed,” says a Putin Administration official.

    The lingering uncertainty stems in part from the intrigue surrounding the demise of the Romanovs, said to have been killed and disposed of near Ekaterinburg. The discovery of a communal grave in the vicinity was reported in 1989. Two years later Russia's chief forensic medical examiner unearthed nine badly damaged skeletons and, after a series of forensic tests, came up with a tentative ID.

    Vanished royalty.

    Tsar Nicholas II and family in one of their last official photographs before they disappeared.


    The Gill and Ivanov team amplified short tandem repeats from DNA, which confirmed that the purported tsar, tsarina, and three girls belonged to the same family. If correct, the bodies of one daughter and the tsarevich, Alexei, were missing. More curious was an analysis of mitochondrial DNA (mtDNA), which is passed from mother to child. The team compared mtDNA fragments from the nine skeletons to blood samples from the Duke of Edinburgh, Prince Philip—a grandnephew of the tsarina—and two living descendants of the tsar's maternal grandmother. The tsarina and the children matched Prince Philip, but the tsar's mtDNA was heteroplasmic—having cytosine and thymine at one site. The relatives had only thymine. Despite that apparent mismatch, the group reported in the February 1994 issue of Nature Genetics that the odds were at least 700 to 1 in favor of the remains being those of the Romanovs.

    Two years later, Ivanov, working with a U.S. team, provided what appeared to be the clincher: The mtDNA of the remains of the Grand Duke Georgij Romanov, the 28-year-old brother of the tsar who died of tuberculosis in 1899, showed the same heteroplasmy. Rogaev in 1997 and 1998 sequenced DNA from the femur presumed to be from Nicholas II and from the blood of a nephew of Nicholas II. In findings submitted to the Russian commission, he concluded that the DNA matched.

    Knight and colleagues contend, based on recent historical research, that the discovery and removal of the remains is “characterized by extreme irregularities at every level,” and that “crucial evidence has been proven fraudulent.” The most damning shortcoming, they charge, is that samples of old DNA must have been contaminated with “fresh” DNA that skewed the analysis. They argue that a sequence of 1123 base pairs, for example, was too long to have come from old bones.

    The Knight team contributes some new data: a DNA analysis of the Grand Duchess Elisabeth's shriveled finger. The sequence did not match the reported sequence of the tsarina. To Knight's group, this means that “it is probable that the Ekaterinburg remains were misidentified.” But the sequence didn't match Prince Philip's mtDNA either, so it might not have come from Grand Duchess Elisabeth, says Ivanov. Knight's team acknowledges that it may have been from a contaminant.

    The Knight paper is “intriguing,” says Anne Stone, an expert on ancient DNA at Arizona State University in Tempe, who sequenced the purported remains of the Wild West outlaw Jesse James. Notwithstanding the Elisabeth-Prince Philip puzzle, she says, “the ancient DNA work looks like it was performed according to today's standards and looks good.”

    Gill is not impressed, insisting that his team's forensic DNA work “set the standard.” The new paper so completely mischaracterizes his work, he says, that it “comes across as vindictive and political.”

    Rogaev says that further DNA studies would be worthwhile. Knight agrees; one possible way to settle the identity of the tsarina, he says, is to invite other living descendants of Queen Victoria, her maternal grandmother, to donate blood for mtDNA sequencing. Ivanov, meanwhile, contends that the scientific argument is too weak to warrant debate. And what about the Romanovs? They must be turning in their graves.

    • * A. Knight et al., Molecular, forensic and haplotypic inconsistencies regarding the identity of the Ekaterinburg remains. Annals of Human Biology (2004).


    The Man to Finish the Job

    1. Daniel Clery

    Robert Aymar's task is to complete the construction of the world's most powerful particle accelerator at a CERN that is cowed and short of cash

    GENEVA, SWITZERLAND—When Robert Aymar took the reins of CERN at the beginning of the year, Europe's premier particle physics laboratory was in flux. Construction of the Large Hadron Collider (LHC), the world's most powerful particle accelerator, was forging ahead (see sidebar on p. 756). But behind the scenes, the lab was still struggling to make changes prompted by a 3-year-old financial crisis.

    The crisis began brewing in 1996, when, after much political wrangling, CERN's 19 member states agreed to build the LHC but skimped on its budget (Science, 3 January 1997, p. 19). CERN's council ordered cutbacks to improve efficiency. In September 2001, however, CERN director-general Luciano Maiani revealed that the LHC was overspending on its $1.6 billion budget by $300 million for hardware alone, mostly due to unforeseen problems with the superconducting magnets and underground construction. The council set up an External Review Committee (ERC), headed by Aymar, to assess CERN's problems. Following changes made by CERN management, some recommended by the ERC, the LHC will now be completed a year later than scheduled, money will be shifted to the LHC from other CERN programs, and numerous non-LHC experiments will be closed for 1 year (Science, 29 March 2002, p. 2341; and 28 June 2002, p. 2317).

    Aymar was tipped to head the lab in December 2002, and he comes with impressive credentials. He led the construction of France's Tore Supra fusion tokamak from 1977 to 1988, when its first plasma was achieved. In 1990 he became head of physical sciences research at France's Atomic Energy Commission, and in 1994 he joined the International Thermonuclear Experimental Reactor (ITER) project as director, becoming leader of its international team in 2001.

    When Science interviewed him in mid-January, Aymar, 67, had moved into his new office only that morning, and the smell of fresh paint still hung in the air.

    Q: Thinking back a few years, if I had said to you then that in 2004 you would be building the LHC rather than the ITER, what would have been your reaction?

    A: I would not have been so surprised, because it was agreed that I was too old to begin construction of a big project that would last for 10 years. And I was involved with the LHC right from the beginning because I was chairman of the committee [that assessed the original LHC proposal]. I did not fight for it at all, but I did not refuse when I was asked.

    Q: As a non-particle physicist, what skills do you bring to the job?

    A: My experience of managing different disciplines was proved 10 years ago. I have experience of being involved with large projects, with real technical goals and large budgets. At the same time I have run large laboratories in different disciplines.

    People asked me to accept this job, and I said I'm not a particle physicist. You would assume, to drive a laboratory such as CERN, you would need to be a specialist. But it was suggested, and I agreed, that if my deputy is a particle physicist who is well known in the field—it is Jos Engelen [former head of the National Institute for Nuclear and High Energy Physics in the Netherlands]—and we work together hand in hand, it should not be too difficult.

    Most of the member states, especially the large ones, felt that CERN had not changed its procedures and management for quite a long time. At a time when laboratories everywhere were being forced to build large facilities with less money, CERN was not subjected to this pressure, and the time had come to do something.

    Q: In hindsight, do you think it was wise to embark in 1996 on building the LHC without contingency in the budget, as this turned out to be a major factor in the financial problems in 2001?

    A: At that time, the member states wanted to see CERN applying more rigor, more awareness of cost. So aside from the LHC budget, they ordered CERN to reduce its staff by up to 20%, and they cut its budget by 9% year on year. Unfortunately, this was done without any rational analysis. It was an austerity measure applied without correlation to the scientific product.

    Without this analysis, the management continued to make more commitments beside the LHC, on other programs, without considering the implications for budget and personnel. Previously there had always been plenty of resources at CERN, and nobody realized, certainly in-house, that when a difficulty arose the member states would not provide more money. So at some point, a crisis was bound to happen.

    Tightening up.

    CERN Director Robert Aymar says better management should bring the troubled Large Hadron Collider on line in 2007.


    Q: And that happened in 2001?

    A: Exactly, and after that they had to make savings everywhere, squeeze every part of the program, and they asked the ERC to look at the full program, not just the LHC.

    The ERC made a long list of recommendations, mostly on how to make the council and the management more aware of the relationship between objectives and resources. This used a lot of simple management tools that are known everywhere but were not applied here at CERN. The tools were made available, savings were made, budgets were squeezed, everyone acted properly, and I think the situation has improved. As usual, unexpected things may happen, but I believe we will have the first shot at physics in the summer of 2007.

    Q: In your 2002 ERC report, CERN was charged with “serious weaknesses” in cost awareness and control, contract management, and financial reporting. What are you doing to remove these weaknesses?

    A: I think the most effective thing we have proposed, which has already been put in place, was to take everything that has to be made for the LHC—hardware and software—identify it, define it, and put a cost on it. These items are then put under the responsibility of a group of people which will have a quasi-contract with the management to do the job with an agreed budget and deadline. These “work packages” are the most appropriate way to improve people's awareness of cost and schedule. As soon as you have people who feel responsible and agree to do the job in the agreed time scale, you are almost there.

    This is a complete change of philosophy for CERN. Previously, people were always interested in the excellence of the technical solution, improving it all the time, but the cost and schedule was something else. This reverses the priorities: You have to complete the job, and sometimes the compromise is not to make it perfect but just make it work.

    Q: The ERC suggested scaling back non-LHC work to cut costs and divert personnel to the LHC. To what extent has that been done?

    A: The cost of personnel was never included in the costs of the LHC. This was the way of doing things: You cost the hardware and you cost what is called industrial services, when you have to hire people to make something. Staff from CERN were supposed to be free. But because of the non-LHC commitments that the management continued to make, these people were not available to the LHC.

    Another assumption was that non-LHC programs cost almost nothing. Running the older accelerators was supposed to be free. But this isn't true. There is a very large cost. The ERC looked in detail at the commitments of all these experiments, and most were coming to a stop at the end of 2004, with plans to restart soon in 2005. We proposed instead to stop them for 2 years, although this was later reduced to 1 year. This will provide an opportunity for reflection, to see if the experiments need any improvements, such as a better detector. When the time comes, people will cry; this is normal, but I think this is not wasted time.

    The staff, the technicians freed up in this way, can then move to LHC work. There is a lot of work involved in testing all the magnets. We have a very large installation, and to run it day and night all year long needs a lot of people.

    Q: The ERC also suggested that, during these lean times, CERN should collaborate more closely with other particle physics laboratories.

    A: This is something that I feel very strongly about. The convention which created CERN states that the council has two functions. One is to supervise the laboratory, and the second is to steer particle physics in Europe. This gives CERN a special role: not as one laboratory among all the others, but as the laboratory with connections to all the others in Europe.

    We are now working with other labs on, for example, an injector for the Next Linear Collider and proton injectors which can be used on the LHC or a superconducting proton linac. There are plenty of small items like that. We have larger ones, mostly involving CLIC [CERN's concept for the next linear collider]. I am now pushing for CLIC R&D to have more organized collaborations and become a collective project for the whole of Europe so that we have a proof of concept ready when the time comes to decide on the linear collider.

    Q: CERN is also playing a leading role in GRID computing, which essentially uses resources across the Internet as one giant computer. What role will this play?

    A: It is absolutely compulsory. The number of events we have to look at with the LHC is such that without the capacity to store and analyze this huge amount of results at a number of sites, it would be impossible to make it work. This is a central activity, and it will have repercussions everywhere. If we can work reliably and safely with a distributed system, we will meet our goals.

    Q: Last fall, CERN celebrated the anniversaries of two major discoveries, neutral currents in 1973 and the W and Z particles in 1983. What will CERN's next big discovery be?

    A: LHC is the facility to provide new discoveries. No other lab will do that to the same extent. It is surprising that the Standard Model of high-energy physics is good, but the questions that are not answered by the Standard Model are completely open. There are plenty of theories, all different, and you cannot tell which is likely to be true. The LHC will provide perhaps not all answers, but a lot of answers. That will be a reward for people who have been working 20 years on this. And I am very happy that that will come—after me, but it will come.


    A Leviathan Takes Shape Beneath Geneva's Gentle Environs

    1. Matin Durrani*
    1. Matin Durrani is deputy editor of Physics World.

    GENEVA, SWITZERLAND—Looking at this tranquil agricultural plain between Lake Geneva and the Jura mountains, it is hard to imagine that 100 meters below the surface a machine of epic proportions is taking shape. Although the $2 billion Large Hadron Collider (LHC) now under construction here at CERN, the European particle physics laboratory, will be housed in the same 27-kilometer-long circular tunnel that held its predecessor, the Large Electron-Positron Collider, the technology required to achieve a 70-fold increase in beam energy is staggering.

    CERN has ordered about 6000 superconducting magnets, some of which are 14 meters long and contain 6000 kilometers of carefully wound niobium-titanium wire, and all of them must be tested and installed. These require refrigeration plants, pipework, and enough liquid helium to cool 50,000 tons of equipment to 1.9 degrees above absolute zero. And construction workers have hewn from the rock huge caverns, up to six stories high, which will soon be filled with vast detectors ready to capture any products of particle collisions. Coordinating such a project is not easy, and the LHC has had some hiccups (see main text). But the man in charge of it all, LHC project leader Lyn Evans, shows remarkable sang-froid. “There have been a few problems, but nothing that is disrupting the schedule,” he says. “Costs are under control, and things look very solid.”

    When complete, the LHC will slam together two beams of protons traveling in opposite directions at near the speed of light. Bent around the tunnel by powerful dipole magnets, the protons will have an energy of 7 tera-electron volts (TeV)—seven times that of the current most powerful accelerator, the Tevatron at Fermi National Accelerator Laboratory in Batavia, Illinois.

    The counterrotating beams of protons (and later heavy ions as well) will cross at four points around the ring, where the giant detectors will monitor the particles produced during the collisions. Two of the detectors—the Compact Muon Solenoid (CMS) and Atlas—will be multipurpose devices that will carry out various experiments. These include searching for the elusive Higgs boson, which is believed to provide other particles with mass, and seeking “supersymmetric” particles—shadowy counterparts of the known fundamental particles—which, if found, could help unify all the fundamental forces, including gravity. A third experiment, LHC-b, will try to find out why there is so much more matter than antimatter in the universe by probing the decays of particles called B mesons, and the fourth, Alice, will study collisions between lead ions to recreate the energy densities that existed in the first 10⁻¹² second of the universe.

    The long view.

    A string of superconducting dipole magnets is put to the test.


    But before researchers get near those discoveries, they must overcome the huge technical hurdles facing the LHC, many of which arise from the decision to save money by reusing the existing tunnel. First, the tunnel is too narrow for two sets of dipole-bending magnets to steer the clockwise and counterclockwise proton beams. Instead, the LHC will use a complex two-in-one design in which a single ring of magnets will create adjacent magnetic fields pointing in opposite directions across tiny channels that run the entire length of the ring. The two beams of protons will travel in opposite directions down these channels, less than 20 centimeters apart. “We're making not one machine but two,” says Evans. “It's like building a single ring that is 54 kilometers in circumference.”

    The other main problem with reusing the old tunnel is that, to a 7-TeV proton beam, a 27-kilometer loop is a pretty tight curve. To keep the beam on track, the dipole magnets must generate incredibly high fields—8.3 teslas each, roughly 100,000 times as strong as Earth's magnetic field. Such high fields can be produced only by magnets made from superconducting wire, which can carry much higher currents than conventional conductors such as copper because of its total lack of resistance. But to make it still more complicated, such superconductors lose their resistance only at extremely low temperatures. The LHC needs a total of 1232 purpose-built dipoles, each 14 meters long and carrying up to 15,000 amperes when chilled with liquid helium to 1.9 kelvin.
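
    A back-of-the-envelope check (our illustration, not a figure from the project) shows why fields of this strength are unavoidable. For a highly relativistic proton, the bending field B and bending radius \rho are related by

        B\,\rho \;\approx\; \frac{p\,[\mathrm{GeV}/c]}{0.3}\ \mathrm{T\,m},

    so holding a 7000 GeV/c beam with 8.3-tesla dipoles requires \rho \approx 7000/(0.3 \times 8.3) \approx 2800 meters, a bending radius that, with room left over for focusing magnets and straight sections, just fits inside the 27-kilometer tunnel.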

    To speed production, CERN has three firms in Germany, France, and Italy working flat out to produce the magnets. The first batch of 174 dipoles arrived at CERN late in 2003. Once on site, every magnet must be thoroughly tested for electrical and mechanical flaws that could cause the proton beams in the completed machine to swerve off course and vanish into the wall of the pipe.

    About 100 dipole magnets have so far been tested; some 10 more arrive at CERN every week. To keep the testing program on track to be completed by the summer of 2006, researchers are now carrying out tests around the clock, including on weekends, with the help of scientists from India's Department of Atomic Energy who have been drafted to speed things up. In addition to the dipoles, more than 4500 other superconducting magnets to focus, steer, and tweak the beam will also have to be put through their paces.

    A sledgehammer to crack a nut.

    The 12,000-ton CMS detector, taking shape on the surface, will study the debris from colliding protons.


    Other parts of the project may be less high-tech but are just as expansive in scale. The superconducting magnets require eight refrigeration plants to keep them fed with liquid helium. The installation of pipes and cladding in the first octant of the tunnel is currently running 13 weeks behind schedule, but the contractor doing the work is putting in extra shifts to catch up.

    Despite the use of the old tunnel, much other digging has been required, including caverns for two of the new detectors and two 2.5-kilometer-long tunnels that link the LHC to an older accelerator, the Super Proton Synchrotron. The SPS will accelerate protons to an energy of 0.45 TeV and shoot them down the new tunnels to be injected into the LHC pipe. Once there, 16 copper “cavities” spaced around the ring will accelerate the protons to 7 TeV. The cavities produce intense electromagnetic fields oscillating at radio frequencies, and the protons “surf” along these, getting a kick every time they pass through. CERN expects to thread its first proton beam along the new tunnels to the point of injection by late 2005. The next champagne moment will come in the spring of 2006, when a test beam will be sent around one-eighth of the LHC.

    Meanwhile, the detectors are also taking shape. The Atlas collaboration, made up of 2000 physicists from 34 countries, is assembling its 22-meter-wide barrel-shaped device, weighing in at 7000 tons, in one of the newly dug caverns. Like most modern particle detectors, it contains multiple layers of electronic components that measure the path, energy, and momentum of the hundreds of particles created when the protons collide. “About two-thirds of the detector has been built, and we are just starting on the installation,” says Atlas spokesperson Peter Jenni.

    The CMS collaboration—2000 scientists and engineers from 36 nations—is adopting a different approach. Rather than constructing their detector underground, the CMS team is building most of the 12,000-ton detector in a hangar aboveground next to the cavern. In August 2005, the partially complete detector—preassembled into 15 modules—will be lowered gingerly into place, module by module, using giant cranes. “Most of the heavy construction work on the detector is now complete, and the most visible activity this year will involve construction of the superconducting solenoid magnet, which at 13 meters long is the biggest in the world,” says Austin Ball, CMS deputy technical coordinator.

    Although there is still much to do to bring this huge project to completion, Ball, among others, is looking to the future. “I do not envisage any critical delays. There will be times of more pressure, but there is no point having sleepless nights. If you did that, you would be completely wrecked by 2007 when the LHC is meant to be ready.”

    Life's Patterns: No Need to Spell It Out?

    1. Adrian Cho

    Rather than being specified by genes alone, elaborate biological structures may arise through simple dynamical mechanisms

    “The force that through the green fuse drives the flower drives my green age,” wrote poet Dylan Thomas. If he meant that the same principles determine the form and structure of flowers, people, and other organisms, then scientists of all stripes would surely assent. Yet researchers don't necessarily agree on what those principles might be.

    Developmental biologists have focused on the genes that spur growth and determine the fates of individual cells. They've succeeded handsomely in explaining, for example, the embryonic development of the fruit fly Drosophila, in which simple chemical gradients trigger certain genes, which then trigger other genes, and so on, like toppling dominoes. Yet genes alone can't specify the shape and arrangement of all an organism's parts, says Jacques Dumais, a biologist at Harvard University in Cambridge, Massachusetts. “There is not enough information in the DNA to code for that,” Dumais says. “As soon as you talk about something more complicated than a virus, you can't do it.”

    Instead, the patterns of life must arise without detailed blueprints, just as an exquisitely symmetrical snowflake emerges from the random collisions of water molecules in moist air. Or so a small school comprising mainly physicists, chemists, and mathematicians has argued for decades. Through some self-reinforcing mechanism, digits on limbs, vertebrae along spines, and buds on branches must thrust themselves into existence. Genes establish the physical conditions that make the process possible; the snowballing mechanism generates the pattern without specific instructions. The flower, the researchers contend, drives itself.

    It's an approach that assumes that each biological form is the solution to a difficult calculus problem. Whereas developmental biologists talk of genes as switches that can be either on or off and can trip one another, pattern-formation researchers talk of the spatial distributions of interacting chemicals and mechanical forces. Their work belongs to the mathematical field of dynamics and relies heavily on differential equations reminiscent of those used to describe roiling liquids or even the orbitals of electrons in atoms and molecules.

    Most researchers of a dynamical bent argue that the patterns arise when chemicals diffusing through a developing organism interact to trigger growth in some places and inhibit it in others. Pioneered by British mathematician Alan Turing in 1952, such reaction-diffusion theories have been applied to problems such as the stripes of zebras and the swirling of slime molds. Other researchers argue that biological patterns, especially those in plants, emerge from mechanical stresses inside an organism that buckle tissue and spur gene expression or inhibition. “I'm of the opinion that the reaction-diffusion mechanism has more power for pattern formation,” says Lionel Harrison, a physical chemist at the University of British Columbia in Vancouver. “But eventually the mechanical and the chemical have to come together.”

    Coming together with the wider community of developmental biologists may be harder. “Many in the experimental community are fixated on cloning genes,” says Hans Meinhardt, a physicist at the Max Planck Institute for Developmental Biology in Tübingen, Germany. “If they see a differential equation, they have a problem.” But that attitude is starting to change, researchers say, as the dynamical and genetic approaches begin to converge.

    Same vein.

    Patterns in leaves resemble the cracks in drying gel, suggesting that mechanical stresses might trigger the formation of the vein networks.


    Creative inhibition

    To make a pattern from scratch, a developing organism needs to strike a balance between two processes: one that stimulates growth, and one that inhibits it. Each emerging feature must discourage the emergence of another feature in its immediate neighborhood. For example, in a developing zebra embryo, one stripe must repel the next to develop a well-defined pattern instead of one big blotch. At the same time, each new feature must amplify its own development, says Meinhardt, who with colleague Alfred Gierer fleshed out Turing's idea in the 1970s. “You need local enhancement and long-range inhibition,” he says.

    To achieve such interplay, reaction-diffusion theories posit at least two chemical partners. One activates growth and speeds its own production as well as the production of the second substance. The second inhibits growth and slows production of the first. Both chemicals must diffuse, just as a spot of blood diffuses into water. Crucially, to form a stable pattern, the inhibitor must diffuse more quickly than the activator. Even the slightest ripple in the distribution of activator and inhibitor will trigger the formation of a pattern of high and low concentrations.
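
    The interplay is easy to demonstrate numerically. The sketch below uses a generic activator-inhibitor system in the spirit of Gierer and Meinhardt's equations, with parameter values chosen purely for illustration; starting from a nearly uniform ring of cells, a tiny random ripple grows into regularly spaced activator peaks, and the pattern fails to form if the inhibitor is not the faster diffuser.

```python
# Minimal 1-D activator-inhibitor sketch (Gierer-Meinhardt-type equations).
# Parameter values are illustrative assumptions, not taken from any study.
import numpy as np

n, dx, dt, steps = 200, 1.0, 0.01, 100_000
D_a, D_h = 1.0, 20.0             # the inhibitor must diffuse faster than the activator
rho, mu_a, mu_h = 1.0, 0.5, 1.0

rng = np.random.default_rng(0)
a = 2.0 + 0.01 * rng.standard_normal(n)   # activator, near its uniform steady state
h = 4.0 * np.ones(n)                      # inhibitor, at its uniform steady state

def laplacian(u):
    """Diffusion on a ring of cells (periodic boundaries)."""
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

for _ in range(steps):
    da = rho * a**2 / h - mu_a * a + D_a * laplacian(a)   # local self-enhancement
    dh = rho * a**2     - mu_h * h + D_h * laplacian(h)   # long-range inhibition
    a += dt * da
    h += dt * dh

# The tiny ripple has organized itself into a periodic pattern of peaks.
peaks = np.sum((a > np.roll(a, 1)) & (a > np.roll(a, -1)) & (a > a.mean()))
print(f"activator peaks that emerged from noise: {peaks}")
```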

    Reaction-diffusion theories have been used to model biological phenomena from the regeneration of the tentacles of the freshwater hydra to the arrangement of leaves and flowers on plants, from embryonic development to the spotting of seashells. Ironically, the theories' very versatility may count as a mark against them with many biologists, says Edward Cox, a molecular biologist at Princeton University in New Jersey who has used the approach to explain the spirals seen in slime molds. Some researchers believe that such models can be stretched to explain anything, Cox says, which would make them and the underlying theory untestable.

    Skeptics also demand solid proof that the hypothesized activators and inhibitors, which are known as Turing morphogens, actually exist. Recently, researchers have begun to identify prime suspects. For example, in the past 2 years experiments with chick, frog, and zebrafish embryos have shown that the proteins Nodal and Lefty possess all the properties of an activator and an inhibitor. In vertebrate embryos, the proteins control the generation of three distinct layers of cells—the endoderm, the ectoderm, and the mesoderm—that will generate different sets of organs. Researchers had known that Nodal spurs the growth of mesoderm and endoderm and that Lefty counters the effects of Nodal.

    The new results show that Lefty works as a true long-range inhibitor of Nodal, says Alexander Schier, a developmental geneticist at New York University's Skirball Institute of Biomolecular Medicine, who performed the zebrafish experiments. But Schier and others think the two-component system may be too simple to produce complex structures. “Does it contribute to the patterning process? Yes, it does,” says Lilianna Solnica-Krezel, a developmental geneticist at Vanderbilt University in Nashville, Tennessee. “Does it explain the whole thing? No, it doesn't.” Nonetheless, a tenuous bridge now spans the gulf between reaction-diffusion theory and genetics.

    Buckling buds

    The evidence for the mechanical origins of pattern formation centers on striking analogies between living organisms and mechanical systems. For example, the pressure within plant cells ranges from 7 to 10 times atmospheric pressure. The cellulose that holds that pressure in is as stiff as high-grade plastic. So even a humble daisy resembles a mechanical pressure vessel. Inspired by that analogy, Paul Green, a biologist at Stanford University in California who died in 1998, attempted to explain the development of structure in plants in terms of the buckling of the plant skin. The appeal of the approach is its simplicity, says Charles Steele, a mechanical engineer at Stanford. “The only thing that's going into it is the mechanical properties of the surface, which is already there.”

    Green and colleagues analyzed the elaborate spiral patterns that develop in, for example, pinecones and the faces of sunflowers. The patterns form at a growing tip, called the apical meristem, which resembles a pressurized dome. The elements of the pattern, or “primordia,” emerge near the edge of the meristem, where mechanical calculations show that the skin of the meristem is compressed, not stretched, by the underlying pressure. The researchers hypothesized that the compression would cause the meristem to buckle and spur the growth of the primordia.

    Using computer simulations, they showed that such buckling could create spiraling wrinkles that mimic the patterns in sunflowers. And by slicing the meristems of budding sunflowers, Steele and Dumais, then at Stanford, directly observed the tension in the center of the dome and the compression at its edges, as they reported in 2000. Data from a variety of plants also suggest that the spacing of primordia is proportional to the thickness of the meristem skin, as calculations in basic mechanics predict.

    Mechanical stress might also spur the formation of the weblike patterns of veins in leaves, physicists at the École Normale Supérieure in Paris report. Yves Couder, Steffen Bohn, and colleagues showed that various patterns of cracks in a drying gel could closely mimic those in leaf veins, depending on how the thickness of the gel varied and how it was dried. Although the mechanisms must be different—cracks in the gel open as the material stretches, whereas the tissue within developing leaves is squeezed—Couder says the results suggest that mechanical forces could be key to forming the intricately connected networks.

    Further evidence of mechanical connections turned up when the researchers analyzed the angles in the webs of leaf veins. Where the veins came together, they found, the networks behaved as if each vein were pulling on the intersection with a force proportional to its radius—just what would happen if the veins were elastic tubes. Couder next hopes to measure the forces within living leaves. “It would be nice to look at the dynamics of venation of a leaf during all its life,” he says.

    Proponents of the mechanical models must show that stresses affect gene expression and spur growth, and that connection remains largely hypothetical. However, last year molecular biologist Emmanuel Farge of the Curie Institute and the University of Paris 7 reported that a gentle squeeze can induce the expression of the gene Twist in Drosophila, at least in mutants that do not express Twist normally.

    Remember the genome

    Farge cautions that researchers advocating the dynamical approaches tend to gloss over key biological details. “Physicists are very dogmatic in saying that everything in these systems will be explained with physics,” Farge says. “I cannot believe that.” Drosophila, he says, is a prime example of an organism that is organized mainly by its genes. Within an embryo, the mother creates distributions of the proteins Bicoid and Dorsal that decrease from head to tail and belly to back, respectively. These distributions then serve as a three-dimensional map that tells each cell where it's situated and, through a cascade of gene expression, determines how it will develop. Effectively, the mother sows the seeds of the body pattern, so it doesn't have to emerge by itself.

    But reaction-diffusion may still play a role in Drosophila development, says David Holloway, a physical chemist at the British Columbia Institute of Technology in Burnaby, Canada. The spatial patterns of gene expression generally grow sharper and more regular as the embryo develops, he says. That suggests that the patterns are reinforcing themselves and the various transcription factors are behaving like Turing morphogens.

    Ultimately, gene expression is itself a dynamical process, says physicist Stephane Douady of the École Normale Supérieure, so biologists are sure to take an interest in dynamics eventually. “I think they'll be obliged to come to dynamics,” he says. That movement has already begun, Holloway says: “I've seen a real change in the last couple of years at developmental biology meetings. It seems to have crept in that you need these chemical interactions.” Precisely what insights may spring from the confluence of approaches no one can predict. The process of discovery drives itself through its own enigmatic dynamic.

    The New Math of Clinical Trials

    1. Jennifer Couzin

    Other fields have adopted statistical methods that integrate previous experience, but the stakes ratchet up when it comes to medical research

    HOUSTON, TEXAS—If statistics were a religion, Donald Berry would be among its most dogged proselytizers. Head of biostatistics at the M. D. Anderson Cancer Center here, he's dropped all hobbies except reading bridge columns in the newspaper. He sends out e-mail missives at 3:00 in the morning. The running joke in the department is that Berry, his curly gray hair perpetually tousled, never sleeps. Admittedly, sleep doesn't come easily to a man on a mission.

    Berry, 63, adheres to a branch of statistics named after an 18th century minister, Thomas Bayes, whose followers advocate incorporating prior knowledge into experiments and sometimes altering them as they run to take into account accumulating results. Although Bayesian designs are now widely used in everything from astrophysics to ecology (Science, 19 November 1999, p. 1460), they've been slower to catch on in medical research, particularly clinical trials. That's where Berry comes in.

    A Bayesian since the 1960s, Berry for years was unable to implement his unorthodox approach. Then, in 1999, he was offered a golden opportunity: Come to M. D. Anderson, one of the largest cancer centers in the United States, with a reputation for being the “Wild West” of oncology research, and transform how it designs and runs many of the 800 clinical trials being conducted at any given time.

    Berry's perch at Anderson has fueled his resolve to spread the Bayesian word. He crisscrosses the country speaking with cancer advocates, drug companies, and the Food and Drug Administration (FDA); the latter is beginning to consider Bayesian trials in new drug applications and is planning a May meeting on the subject.

    His critics, however, hope his ideas won't take hold. Berry's skepticism that mammograms help younger women left him accused of risking lives; his approach to clinical trials has prompted worries about bad drugs slipping through the system. Bayesian drug studies risk “saying [a treatment is] positive too often,” says biostatistician Stephanie Green of the Fred Hutchinson Cancer Research Center in Seattle, Washington. But critics and supporters alike have a grudging admiration for Berry's persistence. “He isn't swayed by the status quo, by people in power in his field,” says Fran Visco, head of the National Breast Cancer Coalition in Washington, D.C. “You have to respect him for that,” she adds, “whether you agree with him or not.”

    Bucking tradition.

    Donald Berry's support for Bayesian designs is changing the face of clinical trials, especially at his home base of M. D. Anderson Cancer Center.


    Maverick beginnings

    Berry stumbled into statistics after an erratic college career. He skipped classes regularly and took a 3-year break, in 1960, to volunteer for the army. By his senior year, he and his wife had four sons (two more children, both girls, would follow), and Berry had little idea what to do with his life. A professor suggested statistics; Berry took the advice and enrolled in graduate school at Yale University. After completing his dissertation in 1971, he moved to the University of Minnesota.

    From the start, Berry was drawn to the Bayesian school of thought, then widely viewed as an oddity within the field. The Bayesian approach calls for incorporating “priors”—knowledge gained from previous work—into a new experiment. “The Bayesian notion is one of synthesis … [and] learning as you go,” says Berry. He found these qualities immensely appealing, in part because they reflect real-life behavior, including the way doctors practice medicine.
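
    In symbols (a textbook statement of Bayes' rule, not Berry's own notation), the updated belief about a treatment effect \theta after a study is

        p(\theta \mid \mathrm{data}) \;\propto\; p(\mathrm{data} \mid \theta)\, p(\theta),

    where p(\theta) is the prior summarizing what earlier work suggested and p(\mathrm{data} \mid \theta) is the likelihood of the new results.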

    But learning as you go collides with the decades-old clinical trials paradigm. To guard against bias—from doctors, drug companies, and even patients—each phase of a traditional clinical trial is run from start to finish without interference from interested parties. Outside scientists monitor the data regularly; although a trial can be stopped early if patients appear unduly harmed or helped by the new treatment, the protocol itself can't normally be changed.

    A Bayesian approach demands more than sporadic monitoring-board meetings, however. Bayesian trials often unveil data while a study is ongoing. What's more, researchers can use these early results to reallocate patients to different treatment groups depending on how the first batch of patients, or even a single patient, fares. Berry also favors other approaches foreign to clinical trials, such as answering questions about multiple drugs in a single experiment, a method known as factorial design. Factorial designs include a treatment arm for every drug combination possible, leading to unwieldy experiments whose results can be tough to interpret.
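
    A toy version of that reallocation idea is easy to write down. The sketch below uses a generic Thompson-sampling-style scheme with flat Beta priors and made-up response rates; it is not a design used at M. D. Anderson, only an illustration of how each patient's outcome immediately tilts the odds of where the next patient goes.

```python
# Minimal sketch of Bayesian adaptive allocation between two arms, using
# Beta-Binomial updating. Response rates, priors, and the 200-patient horizon
# are illustrative assumptions, not any real trial's parameters.
import numpy as np

rng = np.random.default_rng(1)
true_rate = {"A": 0.25, "B": 0.45}        # unknown to the trial designers
post = {"A": [1, 1], "B": [1, 1]}         # Beta(responses + 1, failures + 1) priors
assigned = {"A": 0, "B": 0}

for _ in range(200):
    # Draw a plausible response rate for each arm from its current posterior...
    draw = {arm: rng.beta(*post[arm]) for arm in post}
    # ...and give the next patient the arm that looks better right now.
    arm = max(draw, key=draw.get)
    assigned[arm] += 1
    response = rng.random() < true_rate[arm]
    post[arm][0 if response else 1] += 1  # update that arm's posterior

for arm in post:
    s, f = post[arm]
    print(f"arm {arm}: {assigned[arm]} patients, posterior mean response {s / (s + f):.2f}")
```

    As the responses accumulate, most patients drift toward the better-performing arm, exactly the sort of in-flight adjustment that a traditional protocol forbids.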

    Some doctors agree with Berry that the standard approach to clinical trials is problematic. Elihu Estey, who oversees the treatment of acute leukemias at Anderson, points out that the typical paradigm assigns patients to different study arms with equal probability, even in the face of mounting evidence that one arm offers a better shot at survival. “The patients themselves, if they knew the way the trials are conducted, wouldn't be too thrilled,” he says.

    A big break for Berry came in 1990, when he was invited to join Cancer and Leukemia Group B (CALGB). It's one of the country's 10 cooperative groups: multi-institutional collaborations on large-scale cancer clinical trials. Berry would be the lead statistician for CALGB's breast cancer studies. He was not greeted warmly.

    “I objected rather strenuously,” recalls I. Craig Henderson, a breast oncologist at the University of California, San Francisco, who had heard that Bayesians were “loosey-goosey” in adhering to the rules.

    Henderson subsequently had a change of heart: Last year, he was the first in a string of authors on one of the largest breast cancer studies Berry has designed, with more than 3000 women. Its factorial design revealed that adding the drug paclitaxel (Taxol) to standard chemotherapy is beneficial, and that high doses of doxorubicin (Adriamycin), one of the most toxic chemotherapy agents, don't fight cancer any more effectively than lower doses. This came as a great surprise, and some criticized the study for its unusual methodology.

    Despite Berry's relentless efforts to convert Bayesian nonbelievers in CALGB, the group has yet to conduct a fully Bayesian study. But CALGB and other cooperative groups are adopting factorial designs to answer more questions, more quickly. “Maybe some of these designs don't give you the absolute perfect answer,” says Eric Winer, co-chair of CALGB's breast committee and an oncologist at Dana-Farber Cancer Institute in Boston. But, he adds, “it may be good enough if the alternative is waiting another 10 years.”

    A nose for controversy

    Over dinner at an Italian restaurant in Houston, Scott Berry, 37, munches on spinach pizza and considers why his father leaps into one controversy after another. There's one thing, he says, that he's certain of: “He doesn't do it for the notoriety.”

    Notoriety, however, is something Don Berry has amassed in impressive quantities over the years. In the late 1990s, he became a lightning rod in the debate over whether women under 50 benefit from mammograms. As co-chair of a National Institutes of Health panel on the subject, he reported that regular screenings of 2500 women under 50 would be needed to extend the life of one. “I focused very much on the risks” of mammography, Berry says, including false positives and finding tiny tumors that are unlikely to spread.

    In 2002, Berry testified in Congress on the subject; now-Senate Majority Leader Bill Frist (R-TN) minimized Berry's findings because he lacked an M.D. degree. Berry received death threats, including one from a man who believed his wife's life had been saved by a mammogram. He was also accused of rampant sexism. “People said, ‘It's because you're a man; if this were prostate cancer, it would be different,’” he recalls. What his critics didn't know, he says, is that he feels even more strongly about routine testing for prostate-specific antigen (PSA), a controversial marker of prostate cancer. Berry doesn't know his PSA level and has no interest in learning it; like mammography, he believes, its inappropriate use leads to more heartache and unnecessary invasive procedures than benefits.

    Fervent belief that he can shift medical opinion has sustained Berry through some dispiriting times. It's the same kind of determination that's kept him bicycling religiously each day from his home to his office, even during Minnesota winters when temperatures plunged below −30°C and hot breath turned his beard to ice and glued it to his ski mask.

    One of his lowest points came in the mid-1980s. Researchers at the University of Michigan were testing a new technology called extracorporeal membrane oxygenation (ECMO) on desperately ill infants. ECMO takes over the job of oxygenating the blood and gives the struggling heart and lungs time to grow or heal. The Michigan study reallocated infants to standard therapy or ECMO depending on how previous babies in that same study had fared. The result was a sharply skewed trial: 11 babies in the ECMO treatment group, all of whom survived, and one baby in the standard arm, who died.

    Doctors and statisticians roundly criticized the trial, arguing that one baby in a control group wasn't sufficient to come to any conclusion. Although Berry did not participate in the trial, he conducted his own analysis. That prompted him to defend the research, and he countered that 100% survival in the ECMO group was remarkable given the high death rates observed in similar babies in the past.

    Berry had little influence, however, and a team at Harvard launched a more standard ECMO trial. Berry publicly accused the Harvard researchers of killing babies, a belief he maintains to this day. That trial found ECMO to be vastly superior, and today the machines are regularly used on infants and children. Berry, however, was left deeply disheartened by his inability to prevent the Harvard study, as well as a subsequent one in the United Kingdom. In all, 58 infants—57% of those allocated to standard care—died; of those allocated to ECMO, 31 infants, or 25%, died.

    When to start?

    Berry and a cadre of others argue that, for women under 50, mammograms bring more harm than gain.


    Outside M. D. Anderson, true Bayesian clinical trials remain rare. “We don't use Bayesian designs here because I think the system works reasonably well without them,” says David Harrington, the lead biostatistician at Dana-Farber. Both Harrington and Ross Prentice, a biostatistician at the Fred Hutchinson Cancer Research Center, say that a well-designed Bayesian trial should reach the same conclusion as traditional methods. But Prentice worries that when it comes to Bayes, “are you making more assumptions, [and] are those assumptions having more weight than they should?”

    The integration of priors into a Bayesian design is among the most deep-seated concerns cited. Some worry that priors could perpetuate incorrect or anomalous early data. David Spiegelhalter, a Bayesian statistician at the Medical Research Council in the United Kingdom, admits that he's seen some disastrous Bayesian analyses that do just that. One example he cites is a quality-control comparison that may have erroneously promoted one hospital over another. The drug field, he says, still has few Bayesian analyses, but “I'm dreading the first time something high-profile comes up that hasn't been done well.”

    Preaching the word

    At Anderson, evidence of Berry's influence is plain. His department exploded from a dozen people to 133. (Many, although not all, of the 60-odd statisticians are Bayesian.) Anderson now insists that companies or hospitals collaborating with it on trials examine the data throughout. Berry also presses hard to spread Bayesian teachings outside Anderson's walls; his close colleague Peter Thall co-taught a 3-day course in December to 100 FDA statisticians.

    Bayesian believers like Berry and Thall know that getting FDA's stamp of approval will be crucial. But although FDA regulators who approve devices have long accepted Bayesian designs, the drug staff remains uncertain.

    Rumors at Anderson about FDA “closet Bayesians” notwithstanding, the agency so far has approved only one drug on a Bayesian platform. It's a pill that combines pravastatin (Pravachol), an existing cholesterol-lowering drug, with aspirin. Approval was based in part on a Bayesian analysis that made it easier to synthesize information from five previous trials, and to allow for diverse sets of patients within each of those studies. FDA approved the combination drug in June 2003.

    Scott Berry tells the pravastatin story with pride. Father and son launched Berry Consultants in 2000, and it worked with the drug's manufacturer, Bristol-Myers Squibb, to shepherd it through approval. “Most of our meetings take place between 12 and 1 a.m.,” says Scott, who's the company's sole full-time employee.

    Because of potential conflicts of interest with Anderson, Berry Consultants rarely advises on cancer. The overwhelming majority of its business is medical, however, such as helping the device firm Medtronic gain approval for an improved shunt for infants with hydrocephalus. Some companies seek out Berry Consultants in the wild hope that a drug or device that's performed poorly in traditional trials can somehow undergo a Bayesian resurrection. (Such a “rescue analysis” is rarely a possibility, both Berrys agree.)

    His colleagues may be nearing retirement, but Don Berry isn't ready to slow down anytime soon. He's been a workaholic for as long as Scott can remember. All four of the Berry boys played ice hockey as children, and Scott remembers his father attending games back in the 1970s with work and a clipboard in hand. Goalie Scott would see his father watching as the puck slid toward his net. But once it glided safely away, Scott would glance up and spot his father braced against the clipboard, scribbling away.

    Making Sense of a Heart Gone Wild

    1. Dana Mackenzie*
    1. Dana Mackenzie is a writer in Santa Cruz, California.

    Armed with computer models, interdisciplinary teams of researchers are studying what triggers life-threatening fibrillation—and the even deeper mystery of why it can be stopped

    Richard Gray, a biomedical engineer at the University of Alabama, Birmingham, studies the heart for a living, but last year the heart's mysteries struck close to home. Gray's 68-year-old father called 911, complaining of chest pain. The paramedics were already on the scene when he suddenly collapsed. He had gone into ventricular fibrillation—his heart running amok, its muscle fibers all marching in time to their own drummers instead of beating in unison.

    Ventricular fibrillation is a death sentence if not treated within 10 minutes, but John Gray was in luck. A member of the rescue squad applied the paddles of a defibrillator to his chest and with a whomp of electricity shocked his heart back into its normal rhythm.

    Hundreds of times a day, defibrillation resuscitates people who would otherwise die in minutes. For implantable cardioverter defibrillators (ICDs), the success rate exceeds 99%. (External defibrillators, like the one used by the rescue squad, have a lower success rate, primarily because they are not always applied in time.) It's a true medical miracle—and as befits a miracle, no one can explain why it works. “We don't even know how the electric current goes into the heart,” says Gray. Nor does anyone really know how ventricular fibrillation gets started, or why a big shock brings it to an end.

    Gray is one of many bioengineers and heart specialists who expect the answers to emerge from mathematical models of the heart. Researchers are experimenting with virtual hearts in part because it is easier than tinkering with a living, beating one. And there is no way to look beneath the surface of a real animal heart. As Alan Garfinkel, a cardiologist at the University of California, Los Angeles, puts it, “You can't get the light into the meat.”

    Raging current.

    In a fibrillating heart, the normal bottom-to-top electrical activity of the ventricles (above) is replaced by spiraling scroll waves (right).


    So far, mathematics has answered some questions but raised others. James Keener, a mathematician at the University of Utah, Salt Lake City, says that if defibrillation worked the way most experts think it does, then we would have a lot more dead patients. “If we invoke the prevailing theory, the probability of success is no greater than 20% in our numerical simulations—regardless of the amplitude of the shock,” says Keener. “Yet defibrillators have a success rate that approaches 100% as the shock gets larger. So we have a problem.”

    Some people might argue that this is a good problem to have. If the treatment works, who cares that no one understands why? Garfinkel, for one: “I would urge that electrical defibrillation, the delivery of a huge, painful shock by an implanted $40,000 device, is neither a medically satisfactory solution, nor does it represent any scientific insight into the phenomenon,” he says. If cardiologists could understand fibrillation from first principles, he argues, they might be able to improve the treatment with less expensive equipment, less painful and damaging shocks, and potentially with antiarrhythmic drugs, which have until now been an embarrassing flop.

    The mathematical heart

    Mathematical models showed long ago that there is some method to the apparent madness of the fibrillating heart. Ventricular fibrillation is first and foremost a malfunction in the heart's electric circuitry. In a normal heartbeat, electrical activity starts near the top, in the atria; shoots to the bottom along special highly conductive muscle cells; and rises through the ventricles before dying away. When ventricular fibrillation sets in, however, one or more “spiral waves” of electrical activity start pinwheeling around in the cardiac muscle, like the “Mexican wave” in a soccer stadium. Just like sports fans, the cells in the ventricles start paying more attention to the wave than to the game. That is, they ignore the normal pacing signals coming from the atrium.

    [Figure: Calming the waves. Models say the heart's "virtual electrodes" must cover spiral-wave cores—a near impossibility.]


    Heart modelers who study fibrillation are more interested in the electrical behavior of the heart than its mechanical pumping, because the electricity drives the pump. So they begin with what mathematicians call a reaction-diffusion equation, which expresses the two ways that electrical signals travel through the heart: by diffusion of ions from cell to cell through gap junctions, and by currents that pass through the cell membranes.
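
    In schematic form (a generic template, not the specific equations any of these groups solves), such a reaction-diffusion model couples the transmembrane voltage V to one or more "recovery" variables w:

        \[
        \frac{\partial V}{\partial t} \;=\; \nabla \cdot (D \nabla V) \;-\; \frac{I_{\mathrm{ion}}(V, w)}{C_m},
        \qquad
        \frac{\partial w}{\partial t} \;=\; g(V, w).
        \]

    The diffusion term captures current spreading from cell to cell through gap junctions; the ionic term I_ion lumps together the currents crossing the cell membranes. The choice of I_ion and g is what distinguishes one heart model from another.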

    Valentin Krinsky, a Russian biophysicist, and Arthur Winfree, an American mathematician, were the first to realize that rotating spiral waves arise naturally as solutions to reaction-diffusion equations. Two interwoven spirals emerge from an inactive core or “phase singularity,” like the two flavors in a spiral lollipop: a spiral of active cells and a spiral of resting cells. The spirals rotate as individual cells take turns charging up or relaxing. In three-dimensional models, the spirals become scrolls wrapped around a “filament” that meanders through the heart muscle.
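
    The following sketch, an illustration rather than any group's production code, shows how readily such spirals appear: it integrates a simple two-variable excitable medium (Barkley-style kinetics) with explicit Euler steps, starting from a deliberately "broken" wavefront. The grid size, parameters, and initial conditions are assumed values chosen for demonstration.

        import numpy as np

        # Illustrative parameters for a Barkley-type excitable medium (assumed values).
        nx = ny = 200              # grid points
        dx, dt = 0.4, 0.01         # space and time steps (dimensionless units)
        a, b, eps = 0.75, 0.02, 0.02

        u = np.zeros((ny, nx))     # fast, voltage-like variable
        v = np.zeros((ny, nx))     # slow recovery variable

        # Broken initial wave: excite a strip in one half of the domain and make the
        # other half temporarily refractory, so the free end of the front curls up.
        u[: ny // 2, :10] = 1.0
        v[ny // 2 :, :] = 1.0

        def laplacian(f):
            """Five-point Laplacian with no-flux (reflecting) boundaries."""
            fp = np.pad(f, 1, mode="edge")
            return (fp[:-2, 1:-1] + fp[2:, 1:-1] +
                    fp[1:-1, :-2] + fp[1:-1, 2:] - 4.0 * f) / dx ** 2

        for step in range(20000):
            uth = (v + b) / a                      # local excitation threshold
            du = laplacian(u) + u * (1.0 - u) * (u - uth) / eps
            dv = u - v
            u += dt * du
            v += dt * dv

        # A snapshot of u should now show a rotating spiral arm around a quiet core,
        # e.g. via matplotlib.pyplot.imshow(u).

    Run on a volume instead of a sheet, the same recipe yields scroll waves wrapped around filaments, the three-dimensional analog.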

    It took experimenters a while to catch up, because it is not easy to map the heart's electric field as it races around the organ five times per second. Voltage-sensitive dyes solved the problem, and by 1993 researchers at the State University of New York (SUNY) Upstate Medical University in Syracuse had produced the first video images of spiral waves on the surface of a rabbit heart: ghostly yin-yang shapes twirling on a heart-shaped radar screen, many times per second. Scroll waves have not been seen yet, because no one has imaged electric fields inside the muscle.

    Although the basic reaction-diffusion models explain why spiral waves exist, they do not explain how to start or stop them. In fact, Keener says, “the heart is the only medium in which we know how to get rid of these waves. The mechanism that works in cardiac tissue doesn't work in any chemical oscillators. It's special, it's unique—and that makes it a mystery.”

    As in any mystery, there is an abundance of suspects—perhaps too many. One of them is the “virtual electrode” theory. About a decade ago, John Wikswo, a biomedical engineer at Vanderbilt University in Nashville, Tennessee, noticed that an electric current applied to the heart creates several spots of positive and negative voltage—not just a single spot under each electrode of the defibrillator. Wikswo explained this observation with a model that treats the heart as if it were a coaxial cable. Like a coaxial cable, the heart has two different conductors: the insides of the cells and the outsides. The positive and negative spots, or “virtual electrodes,” are places where current is flowing through the cell membranes, from one conductor to the other. Those transmembrane currents, Wikswo and others believe, are responsible for calming the spiral waves and returning the heart to normal.
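
    The usual mathematical form of this two-conductor picture is the so-called bidomain model, which treats the intracellular and extracellular spaces as interpenetrating conductors coupled through the membrane. One common way to write it (with V_m the transmembrane voltage, phi_e the extracellular potential, sigma_i and sigma_e the two conductivity tensors, beta the membrane area per unit volume, C_m the membrane capacitance, and I_ion the ionic current) is:

        \[
        \nabla \cdot \big( \sigma_i \nabla (V_m + \phi_e) \big) \;=\; \beta \Big( C_m \frac{\partial V_m}{\partial t} + I_{\mathrm{ion}} \Big),
        \qquad
        \nabla \cdot \big( (\sigma_i + \sigma_e) \nabla \phi_e \big) \;=\; -\,\nabla \cdot ( \sigma_i \nabla V_m ),
        \]

    with any applied defibrillation current entering through the second equation or its boundary conditions. In models of this kind, virtual electrodes appear wherever the intracellular and extracellular conductivities are anisotropic to different degrees, so that an applied current is forced across the membrane far from the physical electrodes.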

    Unfortunately, the coaxial-cable approach predicts that transmembrane currents should exist only in the outer millimeter of the heart—not deep enough to stop fibrillation. Natalia Trayanova, a biomedical engineer at Tulane University in New Orleans, Louisiana, has shown that by taking into account the electrical effects of the twisting fibers that make up heart muscle, a 3D model can create virtual electrodes extending into the interior of the heart. Trayanova's simulation gets rave reviews from some of her colleagues: “I can't say enough good things about Trayanova's model,” says Paul Belk, a biomedical engineer at Medtronic Inc. in Minneapolis, Minnesota, which manufactures defibrillators. But Wikswo warns that some of the assumptions in the model have not been tested in the lab.

    Keener, however, thinks virtual electrodes alone can't explain defibrillation. A virtual electrode, Keener says, would have to cover a scroll wave filament completely to wipe it out (see figure). It's like fighting a forest fire with fire: You have to completely surround it, or it might escape. Even then, the shock has to be timed just right, or it will just trigger another wave of fibrillation. And as a fibrillating heart has several “fires” burning at once, it is very unlikely that they will all be doused at one stroke.

    Keener thinks the real key to defibrillation lies at the level of the cells themselves. The cell membrane, he says, responds more vigorously to a positive voltage than a negative one. Applying a positive voltage to it is like pouring gasoline on a fire: It helps release the energy already stored inside the dry wood. The energy propagates from cell to cell until the “fire”—the region of positive electric potential—engulfs the whole heart. Then the fire burns out, and the scroll waves are gone. If the individual cells do indeed act as batteries, then the electric potential should follow a “sawtooth” profile, with one end of each cell positive and the other negative. Earlier experiments failed to detect such a sawtooth, but Arkady Pertsov, a biophysicist at SUNY Upstate Medical University, is gearing up to search with new technology.
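
    In an idealized version of this picture (a schematic statement of the prediction, not necessarily the exact form used in Keener's analysis), a uniform applied field E along a chain of cells of length L polarizes each cell individually: the transmembrane potential rises roughly linearly from one end of a cell to the other and jumps back at each gap junction,

        \[
        V_m(x) \;\approx\; \kappa\, E \left( x - x_n - \tfrac{L}{2} \right), \qquad x_n \le x \le x_n + L,
        \]

    where x_n marks the left end of the n-th cell and the factor kappa (at most 1) depends on how strongly neighboring cells are coupled. The resulting sawtooth, with peak-to-peak amplitude of order kappa E L, is the signature Pertsov's group hopes to resolve.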

    So are the mathematical models getting closer to a solution or just adding to the confusion? “I think we're getting pretty close,” says Brad Roth, a biophysicist at Oakland University in Rochester, Michigan. “I don't think the cardiologists will be convinced by mathematical models, but they will be intrigued enough to do experiments. I see our role as motivating the experiments.”

    Certainly Medtronic is taking models seriously. Belk and fellow engineer Paul de Groot use them to determine the best placement for the electrodes of an ICD. They are also working on “pain-free” defibrillators that restore the heart's normal rhythm by a gentle pacing signal instead of a giant jolt.

    “The best thing about a model is that you can see exactly what's going on,” says Belk. “You can stick 500 electrodes on the heart. [Animal experiments typically manage 30.] You can induce tachycardias on the computer and play around with different pacing schemes. But at the end of the day, you don't have to trust your model very much. You ask yourself, ‘Does that make sense?’ If the result is valid, almost invariably the answer is ‘Jeez, I should have thought of that.’”
