News this Week

Science  26 May 2000:
Vol. 288, Issue 5470, pp. 1310



    Space Telescope, Teamwork Top Priority List for Next Decade

    1. Andrew Lawler

    Most scientific disciplines would have a hard time anticipating their needs for the next 10 years. And picking priorities among a slew of projects competing for a limited budget is even harder. The exception is astronomy, where the size, cost, and scope of the tools required make the exercise essential. Last week the U.S. community issued its latest blueprint, a 164-page compendium of projects designed to persuade the federal agencies that will foot the bill.

    The study,* by a panel of the National Research Council (NRC), urges the government to spend $4.7 billion through 2010 on a new generation of ground- and space-based observatories, many to be built in partnership with other countries. Its top choice is the Next Generation Space Telescope (NGST), a proposed $1.3 billion observatory with a mirror nearly four times as large as that of the current Hubble Space Telescope. Led by Christopher McKee, an astronomer at the University of California, Berkeley, and Princeton University astronomer Joseph Taylor, the panel also makes a plea for private and public observatories to set aside their differences and proposes a National Virtual Observatory to store vast amounts of astronomical data. In addition, the panel urges NASA to place greater emphasis on medium-sized missions.

    If the past is any guide, the exercise should pay off handsomely. The previous study, a 1991 effort chaired by Princeton astronomer John Bahcall, recommended $4 billion worth of initiatives, most of which have either been built or are under construction. The secret, say science managers, is to select a limited number of priorities. “This is not just a typical ‘Please send money’ report,” says William Wulf, president of the National Academy of Engineering and NRC vice chair. “This represents tough choices.” One such choice involved dumping a proposed Space Ultraviolet Observatory because its technology, according to McKee, is not advanced enough.

    The listed projects closely match the intentions of NASA and the National Science Foundation (NSF), agency officials say. “I'm delighted with this report—it's fantastic,” says Anne Kinney, science chief of NASA's origins program, which oversees the $1.7 billion Terrestrial Planet Finder, ranked third among space-based missions and a personal favorite of NASA Administrator Dan Goldin.

    The list of a dozen moderately sized missions is topped by the Gamma Ray Large Area Space Telescope, a joint NASA and Department of Energy effort with 30 times the sensitivity of instruments aboard the soon-to-be-jettisoned Compton Gamma Ray Observatory. Smaller missions were not ranked, except for the first: a National Virtual Observatory to enable “a new mode of research” for astronomers by providing a one-stop shop for data as well as a potent tool for public education. The panel also called for agencies to put a stronger emphasis on funding astrophysical theory, data archiving and mining, and laboratory astrophysics.

    A more daunting task is reorganization of the U.S. optical community, now divided between independent and government-funded observatories. “All facilities, whether nationally or independently operated, should be viewed as a single integrated system,” according to the report. Whereas the solar and radio astronomy communities work together closely, those in the optical community tend to go off on their own, say researchers.

    The report recommends fixing the problem by having NSF fund instruments at private observatories in exchange for viewing time by outside astronomers. Past efforts have had limited success, say several researchers and NSF officials. “I still think it is a brilliant suggestion worth pursuing,” says Hugh Van Horn, director of NSF's astronomical sciences division, who hopes the panel's support will help bring the two communities together. Toward that end, the report proposes that the National Optical Astronomy Observatories, a Tucson, Arizona, organization operated by the Association of Universities for Research in Astronomy (AURA), divest itself of its solar observatory and make unity its major focus. AURA president Bill Smith says he's in “total agreement” with the recommendations.

    But striking the right balance may be difficult. Joseph Miller, director of the independent Lick Observatory based in Santa Cruz, California, says many independents feel the idea is “grossly unfair,” as NSF grants would come with strings attached and with an unfavorable ratio of time in exchange for dollars. Miller, who backed a similar proposal in 1995, says the concept could work with a revamped formula and if new monies are found to pay for the effort.

    On the space-based end, the panel urges NASA to accept more diversity in mission sizes. Large efforts such as NGST are well supported, it notes, and advocates of smaller missions can tap programs such as Explorer and Discovery. But nobody is watching out for medium-sized missions, panel members say. “They often slip through the cracks,” says Blair Savage, an astronomer at the University of Wisconsin, Madison, who served on the panel's policy subcommittee. To emphasize this need, the panel included five moderate-sized space initiatives in its report. Whether they will all fit in NASA's limited budget is another matter: Although NGST and the Terrestrial Planet Finder are included in the agency's long-term budget as well as the strategic plan, several others are in the plan but not the budget.

    Not everyone is happy with the NRC study. Jon Morse, an astronomer at the University of Colorado, Boulder, complains that the committee excluded the Space Ultraviolet Observatory because it favored a larger and more complex spacecraft, making it too ambitious for this decade. But Morse hasn't given up: This week he's meeting with NASA engineers and scientists to improve the observatory's chances of making the cut in the next decadal report.

    View this table:

    UC Teaching Assistants Win First Union Contract

    1. Constance Holden

    The University of California (UC) system has agreed to a union contract covering some 8000 teaching assistants (TAs), capping a 16-year fight by graduate students for a labor agreement with their employer. The contract, between UC and the United Auto Workers (UAW), includes an immediate 1.5% pay raise and creates a mechanism for overtime pay as well as limits on the workweek. However, it exempts academic matters from the collective bargaining process, removing a major sticking point among faculty during the yearlong negotiations that ended last week. The pact must still be ratified by each of the system's eight general campuses.

    “TAs are choosing unions,” exults Christian Sweeney, a union spokesperson and a graduate student in history at UC Berkeley, pointing to similar efforts by graduate students at the University of Washington and the University of Illinois, Urbana-Champaign, that so far have come up short. The UC agreement covers a small minority of UC's 39,000 graduate students and excludes some 8000 research assistants, most of whom are paid through research grants. University officials don't anticipate a major disruption of academic life. “It's unlikely to have a significant impact” on graduate education in science at UC Los Angeles, says psychiatrist Robin Fisher, associate dean of the graduate school.

    The key issue in the lengthy California battle has been whether teaching assistants are primarily students or employees. The university argued that they were not employees and could not form a union. But in December 1998, the state Public Employee Relations Board ordered the university to allow a vote on unionization among TAs as well as undergraduate tutors and readers. After a systemwide strike (Science, 11 December 1998, p. 1983), the students voted to join the UAW-affiliated Association of Graduate Student Employees.

    The new contract gives the university “sole authority on all decisions involving academic matters” and exempts “workload” disputes from arbitration while limiting how often the usual workload of 20 hours per week can be exceeded. It also establishes arbitration for settling nonacademic complaints about health and safety, discrimination, and sexual harassment, and provides for the eventual full remission of tuition fees. Most TAs now earn about $13,500 a year, and UC spokesperson Mark Westlye says he doesn't know where in the budget the extra money for the raises and tuition will come from.

    Peter Miller, a union organizer at the University of Illinois, says the impetus for unionization has come from arts and humanities grad students, who get less money from research grants and spend more years teaching than do science majors. One UC scientist who requested anonymity calls the agreement “a reasonably good compromise,” although he thinks everyone would be better off without it: “The TA-faculty relationship has been good; there wasn't much of anything that really needed to be fixed.”

    With UC grad students now in the UAW fold, Sweeney says, the number of unionized grad students in the country has doubled to about 20,000. Although the University of Wisconsin and several other public universities have had graduate student unions for some time, most administrations still oppose unionization. Illinois graduate students voted 3 years ago for a union that the university refuses to recognize, and the University of Washington has rebuffed student attempts to unionize on the grounds that state law does not allow it. “Our TAs are students first; they are only employed as a result of their continuation of successful educational pursuits,” says Steven Olswang, vice provost of the University of Washington. New York University is appealing an April ruling by a regional office of the National Labor Relations Board that graduate students can be regarded as employees.

    Although Olswang argues that union membership undermines the collegial relationship and tips it toward the adversarial, UC has apparently decided to make the best of the situation. Says Westlye, “Now that we have unionization, we will proceed in as copacetic a fashion as we can.”


    State Ready to Create Three Research Institutes

    1. Evelyn Strauss

    Lured by $300 million in state money, California university officials are scrambling to compete for three planned interdisciplinary research institutes that would be set up under a bill speeding through the state legislature. The legislation—introduced this winter as a way to enhance the state's already strong science and technology base—has won bipartisan support and could be approved as early as next week.

    Even before the bill is approved, researchers are putting together proposals. The prize will be $25 million a year over 4 years for each institute, and the winners must raise twice that amount from outside sources. Only the 10 University of California (UC) campuses, including the new Merced site, are eligible to host the new institutes, but collaborators can come from any university in the state.

    “The bill promotes technological and scientific research and training to maintain California's leadership,” says Antonio Villaraigosa, former speaker of the California assembly, who introduced the bill at the request of Governor Gray Davis. “These institutes will concentrate resources and mobilize the state's best scientists and engineers in medicine, biotechnology, telecommunications, energy, space, and agriculture.”

    The idea for the California Institutes for Science and Innovation grows out of the state's economic boom and a rare political alignment in which Democrats control the governor's mansion and both legislative houses. It's a departure from most other state science and technology investments, which tend to focus on short-term economic development rather than basic research. It also differs from the typical state investment in university buildings, made without regard to the potential commercial value of the research going on inside. “States spend billions of dollars on university infrastructure, but most of it is not targeted for an economic payoff,” says Dan Berglund, executive director of the State Science and Technology Institute in Columbus, Ohio. Adds Robert Conn, dean of the Jacobs School of Engineering at UC San Diego (UCSD), “The commitment of the state to provide this infrastructure is extraordinary. It's bold thinking.”

    Conn is part of a team at UCSD and UC Irvine drafting a proposal for an institute in telecommunications and information technologies that would build on existing research efforts and local expertise. Officials have already made progress in lining up outside donations, Conn says, including a $15 million pledge from QualComm, the wireless communications giant headquartered in San Diego. In a similar vein, UC Davis, the state's agricultural campus, is proposing an institute on environmental informatics and technology to develop new production methods in agriculture and other sectors. “We want to improve the economy without damaging the environment,” says Kevin Smith, Davis's vice chancellor for research.

    The Los Angeles and Santa Barbara campuses are jointly proposing a nanosystems institute that would also have a heavy emphasis on developing a cadre of researchers for this emerging area. “The institute would provide core facilities that otherwise would be difficult to build or to use because we wouldn't have a critical mass of investigators,” says Roberto Peccei, dean of physical sciences and interim vice chancellor for research at UCLA. “One of the products of this institute will be grad students and undergrads who will have some real understanding of cross-disciplinary fields. You can get a lot of people to work together who otherwise wouldn't.”

    Although the bill has yet to be approved, the UC system has asked campuses to submit their ideas by the end of this month. Experts in the relevant fields will review them for scientific and educational merit, along with the importance of the work to the state's economy, says UC official Susanne Huttner, and from that pile will come a final round of submissions in September. Any California campus, public or private, can collaborate on a proposal, and a single campus can submit more than one idea. But no university can land more than one institute.

    Although the state money is intended for bricks and mortar, state officials say that the outside funding—from federal agencies, foundations, and industry—may be used for research activities and operating costs. And while three new institutes pale in comparison to the number of existing centers on California campuses, they would represent a significant part of the state's investment in basic research (see pie chart). In addition, the absolute amount is nothing to sneeze at, says Berglund: “$300 million over 4 years is a big number.”


    Dioxin Draft Sparks Controversy

    1. Jocelyn Kaiser

    Even before it is released, the U.S. Environmental Protection Agency's (EPA's) new report on dioxin is creating a furor. A draft, leaked to The Washington Post last week, concludes that dioxin is 10 times more likely to cause cancer than previously believed, posing a risk as high as 1 in 100 among the most exposed individuals. Some scientists immediately blasted the findings as “unbelievable,” while acknowledging that they had not seen the report. Even before the leak, concern from other federal agencies about public anxiety prompted the White House to organize an interagency review of the draft, which has yet to undergo review by EPA's Science Advisory Board.

    Five years ago, that same board kicked an earlier version of the dioxin reassessment back to EPA for revision, calling it scientifically flawed (Science, 26 May 1995, p. 1124). The 1994 assessment concluded that low levels of dioxin could be causing significant reproductive, immune, and developmental effects and retained dioxin's label as a “probable” carcinogen. After analyzing the evidence again—including new analyses of occupational studies, according to an EPA scientist—the agency has now upped dioxin's classification to “known” carcinogen. Although dioxin levels in the environment have been steadily falling since the 1970s, exposure continues once the pollutant is ingested, because it bioaccumulates in body fat.

    Michael Gough, a retired biologist and dioxin expert, asserts that “there is no convincing evidence” that dioxin is 10 times more potent a carcinogen than previously estimated. Scientists who helped write the report counter that evidence of higher potency is solid and should not be surprising: Even in 1994, EPA had recognized that dioxin has a much longer half-life in humans than in rats. What's more, the International Agency for Research on Cancer recently classified dioxin as a known carcinogen as firmer evidence has emerged from studies of workers exposed to dioxins. But even one scientist who co-authored a key chapter of the new study cautioned against reading too much into EPA's characterization of a 1-in-100 risk. “That's at the very upper edge of body burden in the United States,” says Chris Portier of the National Institute of Environmental Health Sciences.

    Meanwhile, EPA officials point out that the report is not final; the agency intends to release a draft for public comment in June and send it for final scientific review in September. With this inauspicious debut, the report seems sure to attract a wide audience.


    New Clues to How the TB Bacillus Persists

    1. Ingrid Wickelgren

    The tuberculosis bacterium has been called the world's most effective pathogen. It kills an estimated 2 million people each year and lurks in one-third of the world's population. In most, the infection remains latent, but in some it explodes into disease years to decades later. Just how this organism manages to persist silently in the body in a kind of truce with the immune system has long been a mystery—one just now beginning to be solved.

    Two groups, one led by Stanford University microbiologist Stanley Falkow and the other by microbiologist William Jacobs of Albert Einstein College of Medicine in New York City, have identified genes that may be required for persistent TB infection. If so, the proteins made by the genes could be good targets for new drugs against TB, which are badly needed because the pathogen is becoming resistant to current medications. The work is “clever and exciting,” says infectious disease specialist William Bishai of Johns Hopkins School of Public Health in Baltimore. “If we understand how the organism adapts and persists, we'll be much better off in terms of drug development.”

    For their work, which is described on page 1436, the Stanford team used an animal TB model: the infection of frogs by Mycobacterium marinum, a close cousin of the TB bacillus, M. tuberculosis. The researchers chose M. marinum because it grows faster than M. tuberculosis and is not spread through the air. But it causes latent infections in frogs, similar to those in human TB. In particular, the animals develop a pathological hallmark of persistent TB infection: clusters of immune cells called granulomas where the bacteria live for years.

    To identify the genes that allow M. marinum to persist, Falkow, with Stanford colleagues Lalita Ramakrishnan and Nancy Federspiel, attached the gene for a green fluorescent protein to random fragments of M. marinum's genome and then inserted the fragments, carried on circular pieces of DNA called plasmids, into normal strains of the bacterium. That way, if a fragment contained a gene that became active, its bacterial host would glow green, and the gene could be identified by its location on a plasmid.

    Using this technique, Ramakrishnan and her colleagues found a dozen genes that M. marinum turns on when living inside frog granulomas but not when growing on its own. Most of them have close counterparts in M. tuberculosis, whose genome was sequenced in 1998. Two in particular caught the researchers' interest, because they are members of a family of genes called PE-PGRS that comprises 5% to 10% of the M. tuberculosis genome. These genes are blueprints for bizarre-looking proteins that contain hundreds of glycines in a row and whose functions are unknown. Intrigued, the Stanford researchers knocked out one of the PE-PGRS genes in the frog bacterium.

    When they put these mutants into cultured macrophages, the main type of immune cell in granulomas, the bacteria didn't grow at all. And they grew only poorly in frog granulomas, with the number of bacteria in the frogs reduced to 1/50 that of wild-type infections. That suggests that at least some of these PE-PGRS family members play a role in persistence. “This is the only demonstration that these genes have any function,” says Eric Rubin, a microbiologist at the Harvard School of Public Health in Boston.

    Others aren't convinced that the Stanford group's finding casts any light on human TB, however. Ian Orme, a TB researcher at Colorado State University in Fort Collins, thinks the frog model is a poor one, in part because M. marinum is relatively harmless to mammals and must be injected in “monster doses” to produce signs of infection in frogs. “I'm not arguing that frogs are people,” counters Falkow. “But marinum and tuberculosis have a common ancestor, and they share a common pathogenic apparatus.”

    One way to settle the argument would be to knock out the PE-PGRS gene in M. tuberculosis itself and see whether that affects the organism's virulence. Such an effort could be greatly facilitated by a method developed by the Einstein researchers in the course of their work, which appears in the April issue of Molecular Cell.

    Jacobs also started with a safer organism: bacillus Calmette-Guérin (BCG), used in the TB vaccine. Two years ago, he and Einstein colleagues Michael Glickman and Jeffery Cox began randomly knocking out genes in BCG looking for mutants that failed to form cords, a phenomenon in which the microbes join in long ropelike structures in culture. Because such “cording” seems to correlate with high virulence, Jacobs reasoned that pinpointing genes needed for cording might lead him to virulence genes.

    The researchers found that mutations in a BCG gene that makes an enzyme called cyclopropane synthase prevent the bacteria from forming cords. They then went on to inactivate the comparable gene in M. tuberculosis. In the past, producing such gene knockouts took 6 months, in part because it's very hard to get gene-disrupting DNA fragments into the TB genome. But the group used a rapid new technique co-developed by Einstein's Stoyan Bardarov, which uses a TB-infecting virus to insert the fragments, and so achieved its goal in just 3 weeks.

    The mutant TB strains thus obtained also failed to form cords. What's more, they were less virulent. Control animals infected with the unaltered microbe all died from the infection after about 7 months. By contrast, the mutant bacteria grew rapidly in their hosts for a few weeks, but then began to die off slowly, without killing any of the mice. These results, Jacobs says, show that the enzyme is critical not only to the ability of M. tuberculosis to cause disease but also to its ability to persist. “This is the first persistence mutant ever isolated,” claims Jacobs.

    Others question whether the gene is involved in persistence, partly because the mutant strains did not rapidly disappear in the mice after the initial stage of infection; thus, knocking out this gene did not dramatically cripple the bacterium's ability to persist. “It's an interesting gene, but I wouldn't consider it the crux of persistence,” says Bishai.

    But nobody is questioning the impact of the new technology for creating TB mutants, which Harvard's Rubin hails as “a huge advance.” Indeed, Jacobs says that in the past 6 months his team has created about four times as many TB mutants (32) as have been published to date.

    Both groups have more work to do in figuring out what role the genes they have identified might play in TB virulence and persistence. But in any new knowledge, there is hope. Says microbiologist Michael Mahan of the University of California, Santa Barbara: “The payoff is huge when you really understand a disease.”


    HHS Plans to Overhaul Clinical Research Rules

    1. Eliot Marshall

    Health and Human Services (HHS) Secretary Donna Shalala announced last week that the government intends to issue new guidelines and regulations designed to protect human subjects who participate in clinical trials. Curiously missing from the announcement, however, was the final word on a long-anticipated reorganization of HHS's framework for monitoring patient safety in clinical research.

    The department is planning to appoint a “czar” who will run a new office in HHS that will coordinate efforts by 17 agencies to protect human research subjects. According to several sources, the job is being offered to Greg Koski, an anesthesiologist and director of human research affairs at Massachusetts General Hospital in Boston. But a deal had apparently not been consummated in time for last week's announcement, which came on the eve of congressional hearings on the topic, and the department was left proclaiming a new policy but not the person who will implement it.

    The announcement said HHS plans to ask Congress for civil penalties for lapses in obtaining informed consent from research subjects—up to $250,000 per clinical investigator and $1 million per institution. Both the National Institutes of Health (NIH) and the Food and Drug Administration are drawing up new guidelines on obtaining consent. HHS also plans an “aggressive effort” to train clinical investigators and members of Institutional Review Boards (IRBs) on the use of human subjects. And NIH intends to clarify its guidelines on conflict of interest to ensure that “any researcher's financial interest in a clinical trial [is] disclosed to potential participants.”

    As for the new czar, Shalala said earlier this month that the job will go to “someone who is experienced and is a world-class leader” but is from “outside the bureaucracy.” That formula clearly ruled out one top candidate: Gary Ellis, the current director of the Office for Protection from Research Risks (OPRR), a small office at NIH that watches over institutions receiving federal money for clinical research. Ellis confirms that he will not go to HHS when the office moves but instead will be offered other employment at NIH. Under Ellis, OPRR experienced a sudden change of style. After being criticized in Congress in 1998 for not taking the initiative, OPRR came out with guns blazing over the past year and a half, shutting down half a dozen prestigious clinical research programs for noncompliance. Ellis's high-impact style may have cost him some support among university chiefs, observers say. But it also won praise. One leader in the field says Ellis “raised the public consciousness.” He also got an endorsement from Representative Dennis Kucinich (D-OH), who said this month that Ellis's record was “the only bright spot” in a “dismal area” of federal oversight of human research subjects.

    Koski—if he is the appointee—will clearly be wading into a contentious political job. An assistant professor who has spent the past 30 years at Harvard and Harvard Medical School conducting basic research, clinical medicine, and teaching, Koski declined comment.

    One of the czar's first tasks will be to answer questions about federal monitoring of experiments such as the gene therapy trial in which an 18-year-old died last September (Science, 12 May, p. 951). The Senate public health subcommittee has scheduled a hearing on this case for 25 May. Meanwhile, a House Government Reform subcommittee, chaired by Representative John Mica (R-FL), is asking HHS to respond to criticisms from its own Office of Inspector General. Testifying before Mica's subcommittee on 3 May, HHS Deputy Inspector General George Grob called for “a greater sense of urgency” in improving the oversight of clinical research. He said that IRBs, in particular, need more resources.

    “There's going to have to be a much greater investment in IRBs,” agrees LeRoy Walters, director of the Kennedy Institute of Ethics at Georgetown University in Washington, D.C. “We have not modernized the research oversight system in the same way that we have modernized the research itself.” Creating the new office at HHS and establishing an outside advisory panel is “a step in the right direction,” says Walters. But he cautions that even this may not be enough: Ultimately, he says, an independent clinical research monitoring agency may be necessary.

    Ralph Snyderman, president and CEO of the Duke University Health System, agrees that reform is needed, even if it costs more initially to beef up the IRBs, because “protecting human subjects may be the most important thing we do.” He hopes the federal government will help pay for the cost. But he's “not passionate” about the idea of creating a new independent agency. His advice to the new czar: Begin by taking stock of OPRR's existing regulations and methods and consider “whether they are doing what they're supposed to do.”


    A Refuge for Life on Snowball Earth

    1. Richard A. Kerr

    When a small group of geoscientists recently revived the snowball Earth hypothesis—the idea that the planet froze from pole to pole 600 million years ago—some scientists raised serious doubts. Although geologists have found evidence of extreme glaciation at that time, the paleontological record shows that complex life passed through unscathed. How could early life have weathered such a horrendous environmental catastrophe without suffering a mass extinction (Science, 10 March, p. 1734)? How could algae and perhaps even early animals have survived 10 million years sealed off by globe-girdling ice? Now climate modelers say that their most realistic models offer a possible resolution of the conundrum: In the tropics, climatic amplifiers built into clouds, winds, and currents may have counteracted the chilling effect of ice and snow.

    Modelers trying to simulate the world of the Neoproterozoic era 600 million years ago are finding that the more realistic the climate model, “the harder it is to create climate change,” says Mark Chandler of NASA's Goddard Institute for Space Studies (GISS) in New York City. “We've used the most realistic [model] boundary conditions to date, and we have not been able to freeze over the planet.”

    Work by Chandler and other modelers contradicts the so-called “White Earth solution” that climate modeler Mikhail Budyko of the Leningrad Geophysical Observatory came across in the 1960s. Working with the simplest possible climate model—a one-dimensional rendition of incoming solar radiation and Earth's outgoing radiation—Budyko found that if the planet's ability to retain heat through its carbon dioxide greenhouse were to weaken for any reason, then white, highly reflective snow and ice would creep toward the equator. The farther it got, the more solar radiation it would reflect back into space, further cooling the planet. If the sun were dimmer than today (it was about 6% dimmer in the Neoproterozoic), this albedo feedback effect would eventually take over, the ice and snow would rush across ocean and continent to the equator, and Earth would be locked in a snowball.
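    The runaway feedback Budyko described can be illustrated with a toy zero-dimensional energy balance model. The sketch below is purely illustrative: the albedo thresholds, the linear outgoing-radiation fit, and the size of the greenhouse perturbation are common textbook-style assumptions, not numbers from the report or from Budyko's original work.

    ```python
    # Toy 0-D Budyko-style energy balance model with ice-albedo feedback.
    # Parameter values are illustrative assumptions, not from the article.

    S0 = 1361.0   # modern solar constant, W/m^2

    def albedo(T):
        """Planetary albedo as a function of global mean temperature (deg C)."""
        if T <= -10.0:
            return 0.62    # fully ice-covered: very reflective
        if T >= 0.0:
            return 0.30    # essentially ice-free
        # linear ramp between the frozen and ice-free states
        return 0.62 + (0.30 - 0.62) * (T + 10.0) / 10.0

    def equilibrium_T(S, A, T_init, B=2.09):
        """Relax temperature toward radiative balance.

        Absorbed sunlight S/4 * (1 - albedo) is balanced against outgoing
        longwave radiation A + B*T; a larger A means a weaker greenhouse.
        """
        T = T_init
        for _ in range(20000):
            net = S / 4.0 * (1.0 - albedo(T)) - (A + B * T)  # W/m^2 imbalance
            T += 0.01 * net                                   # pseudo-timestep
        return T

    A_MODERN = 203.3   # OLR intercept for roughly today's greenhouse

    # Today's sun: a warm start and a cold start settle into two different
    # stable states -- the albedo feedback makes the climate bistable.
    warm = equilibrium_T(S0, A_MODERN, 15.0)    # stays temperate (~17 deg C)
    cold = equilibrium_T(S0, A_MODERN, -30.0)   # stays frozen (~-35 deg C)

    # A 6% dimmer Neoproterozoic sun plus a weakened CO2 greenhouse
    # (A raised by an assumed 25 W/m^2): even the warm start now slides
    # down the albedo ramp and snaps into the "White Earth" state.
    snowball = equilibrium_T(0.94 * S0, A_MODERN + 25.0, 15.0)
    ```

    The bistability is the point: on the intermediate albedo ramp, absorbed sunlight changes faster with temperature than outgoing radiation does, so any state partway to the equator is unstable and the system runs away to one extreme or the other, just as Budyko found.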

    In a paper appearing this week in Nature, paleoclimate modeler William Hyde of Texas A&M University in College Station and his colleagues report results for the Neoproterozoic from a more complex, two-dimensional version of Budyko's energy balance model. The model is coupled to a second one that can grow ice sheets on land, which in turn can affect climate. When greenhouse carbon dioxide is cut to half its concentration today—say, because unusually severe weathering of continental rocks sucked carbon dioxide out of the atmosphere—Hyde's model planet ices over, just as Budyko's did.

    Albedo, however, is not the only feedback in the real world. Some loops work the other way, resisting cooling rather than reinforcing it. So Hyde and his colleagues had their model mimic one negative loop previously missing: the tendency for cooler temperatures to reduce the cloud cover and let in more warming sunlight. Even with the negative feedback, some tropical continents still iced over—consistent with geologic evidence of continental glaciation at low latitudes in the Neoproterozoic. But open water remained in the tropical oceans—a potential haven for life. “You could have open water,” says modeler Thomas Crowley, Hyde's A&M colleague, “but we want to do more simulations” to tease out just which feedbacks are most important.

    Simulations by other scientists confirm that the more realistic the model, the harder it is to freeze over the planet. Chandler and geologist Linda Sohl of Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York, will soon report in the Journal of Geophysical Research that a GISS general circulation model (GCM) also leaves tropical waters open while icing over tropical continents. Unlike the A&M energy balance model, a GCM has an atmosphere as realistic as those used in weather forecasting as well as a simplistic ocean. “We kicked the model very hard” with a faint sun and greatly weakened greenhouse, says Chandler, “and it doesn't even come close to freezing over. The geologic record is doable with reasonable conditions for this time period.”

    Using an even more complex model, Raymond Pierrehumbert and Christopher Poulsen of the University of Chicago found that the model's ocean can also be crucial. “We can get Earth to freeze easily with a slab ocean,” he says, referring to a model ocean devoid of currents. “But when we have a real ocean model, it transports enough heat [in currents] to the ice margin to hold the ice off.”

    Although these early modeling results are far too preliminary to prove that life would have had a refuge in any Neoproterozoic ice age, they offer some comfort to paleontologists. They're only models, says Guy Narbonne of Queen's University in Kingston, Ontario, “but what I find exciting is that [the A&M model] explains the geological relationships [such as low-latitude glaciation] and permits some of the things we see in the paleontological record.” On the other hand, snowball proponents remind modelers that the idea of open ocean waters shouldn't be carried too far. “You can't have the whole tropical ocean open,” says geochemist Daniel Schrag of Harvard University. “There's good evidence in the geological record that the ocean was sealed off or close to it.” Even guessing how close will take more runs of more sophisticated models with more accurate Neoproterozoic conditions.


    Three Asian Nations Launch Joint Study

    1. Michael Baker*
    1. Michael Baker is based in Seoul.

    Seoul—Pollution from China's booming industrial northeast has long rained down on its richer neighbors, South Korea and Japan, damaging ecosystems and degrading public health. But scientists in all three countries hope that a 5-year project to measure the extent of the problem will provide critical information to help China clean up its act.

    The research, which began last month, is the first of nine projects among the three countries dealing with transboundary pollution from a variety of sources. Others will focus on the effects of water pollution, acid rain, and desertification—which generates the airborne “yellow sand” that clogs Korean lungs—along with ways to meet the CO2 emissions goals of the Kyoto protocol. They are the outgrowth of agreements struck by the environmental ministers of the three countries, most recently at a February meeting in Beijing.

    The new initiative will feature computer modeling of the flow of pollutants as well as the compilation of a list of major sources. Prompted by public pressure to deal with the problem, Korea is taking the lead with a promised $6 million next year. Japan's contribution is a modest $200,000 annually for the next 2 years, while China has not yet settled on an amount.

    The project is seen as an important extension of Japan's Acid Deposition Monitoring Network in East Asia (EANET), which will soon begin to collect data after a decade spent setting up 38 monitoring sites in 10 countries, from Indonesia to Mongolia. “We expect this [new project] will develop into a regional framework for acid rain,” says Eisaku Toda, assistant director of the air pollution control division in Japan's Environmental Ministry.

    EANET, which lags 15 to 20 years behind similar tracking in Europe and North America, has been slow to get off the ground due to a limited budget and the difficulty of winning political support. Toda says the current monitoring network—which includes 10 stations in Japan, two in Korea, and nine in China—falls short of what is needed, and that Japan alone should have 40 to 50 stations. “But it's a good start,” he says.

    For years any type of international cooperation on environmental matters was plagued by political obstacles. Until the mid-1990s China “vehemently denied the existence of such a problem,” says Kenneth Wilkening of the Nautilus Institute, a nonprofit organization in Berkeley, California, that works on security and environmental issues throughout Asia. Wilkening, who is organizing a meeting this summer in Seattle to discuss the transport of air pollutants across the Pacific, says that although most of China's pollution remains within its borders, the 5% to 10% that travels abroad can represent a significant part of a neighboring country's load. Depending on which computer model is used, Wilkening says, as much as half of Japan's acid rain could be blown in from China and Korea.

    Chinese officials deny erecting any roadblocks to monitoring transboundary pollution. Instead, says a spokesperson for the environmental agency, China was reluctant to join international efforts until it had a better handle on the domestic sources, transmission routes, and impacts of the dirty air. Clean air is now a government priority, he noted, and Chinese cities regularly announce public air quality bulletins and forecasts. The country's emissions of sulfur dioxide and particulates dropped by 7.8% in 1998, he says, thanks to a campaign to reduce emissions in 47 cities hard-hit by acid rain and by the closing of many small, inefficient factories that use high-sulfur coal. “There has been equal and mutually beneficial cooperation between China and Japan and Korea on pollution control and environment management, and we hope to continue such cooperation,” the official says.

    Once the data are collected and analyzed, scientists hope they will point the way to new policies. But even if China is found to be the culprit in most of the airborne pollution, it will probably need help from its neighbors in addressing the problem. “Unfortunately, China is not a rich country,” says Park Chul Jin of Korea's National Institute of Environmental Research in Seoul. Instead, Park and others foresee Korea and Japan providing money for technical fixes and other steps aimed at curbing the problem.


    A DASH of Data in the Salt Debate

    1. Gary Taubes

    The controversy over salt, blood pressure, and public health has seemed endless and intractable. The National Heart, Lung, and Blood Institute (NHLBI) and the National High Blood Pressure Education Program, among other august bodies, recommend that all individuals, not just those with hypertension, reduce the amount of salt in their diets to lower their blood pressure and improve their health, while a good proportion of the researchers in the field believes such recommendations have not been supported by the data. As a result, an entire field has been mired in acrimony for 4 decades.

    On 17 May, Claude Lenfant, director of NHLBI, declared the controversy over. The results of DASH-Sodium, a new NHLBI-funded study to be presented the next day at the annual meeting of the American Society of Hypertension (ASH), had made the health benefits of salt reduction unambiguous, Lenfant said. After the meeting, however, the controversy showed little sign of abating.

    DASH stands for Dietary Approaches to Stop Hypertension. DASH-Sodium is the sequel to the original DASH study published in April 1997, which suggested that blood pressure could be reduced dramatically by eating a diet rich in fruits, vegetables, and low-fat dairy products. Salt was not a factor in the original DASH study, which made the blood pressure reductions that much more noteworthy.

    In DASH-Sodium, a collaboration of five institutions, investigators tested both the DASH diet and a control diet, similar to that of the average American, at three levels of salt intake—8 grams a day, which is slightly less than the average American's intake; 6 grams, equivalent to the current government recommendations; and 4 grams. The investigators randomly assigned 412 subjects with either hypertension or high normal blood pressure to either the control diet or the DASH diet for 90 days. They fed them all their meals—assuring that subjects were eating their assigned diets, no more, no less—and changed the sodium level every 30 days.

    The results were impressive. The DASH diet alone reduced blood pressure as dramatically as before. And the reductions in blood pressure by decreasing salt, whether on the DASH diet or the control diet, while not quite as impressive, were still substantial. When hypertensives, for instance, went from the high-salt to the low-salt control diet, their systolic blood pressure fell 8.3 millimeters of mercury (mmHg) and diastolic fell 4.4 mmHg (8.3/4.4 mmHg). This drop is comparable to that achieved by blood pressure-reducing drugs. In those with high normal blood pressure, going from high sodium to low sodium on the control diet reduced blood pressure by 5.6/2.8 mmHg, a drop almost five times greater than recent meta-analyses might have predicted. The better part of these blood pressure reductions came when the subjects went from the government-recommended levels of 6 grams of salt a day to the lowest level of 4 grams. “The finding suggests that an intake below that now recommended could help many Americans prevent the blood pressure rises that now occur with advancing age,” said Lenfant in a press release.

    Protracted controversies, however, can be remarkably resistant to new data, even good data. After hearing the DASH-Sodium results at the ASH meeting, those who were skeptical of the wisdom of recommending that an entire nation eat less salt remained resolutely skeptical. David McCarron, for instance, of the Oregon Health Sciences University in Portland, pointed out that for those with normal blood pressure eating the healthy DASH diet, reducing salt from 8 grams to 4 grams a day made little difference in blood pressure (1.7/1.1 mmHg). “If you are eating the healthy DASH diet and you have normal blood pressure, sodium restriction has almost no effect. … So why should salt reduction be the major message, when it says if you go on a healthy diet, salt reduction is a moot point?”

    A stickier issue speaks to the nature of public health recommendations. The better part of the salt controversy centered not on the size of the blood pressure reductions that could be achieved by eating less salt, but on whether it would improve our health to do so. Over the years, researchers have been unable to demonstrate that reducing salt improves health. The authors of a 1998 comprehensive meta-analysis on salt reduction published in The Journal of the American Medical Association concluded that “The optimum solution to the controversy are long-term trials with hard end points, such as stroke, acute myocardial infarction, and survival.”

    This conclusion was echoed after the ASH meeting by Micky Alderman, a hypertension specialist at Albert Einstein College of Medicine in New York City and a past president of ASH. “They're suggesting as a remedy for 250 million people that they cut sodium intake in a half,” says Alderman, “and to do so solely on the basis of showing you can change blood pressure for a 30-day period, without even assessing any other potentially adverse consequences. It seems to me it's a leap of faith.”

    Although DASH-Sodium investigators were much more sanguine about the health benefits of salt reduction, at least one agreed that Alderman's point was reasonable. Biostatistician William Vollmer of the Kaiser Permanente Center for Health Research in Portland told Science that he believes DASH-Sodium provides good evidence for recommending lower levels of salt intake. Nonetheless, he added, “it would be nice to see a good, controlled study that shows the long-term effects of a low-sodium diet. The issue has been raised. We can sit here and say it hasn't, or we can do a study that settles it once and for all.”


    The Boom in Biosafety Labs

    1. Martin Enserink

    Concerned about new or reemerging pathogens, many virologists are dreaming of their own high-containment labs. But worried neighbors may derail their plans

    PLUM ISLAND, NEW YORK, AND GALVESTON, TEXAS—When safety officer Thomas Sawicki starts explaining the rules, you realize that this is no ordinary lab visit. Before entering, Sawicki instructed a small band of reporters last month, you'll have to take off all your clothes and jewelry. Then, you'll walk naked through a narrow hallway and open a door, after which you'll find a locker room where you can grab some underwear and don a disposable lab suit. Now you're ready to enter the innards of the facility—where the viruses are.

    Later, before you exit, drop your entire outfit in a laundry bin, blow your nose, clear your throat and spit, scrub under your nails, and dip your glasses in disinfectant. Then step under the shower and wash your entire body and your hair for at least three minutes. Oh, and forget about bringing any notes you took. They'll be faxed to the outside world; the originals will be incinerated.

    This press tour is part of a new public relations campaign at Plum Island Animal Disease Center, designed to assuage the fears of the surrounding community. The laboratory, sitting in isolation on a tiny island in the Atlantic off Long Island, is the only place in the United States where scientists are allowed to study several deadly pathogens that infect animals—a sort of Alcatraz of microbiology. Despite the evident safeguards, local critics have long worried that the bugs might somehow escape and make their way across 2 kilometers of ocean, setting off an Ebola-like epidemic in an upscale New York suburb. For decades, the lab was shrouded in secrecy; now, as part of the new glasnost, lab officials routinely shepherd through hordes of reporters and residents of nearby towns so they can see for themselves what scientists are doing here—and how seriously they take safety. The PR is essential, they concede, because the U.S. Department of Agriculture (USDA), which operates Plum Island, wants to upgrade the facility so that it can handle the most dangerous pathogens in the world—those that infect humans as well as animals—and it can't do so without the public's blessing.

    USDA is not alone in its ambitions. For decades, the United States had only two major high-containment laboratories, known as biosafety level 4 (BSL-4) facilities, both of them run by the government—one at the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) in Fort Detrick, Maryland, the other at the Centers for Disease Control and Prevention (CDC) in Atlanta. (The National Institutes of Health also has one but currently doesn't use it for viruses.) But over the last 2 years, two small BSL-4 facilities have been built, and plans are afoot to build three more—with universities, not the government, paying the hefty bill.

    Splendid isolation.

    USDA wants to upgrade its lab on tiny Plum Island, off the New York coast, to study the most deadly viruses around.


    Researchers claim they urgently need these facilities if they are to prepare for future outbreaks of deadly scourges and build defenses against bioterrorism. “There are so many threats out there,” says pathologist David Walker of the University of Texas Medical Branch (UTMB) in Galveston, which is angling for its own level 4 facility. “And CDC and USAMRIID really don't have time for every one of these viruses.”

    For the universities, these labs provide other perks as well. Having a BSL-4 lab is a way to put themselves on the map in the hot field of emerging infectious diseases. They can also be a powerful bargaining chip in recruiting topflight faculty and a way to get a head start in the grant game.

    A few skeptics wonder whether the country actually needs nine (see map) of these high-tech, multimillion-dollar labs—in addition to over a dozen that exist overseas. But one thing is clear: For any of the plans to succeed, courting the public will be at least as important as fund raising. “If you haven't thought about the public concern before getting the funding, you have the cart before the horse,” says UTMB's vice president, Adrian Perachio.

    The hot zone

    Biosafety level 4 is reserved for highly infectious viruses (no bacterium or parasite is considered this dangerous) that are often lethal and for which there is no cure, let alone a vaccine. Federal guidelines currently list about 16 viruses that must always be handled in BSL-4 facilities; some notorious, like Ebola; others, like Omsk hemorrhagic fever and Hypr, rather obscure. In addition, hundreds of viruses have been assigned to slightly more relaxed level 3 conditions—like those now at Plum Island—unless researchers intend to grow the viruses in large quantities or infect certain animal hosts, in which case level 4 is required. The “strictly BSL-4” list is expected to grow, however; some 120 viruses discovered over the past century have been temporarily assigned to level 3 because nobody knows whether they infect humans and pose a threat at all; some could easily be promoted should a deadly outbreak occur.

    Everything in BSL-4 labs is aimed at containment: Researchers work in space suits that shield them from the viruses, air pressure is kept low so that nothing can accidentally waft out, and nothing leaves the lab without being thoroughly sterilized.

    Until recently, most researchers have been content to work in level 3 labs—or, for riskier work, to beg and borrow time at CDC or USAMRIID. But now, with a few dozen new viruses and revved-up interest in the field, space is tight, and everyone wants one at home. “We're all dressed up but we have nowhere to go,” says hantavirus expert Brian Hjelle of the University of New Mexico, Albuquerque.

    The Southwest Foundation for Biomedical Research (SFBR) in San Antonio, Texas, was the first nongovernment lab to “go hot” to meet this growing demand. Since the late 1970s the foundation has operated a small, simple type of BSL-4 lab, known as a “glove box,” enabling its researchers to study herpes-B virus, which occurs naturally in monkeys but occasionally infects scientists and zoo personnel working with the animals—with potentially fatal consequences. Researchers don't need to wear space suits; instead they keep their samples inside a sealed cabinet and manipulate them through arm-sized rubber gloves. Last year, however, SFBR joined the big league, replacing its glove-box facility with a full-fledged “suit lab”—similar to, if smaller than, those at CDC and USAMRIID. Now its 30 virologists are able to work with lethal South American arenaviruses, such as Guanarito and Sabia, which can cause hemorrhagic fevers.

    Other universities in Texas are eager to join the club. Indeed, three more institutions are clamoring to set up level 4 facilities in the Lone Star state. Texas “just happens to be where a lot of the brainpower is for doing BSL-4 work,” explains virologist Julia Hilliard of Georgia State University in Atlanta. It also just happens to be a place where funds are relatively flush. UTMB, for instance, is building up a world-class research center for infectious diseases—and that makes a BSL-4 facility de rigueur (Science, 28 April, p. 598). The university hopes to break ground early next year for a 180-m2 level 4 lab that will enable its scientists to work with arenaviruses and hantaviruses without having to fly to Atlanta and work at CDC. A private foundation has pledged to cover the estimated $7.5 million bill.

    In Lubbock, 1000 kilometers to the northwest, Texas Tech University is thinking of adding a BSL-4 lab to its new Institute of Environmental and Human Health. Last year Congress approved a Department of Defense plan to appropriate $5 million to Texas Tech to develop countermeasures to chemical weapons and bioterrorism. If the university becomes a major player in this field, it will have to work with several agents on the BSL-4-mandatory list. Texas Tech vice president David Schmidly says that money should be no problem. The university can provide at least half of the expected $8 million tab, and Schmidly expects both the city and state to kick in some additional funds.

    Finally, Texas A&M University in College Station wants a BSL-4 lab to study animal viruses—much like the proposed upgrade at Plum Island. The plan is ambitious, as it requires facilities to house animals inside the airlock lab, and for now it is on hold because A&M hasn't been able to find funds.


    Although most researchers agree that more BSL-4 labs are needed, some question whether the country actually needs nine, with four in one state alone. Frederick Murphy, dean of veterinary medicine at the University of California, Davis, says it may be hard for universities to attract and properly train skilled staff. If BSL-4 personnel—from scientists to cleaners—don't get training at the same level as at the federal labs, cautions Murphy, then they may be at risk.


    Cost—both construction and maintenance—is another issue. UTMB virologist Charles Fulhorst thinks it would make more sense for Texas Tech researchers to do their research at Galveston's new lab. “Texas is a big state,” counters Texas Tech's Schmidly; “I don't think Galveston can meet everybody's needs.” The Texas Higher Education Coordinating Board may ultimately decide, as it has to approve university construction plans whose cost exceeds $1 million. Last month UTMB received the board's blessing; Texas Tech's plans could well be vetoed if the board finds them duplicative.

    Building boom.

    The U.S. now has five BSL-4 labs, and four more are planned.

    Almost no one questions the need for a BSL-4 animal lab in the United States. Plum Island's level 3 facility sufficed until recently, as most livestock diseases weren't thought to pose a serious health threat to humans. That changed with the discovery of several new viruses, including the Nipah virus, which wiped out the Malaysian pig industry last year and killed 105 people. (See p. 1432 of this issue.) If the virus should ever reach U.S. soil, either by accident or as a result of bioterrorism, the country would be powerless to study it, as none of the existing BSL-4 labs can house pigs. “That's really a national embarrassment,” says Murphy.

    Dressed for danger.

    “Space suits” protect workers in BSL-4 labs from exposure to viruses.


    But does the country really need two—one at Plum Island, and one at Texas A&M? Some scientists don't think so. And if only one lab gets built, perhaps it shouldn't be at Plum, says well-known virus hunter C. J. Peters of CDC. Construction is expensive, there's no other academic center or lab nearby to collaborate with, and it's hard to attract young scientists because of staggering real estate prices, he says. Atlanta, or the USDA's veterinary lab in Ames, Iowa, would be better places, says Peters: “You can't do science in a vacuum anymore. You need synergy.”

    But USDA is in a bind: The law currently demands that research on foot-and-mouth disease and rinderpest, which are level 3 agents because they're harmless to humans, take place on an island that's not connected to the mainland by bridges or tunnels. So unless Congress gets involved, at least that part of Plum Island's work has to remain where it is.

    Winning hearts and minds

    But even if money and people were no problem, public resistance may imperil some of the plans. In 1996, for instance, Ontario's provincial government completed a brand-new BSL-4 lab in Toronto that had been in the works for 10 years—almost unnoticed, initially. But as the opening date came close, opponents whipped up a media frenzy, which gained even more force when the hit movie Outbreak was released. Ultimately, the government backed off, and the lab never opened as a BSL-4 facility. (Instead, it works with less dangerous level 2 organisms.) The same happened to a lab in Tokyo in 1981. And neighbors' worries—in this case, about radioactive pollution rather than bugs—also led to the permanent shutdown of the High Flux Beam Reactor at Brookhaven National Laboratory on Long Island. Some say the closure could have been prevented if scientists hadn't shrugged off the public's concerns for so long (Science, 25 February, p. 1382).

    Plum Island is barely 50 kilometers east of Brookhaven—which is one reason why USDA is going out of its way to persuade its neighbors. Even so, fierce opposition may have derailed the planned upgrade. Debbie O'Kane, for one, is not convinced there's nothing to worry about. O'Kane, who directs the North Fork Environmental Council on Long Island, recently did Plum's lab tour, nail-scrubbing and all, but rather than reassure her, the visit made her think twice. Nobody was there to watch whether she actually went through all the prescribed steps, she says—she could have walked right out without doing them. Nor was O'Kane impressed by several town meetings. “USDA came across as incredibly patronizing,” she says.

    O'Kane and other activists subsequently barraged their congressman, Michael Forbes (D-NY), with hundreds of petitions to kill the project. He, in turn, lobbied the White House, which had originally supported the project; as a result, USDA's request for $75 million in funding for the new facility in fiscal year 2001 was denied. USDA officials hope that funding will come next year, but Forbes has vowed to fight the upgrade.

    From the start, Galveston officials were determined to avoid such setbacks. “When I proposed this plan to the [University of Texas] Board of Regents,” says UTMB's Perachio, “their first question was: How does your community feel about this?” There are understandable reasons for concern, because Galveston is prone to hurricanes. A giant storm whipped up a surge in 1900 that killed thousands and partly destroyed the city. Many worry that a repeat might damage the lab, spewing pathogens all over the place. “Why would you want to build it here, of all places?” asks Jackie Cole, a Galveston veterinarian.

    The university's answer, conveyed in countless meetings, on a Web site, and through other channels, is that Galveston now has a 5-meter-high sea wall and that hurricanes come with ample warning nowadays. If landfall is predicted with some degree of certainty, an emergency plan would be activated to shut down the facility in 90 minutes and sterilize it; all viruses would be put in secure freezers. Besides, the building will be solid as a rock, and even if the freezers were torn apart, the viruses would be in almost unbreakable containers and vials.

    Perachio didn't enjoy the extensive public scrutiny of the plans. “When I see a TV camera aimed at me, it's like looking down the barrel of a howitzer,” he says. But the exercise seems to have paid off: Although some people still oppose the plan, resistance here isn't as organized or active as on Long Island. “UTMB's approach has been honest and open,” says Harris Kempner, a local businessman from an influential Galveston family who has been won over. “There's no surprises here.” Perachio says that the university will keep up the effort as construction proceeds: “It ain't over till it's over.” Indeed, he may follow the example of colleagues in Winnipeg, who almost lost their new BSL-4 facility and then set up a permanent community liaison committee to deal with issues as they arise (see sidebar).

    In the long run, the best way for labs to strengthen confidence is simply to show that they can work safely year after year, says Tony Della-Porta, head of technical services of a BSL-4 lab in Geelong, Australia, that opened in 1986. “You have to develop a track record of safety, and that takes time,” says Della-Porta. “But if you succeed, you can actually get the community to take pride in what you achieve.”


    Learning the Hard Way

    1. Martin Enserink

    Until its festive dedication in June, the Canadian Science Centre for Human and Animal Health had not run into major opposition. Researchers—some of them trained by professional consultants—had spoken at dozens of meetings, thousands of people had toured the lab, and all seemed fine. But barely a week later, the lab accidentally released a batch of wastewater into the city's sewage system without properly sterilizing it. At first, lab officials didn't report the incident, says director Norm Willis, because they were positive the water didn't contain any virus and thus posed no risk.

    But when news about the leak reached the press, opponents claimed there had been a cover-up—just as they had predicted would happen in case of mishap—and public opinion soured. As a result, the two agencies responsible for the lab decided to delay the permits needed to import biosafety level 4 (BSL-4) viruses. For a while, it seemed like the lab might go the path of its failed predecessor in Toronto, which has never operated as a BSL-4 facility because of opposition from its neighbors (see main text).

    To win back the squandered trust, the center installed a “public liaison committee,” a diverse group of 16 Winnipeg citizens, to serve as a go-between. For several months, the group—which comprised conservationists and health workers as well as the editor of the local Filipino journal—studied the 10-cm-thick safety handbooks, inspected the lab, and asked countless questions. “We've made it quite clear that we intend to blow the whistle if we have concerns that aren't being looked at,” says Bob Douglas, a former city council member who co-chairs the committee. In the end, the group became convinced that all was in order, and in March, it endorsed the level 4 operations. One month later, Health Canada finally issued the coveted permit. Ultimately, the group became the lab's best advocate; members even wrote to Ottawa to urge speedier delivery of the permits.


    Aluminum Is Put on Trial as a Vaccine Booster

    1. David Malakoff

    Complaints about vaccine safety and debate over a mysterious muscle ailment have prompted researchers to take a fresh look at the use of aluminum

    San Juan, Puerto Rico—A few years ago, Romain Gherardi, a pathologist at Henri Mondor University in Créteil, France, noticed something unusual in tissues taken from a few hospital patients who complained of sore muscles and fatigue. Muscle biopsies from the patients' upper arms contained unusual aggregations of macrophages, the body's multipurpose cleaning cells. Worried that the finding heralded the arrival of a new disease, Gherardi and his colleagues published an article in the August 1998 issue of The Lancet describing the condition as Macrophagic Myofasciitis (MMF), “a new inflammatory muscle disorder of unknown origin.”

    Gherardi has since developed a controversial theory about what causes the rare disorder: It is aluminum, he believes, the same ingredient that has long been added to many vaccines to give them a more powerful immunological punch. The idea has propelled Gherardi into the middle of a volatile public debate over the safety of vaccines. The disputes have run from worries over the use of mercury as a preservative to allegations that vaccines contribute to allergies and autism (Science, 31 July 1998, p. 630 and 14 April, p. 241). And this month, as some 70 scientists gathered here* for 2 days of often vigorous discussion of Gherardi's findings, a larger question hung over the gathering: Will aluminum be the next battleground in the vaccine wars? “The vaccine community is on edge about any perceived safety issue, because it has been a very difficult past few years,” says physician Michael Gerber of the National Institute of Allergy and Infectious Diseases in Bethesda, Maryland.

    Unlikely suspect. Aluminum would seem to be an unlikely source of concern. For nearly 70 years, vaccinemakers have used three aluminum salts—alum, aluminum hydroxide, and aluminum phosphate—to add oomph to many vaccines, including widely used formulations for diphtheria, hepatitis B, and tetanus (see table). These adjuvants, or helper chemicals, cause the body's immune system to react earlier, more potently, and more persistently to the antigen contained in the vaccine. Despite aluminum's potency, there are relatively few reports of adverse reactions, and those are usually limited to temporary muscle soreness or swelling at the injection site. “Aluminum vaccines have an excellent track record,” says vaccine expert C. John Clements of the World Health Organization in Geneva, which each year supplies vaccines for millions of children.


    Given that track record, Gherardi was surprised when a colleague informed him in October 1998 that the macrophages in his tissue samples were filled with aluminum. “We did not believe it,” Gherardi recalls. At the meeting, however, researchers agreed that it is aluminum—and that it comes from vaccines. They also agreed that measurable quantities of aluminum can remain at the injection site for surprisingly long periods—up to 8 years in one of the French patients. But most vaccine researchers remain highly skeptical that the metal is the cause of the nearly 150 cases of MMF found to date.

    Several conferees questioned whether MMF is actually a distinct disease. “I didn't come away convinced that this is a new disorder,” says Neal Halsey, head of the Institute for Vaccine Safety at The Johns Hopkins University in Baltimore, Maryland. It could be “an epidemic of recognition,” notes professor of medicine Theodore Eickhoff of the University of Colorado, Denver, in which doctors ascribe previously unnamed symptoms to a newly described disorder. Still others wonder whether the association between the MMF lesions and patients' muscle symptoms is a byproduct of tissue sampling practices in France, where doctors routinely take tissue from the upper arm, often the site of vaccinations, instead of the leg or other muscles.

    Finally, many scientists say that the French results are flawed by a lack of control groups. In particular, the existing studies do not show how many vaccinated people are carrying the MMF aluminum deposits but exhibit no symptoms. As a result, “it would be premature and dangerous to ascribe this condition to vaccines,” says Vito Caserta of the U.S. government's Division of Vaccine Injury Compensation in Rockville, Maryland.

    Gherardi agrees that the MMF-aluminum association is far from proven. But he is confident that the symptoms he has described can be distinguished from similar ailments. What's more, he asserts, “there are MMF patients in the U.S. [and other nations who] are not detected due to differing biopsy practices.” The spate of recent cases, he adds, might be explained by France's decision several years ago to vaccinate nearly 40 million adults against hepatitis B. Gherardi insists he is just as eager as his critics to do a controlled study to look for MMF lesions in people without symptoms, calling it “a critical piece of information.” Although French regulations make it difficult for researchers to take muscle biopsies from a control group of healthy patients, he says, researchers are moving ahead with a range of other human and animal studies to ferret out more detail.

    Answers in cadavers. To prepare to answer any potential public concerns, the U.S. government is initiating its own aluminum research, says Martin Myers, acting director of the National Vaccine Planning Office, which organized the meeting and helps coordinate vaccine-related work across the federal government. One study might help settle the MMF debate by examining tissue samples taken from cadavers. Other high-priority projects, he says, may include setting guidelines for the amount of aluminum that should be allowed in injectable vaccines (the current standard applies to oral vaccines), collecting more data on how young children process aluminum, and examining whether changing the way some aluminum-adjuvanted vaccines are injected might further reduce the risk of adverse reactions. Also possible are clinical studies to evaluate the differences between the three kinds of aluminum adjuvants and the role that adjuvants alone play in causing swelling and muscle pain.

    Ironically, some of the new studies may get a boost from a controversy surrounding the U.S. military's anthrax vaccine, which includes aluminum. Last year, after some soldiers and pilots refused to take the vaccine as ordered by the Defense Department, Congress called for a large-scale clinical trial to answer safety questions. That trial, due to begin later this year, may now be designed to answer questions about aluminum adjuvants, too. Sparked by the French research, vaccine manufacturers are also jumping into the act. SmithKline Beecham researchers, for instance, are in the midst of large-scale animal studies of aluminum's biological impacts, as well as surveys designed to detect adverse reactions among patients.

    Vaccine researchers hope the new data will help avoid what several call “Thimerosal II.” Thimerosal, a mercury compound used to prevent vaccines from becoming contaminated by bacteria or fungi, is now scheduled for elimination from several common vaccines after federal officials and some public interest groups raised questions about its safety. During debate over the last few years, however, many vaccine researchers felt hamstrung by a lack of data on everything from mercury's interaction with other vaccine compounds to acceptable doses for infants. Indeed, when the aluminum issue arose, “I had a sense of déjà vu,” says Alison Mawle, a researcher with the Centers for Disease Control and Prevention in Atlanta, Georgia. “There are huge gaps in what we know about the toxicity of aluminum.”

    For the moment, researchers and public health officials, including Gherardi, see no reason to remove aluminum from vaccines. But they may examine the feasibility of reducing the amount used in some vaccines—just in case. Scientists say that it's not clear, for instance, that the adjuvant is necessary to ensure the effectiveness of booster shots, which are administered after the immune system has already reacted to the initial dose. However, manufacturers say any effort to replace aluminum with another adjuvant would be costly and complicated. Regulatory and manufacturing requirements, for instance, would make it “a nightmare” to create different formulas for an initial vaccine and its booster, says Nathalie Garcon-Johnson of SmithKline Beecham Biologicals in Rixensart, Belgium.

    Most researchers left the San Juan meeting feeling reassured about aluminum's safety. But some government officials remain worried that the subject could blow up into a legal and political battle. Web sites published by some anti-immunization groups, for instance, already finger aluminum as a cause for concern. In light of such sentiment, Caserta urged a panel drafting the workshop's summary statement on MMF to tread cautiously. “We have to be very careful with our ideas,” he says. “The courts don't know how to deal with [uncertainty about] causality.”

    • *Workshop on Aluminum in Vaccines, 11 to 12 May, San Juan.


    A Blow to Austria's Scientific Revival

    1. Robert Koenig

    Since the Nazi occupation tore apart its scientific community, Austria has revived many fields. Now budget cuts threaten the momentum

    Vienna—Less than a century ago, this cultural bastion on the Danube was a vibrant scientific capital, nourishing great minds such as theoretical physicist Erwin Schrödinger and psychoanalyst Sigmund Freud. But a mass exodus in the late 1930s crippled Austria's intellectual dynamo, and the impoverishment deepened after much of the city was destroyed during World War II. Slowed by stingy postwar government support, the seeds of good science did not take root again until the late 1960s, giving rise to centers such as Vienna's Atom Institute and, later, Anton Zeilinger's quantum teleportation labs in Innsbruck and Vienna (see sidebar on p. 1327). And although Austria still lags many other European nations in broad science indicators, a government pledge last year to pour more money into research fueled hopes of a true scientific resurgence. “In the last few years, you got the feeling that science was improving rapidly here,” says Helmut Ruis, who heads the University of Vienna's Institute of Biochemistry and Molecular Cell Biology. But Austria's budding scientific renaissance is suddenly in jeopardy.

    Last week, the Austrian Parliament approved an austere science budget that slashes support for the Austrian Science Fund (FWF)—the country's basic research granting agency—by 26% and savages spending on laboratory upgrades by nearly two-thirds. “These budget cuts—coming at a time when Austrian science has been on the upswing—threaten to endanger basic research here,” says solid state physicist Peter Skalicky, rector of the Technical University of Vienna. And the embattled Austrian government's recent contretemps with many of its European Union partners could doom a major project on the drawing board, the AUSTRON neutron source that physicists want to build this decade. Bemoans FWF head Arnold Schmidt: “We just can't afford to lose the research momentum of the last few years.”

    That momentum has taken a half-century to build since the dark years that began when Adolf Hitler annexed his native country in 1938. Many of Austria's top scientists fled: Freud went to London, Schrödinger to Ireland, Victor Hess, discoverer of cosmic rays, escaped to the United States, and molecular biologist Max Perutz ended up in Cambridge, where he won a Nobel Prize. Of those luminaries, only Schrödinger came home; he died a few years later. “That emigration was an incredible loss to Austrian science,” says Zeilinger.

    But decades of Austrian government neglect were also to blame for the slow pace of the rejuvenation, which didn't start in earnest until 1968, when the newborn FWF began training and funding a new generation of scientists. “When I was a Ph.D. student in the late 1970s, we had only two sets of Gilson pipettes for the whole department and two or three gel apparatuses that we had to build ourselves,” says Renée Schroeder, a University of Vienna biochemist who worked as a postdoc in the United States, France, and Germany during the 1980s. After returning to Vienna in 1989, she says, she saw “a huge difference in the quality of science.” Adds Zeilinger: “It's only in the last 10 years or so that physics here has begun to recover.”

    Playing catch-up. Austria trails in major science indicators.


    Not all problems have been overcome. Many scientists complain that the universities stifle creativity and that the Austrian Academy of Sciences is too timid in moving into new research areas. Moreover, says Erwin F. Wagner, senior scientist at the Institute of Molecular Pathology in Vienna, the turnover in midlevel research posts is too slow: “There just aren't enough positions for talented young scientists.”

    When Austria's coalition government came to power in February, many researchers thought the opprobrium resulting from the right-wing Freedom Party's membership in the center-right alliance posed the greatest threat to their work. Members of the European Union began freezing Austrian officials out of policy discussions, and Austrian researchers feared that colleagues in Europe and elsewhere might be forced to back out of collaborative projects. But it turned out that the real threat came from within.

    So far, European sanctions have had little impact on research. Instead, the new Austrian government has reneged on its pledge to honor its predecessor's commitment to sharply increase research spending from the current 1.65% of gross domestic product to 2.5% over 5 years. “Our good intentions and commitments remain, but the money simply isn't there right now,” explains Daniel Kapp, chief spokesperson for Education and Science Minister Elisabeth Gehrer. The minister defends the cuts, pointing out that the government has avoided decreasing university salaries or operating budgets, instead wielding the knife in discretionary accounts such as lab upgrades. Gehrer says she will try to pump up the research budget next year.

    But Austrian labs may need upgrades sooner rather than later. According to Skalicky, Technical University's main competitors in German-speaking Europe—Zurich's ETH Polytechnic and Munich's Technical University—have budgets that are seven times the size of his. Now, he says, “we won't be able to replace badly needed equipment like electron microscopes or infrared spectrometers.”

    In an open letter posted on its Web site last month, Austria's conference of rectors—the heads of the nation's universities—pleads with the new government to live up to its previous commitments and restore the science budget or, at least, commit to “a drastic increase” in university and research spending in 2001.

    The FWF's budget hit has forced it to consider, for the first time, scrapping its fall grant competition. Hoping to avert this, Schmidt has appealed to the Transport and Innovation minister, Michael Schmid, whose ministry oversees FWF, to find more money for the research fund. Another potential savior is the Austrian National Bank, which each year contributes a portion of its currency exchange profits to academic R&D. It may up its contribution to the FWF this year.

    One major project that could be in trouble is the AUSTRON pulsed-spallation source, a $400 million facility for neutron scattering studies that backers hope to build in eastern Austria by 2007. Austria has offered to put up one-third of the construction funds if other European governments kick in the rest. But Skalicky says that commitments have fallen short so far, and AUSTRON “could only proceed if Austria is prepared to pay a larger share.” The innovation ministry says it is working on a plan to save the project.

    Some of Austria's research labs should be able to weather the storm. The cuts—slated to go into effect on 1 July—have not eroded the $42 million annual budget of Austria's Academy of Sciences, which operates 20 institutes. The academy's secretary-general, engineering professor Herbert Mang, told Science that the academy still plans to expand with new institutes and partnerships, including its share in the Institute of Molecular and Cellular Bioinformatics at Vienna's Biocenter (see sidebar on p. 1325).

    Despite the setbacks, many scientists are optimistic that key research areas, such as Zeilinger's work on quantum teleportation, will continue to prosper. However, other seedlings that have taken root in Austria's once-blighted research landscape could be more vulnerable in a prolonged funding drought. Says Schmidt, “There is still tremendous potential for growth in Austrian science,” but that growth needs nurturing.


    From Wasteland to Biomedical Wonderland

    1. Robert Koenig

    Vienna—When Max Birnstiel first set eyes on the decrepit site of his future institute in 1985, he was crestfallen. “It was nightmarish,” he says: “An abandoned radio factory in the middle of an outmoded industrial district. I couldn't imagine turning it into a modern research campus.” He quit a top post in Zurich for this? Fifteen years later, Birnstiel's “glass palace,” the Institute of Molecular Pathology (IMP)—now surrounded by a vibrant complex of research establishments—has put Vienna on the biomedical research map.

    IMP, now headed by Cambridge transplant Kim Nasmyth, exemplifies the promise of Austria's scientific rehabilitation. With groundbreaking work in such areas as chromatid separation in yeast and cell adhesion in cancer, IMP's 13 research groups—which boast 120 scientific staff members from 25 nations—have “made a great contribution to Austrian science,” says Walter Schaffner, director of Zurich University's Institute for Molecular Biology.

    IMP, bankrolled by German drugmaker Boehringer Ingelheim, also stands out as one of Austrian industry's few significant investments in basic research. Although Austria's public research spending (0.65% of gross domestic product in 1998) is only slightly less than average among European Union member states, private-sector contributions (0.83% of GDP) trail far behind those in other countries, including Germany (1.57%), France (1.37%), and the United Kingdom (1.22%).

    IMP's true impact reaches beyond its research domain, thanks to founding father Birnstiel's early demands that the Austrian and Viennese governments build a biomedical campus around the institute. Four years after IMP opened its doors in 1988, the University of Vienna moved five life science and medical research institutes into a new Biocenter building next door. Several biotech start-ups, including InterCell, which is developing synthetic vaccines, and VBC Genomics, which does custom sequencing, have also sprouted up. And last fall, the Austrian Academy of Sciences announced an unprecedented move to cast its lot with industry on a project; it will join forces with IMP and the city of Vienna to create a new Institute of Molecular and Cellular Bioinformatics. When it opens in 2 years, the institute is expected to employ about 80 scientists, whose main task will be to glean insights into human diseases from the draft of the human genome.

    More expansion is in the works. Erwin Heberle-Bors of the University of Vienna's Genetics and Microbiology Institute says the university plans to open an Institute of Structural Molecular Biology this fall at a new Biocenter building. Although he's worried about the new government's budget cuts, Heberle-Bors thinks the research ministry will stick to its promise to expand the Biocenter. “They recognize that the Biocenter is a bright spot in Austria's research landscape,” he says.


    Teleportation Guru Stakes Out New Ground

    1. Robert Koenig

    Vienna—It's a long way from the University of Vienna's Experimental Physics Institute to the Hollywood studios that dreamed up Star Trek's “Beam me up, Scotty” brand of teleportation. But behind this venerable institute's 19th century façade is a burnished-steel, laser-packed realm in which a form of teleportation has indeed become reality. Nobody here is planning to transport Star Trek red shirts onto hostile planets: Macroscopic teleportation remains firmly in the realm of science fiction. Rather, the teleporter of Vienna is Anton Zeilinger, a pioneer in exploiting the peculiarities of quantum mechanics—the best explanation physicists have for the weird actions of individual atoms—to teleport the quantum state of photons over short distances.

    Zeilinger has earned an international reputation for devising experiments to test quantum theory. “He's one of the most original researchers in the foundations of quantum mechanics in the world today,” says quantum physicist Seth Lloyd of the Massachusetts Institute of Technology (MIT). “His tenacity has been rewarded by great results.” Before entering the teleportation rabbit hole, Zeilinger cut his teeth in the quantum world as a postdoc at the Technical University of Vienna, where he worked on neutron interferometers, then did a stint at MIT. He returned to Technical University in the 1980s, but it was not until he moved to the University of Innsbruck as a full professor in the early 1990s that he began devising experiments in quantum optics, a field in which researchers exploit the quantum nature of photons.

    What helped guide him in this new direction, Zeilinger says, was a pioneering 1993 report in which IBM physicist Charles H. Bennett and his colleagues first proposed that “entanglement,” a beloved concept of Austrian-born Erwin Schrödinger, could be exploited to teleport information. A pair of entangled particles—be they photons, ions, or atoms—always possesses correlated quantum states: If one photon is horizontally polarized, for example, the other must be vertical. But according to quantum mechanics, the state of either particle is only revealed when it collapses—that is, when it is measured. So Bennett and his colleagues argued that if one photon collapses to a vertically polarized state when it is measured, its entangled partner will immediately “know” that it is horizontally polarized, no matter how far away it is. The “message” transmitted from one particle to the other must therefore travel faster than the speed of light—something that Albert Einstein once called “spooky action at a distance.” The revolutionary concept that Bennett's group proposed was that “one can actually transfer a quantum state of a specific particle applying entanglement,” Zeilinger says.

    “When that paper came out, I thought the experiment to test it was completely impossible,” Zeilinger recalls. But in his quest to extend entanglement to three photons, Zeilinger devised a series of elegant quantum optics experiments in which his group was able to produce and manipulate entanglement at will, laying the groundwork for the teleportation experiments. Finally, last February, Zeilinger and colleagues in Oxford and Munich reported that they had succeeded in observing three-photon entanglement. “The interesting feature here is that even the perfect correlations between the three photons are in striking conflict with any classical picture one could make,” he says.

    Although he remains fascinated with quantum optics, Zeilinger says he does not want to spend the rest of his career working on it. “Quantum optics is just a means for me to study the foundations of quantum mechanics,” he says. “What really fascinates me is the possibility of bridging the gap to biological systems”—for example, by using the techniques of quantum optics to learn more about simple microscopic organisms such as viruses. “It's difficult to see how quantum physics could play an active role in biological systems,” he says, “given the fact that a quantum state usually requires the system to be well isolated from the environment, and living systems usually die when you isolate them—they need breathing, they live at a finite temperature. Yet I am not convinced that living systems are just classical machines.”

    It was not an easy decision for Zeilinger to leave Innsbruck, in the heart of the Alps, for Vienna in 1999. Now it takes longer for him to get to the Tyrolean slopes for skiing, his passion along with jazz and classical music. He also collects antique maps, including a favorite one of central Europe around 1830 depicting only four nations, including the former Austrian empire. Zeilinger's colleagues point out that he's redrawing the map in his own field. “While Austria produced many world-class physicists over the last century or so, very few of them did their great research in Austria,” says Austrian-born Kurt Gottfried, a professor emeritus at Cornell University and an expert on quantum physics. “But Zeilinger has a clear commitment to putting Austrian physics on the map.”


    Is That Your Final Equation?

    1. Charles Seife

    The Clay Institute announces a list of mathematical puzzles for the 21st century—at $1 million a pop

    For decades, the fugitives have eluded capture; now, they have prices on their heads. In Paris this week, mathematicians unveiled a most-wanted list of seven of the most intractable math problems in the world. With a purse of $1 million for each, it is by far the biggest math prize ever announced and a call to action for mathematicians everywhere.

    The announcement hearkens back to a similar exhortation, made in Paris a century ago. In 1900, at the second International Congress of Mathematicians, the German mathematician David Hilbert challenged his colleagues with 23 unsolved problems. “Hilbert's lecture, more than any other event, shaped 20th century mathematics,” says Arthur Jaffe, former president of the American Mathematical Society and president of the Clay Mathematics Institute (CMI) of Cambridge, Massachusetts, which is sponsoring the challenge.* Unlike Hilbert, however, CMI is sweetening the deal with cash. Founded 2 years ago by mutual-fund magnate Landon Clay, the institute has a substantial endowment (Jaffe wouldn't name a figure) devoted to advancing mathematical knowledge. “If next year every one of these problems were solved, it wouldn't be a problem,” Jaffe says. “It would be a surprise.”

    P = NP? First on the list is a problem that could make computer encryption a thing of the past. The so-called “P versus NP” problem arose when computer scientists tried to figure out how efficiently algorithms crunch numbers. In general, the more data you cram into a computer program, the longer the program takes to process it. Consider an algorithm for alphabetizing a list of files. If you double the number of files, the program might take four times as long, on average, to put them in order. In the language of computer science, it is an n² algorithm. For most purposes, programmers are happy to come up with such “polynomial-time,” or P, algorithms, as they aren't outrageously time-consuming to solve.
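    The quadratic scaling described above can be seen in a toy alphabetizing routine (a minimal sketch of our own; the function name and sample data are illustrative, not from the article):

```python
def selection_sort(files):
    """Alphabetize a list with a simple quadratic-time (n^2) algorithm:
    for each position, scan the rest of the list for the smallest
    remaining item. Doubling the input roughly quadruples the number
    of comparisons performed."""
    items = list(files)
    for i in range(len(items)):
        # Index of the alphabetically smallest item in the unsorted tail.
        smallest = min(range(i, len(items)), key=lambda j: items[j])
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort(["banana", "apple", "cherry"]))  # ['apple', 'banana', 'cherry']
```

    Because the inner scan touches every remaining item for each of the n positions, the total work grows with n², which is polynomial and therefore firmly inside P.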

    Even problems that don't seem to be solvable in polynomial time, such as factoring a large number, may be checked in polynomial time. To check whether someone has factored a large number, for instance, all you have to do is multiply the factors together. A problem checkable in polynomial time is called “NP.” Clearly, all P problems are NP: If you can solve something in polynomial time, you can certainly check someone else's solution in polynomial time. In 1971, however, computer scientist Stephen Cook asked whether the converse holds—whether every NP problem is also a P problem.
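    The asymmetry between solving and checking can be sketched in a few lines (an illustrative sketch; the function name is ours): verifying a claimed factorization takes one pass of multiplication, while no known polynomial-time algorithm finds the factors in the first place.

```python
def check_factorization(n, factors):
    """Verify a claimed factorization of n in polynomial time:
    multiply the proposed factors together and compare with n.
    Factors of 1 (or less) are rejected as trivial."""
    product = 1
    for f in factors:
        if f <= 1:
            return False
        product *= f
    return product == n

print(check_factorization(15, [3, 5]))      # True
print(check_factorization(2021, [43, 47]))  # True: 43 * 47 = 2021
print(check_factorization(2021, [41, 49]))  # False
```

    The check runs in time proportional to the input, yet producing the list [43, 47] from 2021 alone would require a search; for numbers hundreds of digits long, that search is what keeps factoring-based ciphers secure.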

    The answer appears to be “no”; NP problems such as factoring large numbers don't have any known polynomial-time solutions. But proving this is another matter—and a matter of some gravity. Mathematicians have proven that the hardest type of NP problems, called NP-complete, are equivalent. Thus a polynomial-time algorithm for one NP-complete problem can be tweaked to crack them all—including computer ciphers. “You could use [one] to break any encryption scheme,” Cook says.

    The Poincaré conjecture. In 1904, French mathematician Henri Poincaré was studying the classification of shapes in space, a field known as topology. One powerful way of classifying these shapes is to observe the behavior of shrinking loops of string on the surface of an object. For example, if you place a loop on the surface of a basketball, as it shrinks, it will always shrivel up into a point. On the other hand, a loop of string around a doughnut might not be able to shrivel completely: It could get stuck if it is looped around or through the doughnut's hole.

    For two-dimensional surfaces such as the skin of a basketball or the glaze on a doughnut, the behavior of the shrinking loops completely describes the type of surface you're dealing with. If you know that every loop on a given surface shrinks to a point, then the surface is topologically equivalent to a sphere. Poincaré conjectured that the loop-closing test also holds true in the next dimension up, for three-dimensional surfaces. But he never proved or disproved his conjecture, and neither has any other mathematician.

    The conjecture has been proven for every other dimension; the three-dimensional case alone remains. “The fact that the problem is still unsolved after so long is rather shocking,” says John Milnor, a mathematician at the State University of New York, Stony Brook.

    The Birch-Swinnerton-Dyer conjecture. This problem lies in the same realm of mathematics that Andrew Wiles used to prove Fermat's Last Theorem half a decade ago. Both hinge on the mathematical properties of geometric figures called elliptic curves: the set of points that solve an equation of the form y² = x³ + ax + b. The Birch-Swinnerton-Dyer conjecture, which was formulated in the 1960s, is concerned with “rational” points on the curve, that is, points on the graph where both x and y are rational numbers. Associated with each such elliptic curve is a mathematical object called an “L-function”—basically, a formula that encodes the information about the curve in a different form. The conjecture states that there are an infinite number of rational points on a curve if and only if the curve's L-function equals zero at a certain value. Although the problem is abstract, it is related to questions about the areas of right triangles with rational-sized sides, which Cambridge mathematician John Coates calls the “oldest unsolved major problem in mathematics.”
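    Testing whether a given rational point lies on such a curve is simple with exact rational arithmetic (a minimal sketch; the example curve y² = x³ − x and the function name are our own choices, not taken from the conjecture's literature):

```python
from fractions import Fraction

def on_curve(x, y, a, b):
    """Check whether the rational point (x, y) satisfies the elliptic
    curve equation y^2 = x^3 + a*x + b, using exact fractions so that
    no floating-point rounding can give a false answer."""
    return y * y == x * x * x + a * x + b

# Illustrative curve y^2 = x^3 - x, i.e. a = -1, b = 0.
a, b = Fraction(-1), Fraction(0)
print(on_curve(Fraction(0), Fraction(0), a, b))     # True:  0 = 0
print(on_curve(Fraction(1), Fraction(0), a, b))     # True:  0 = 1 - 1
print(on_curve(Fraction(1, 2), Fraction(1), a, b))  # False: 1 != -3/8
```

    The hard part, which the conjecture addresses, is not checking individual points but deciding whether the full supply of rational points on a curve is finite or infinite.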

    The Hodge conjecture. Like the Birch-Swinnerton-Dyer conjecture, the Hodge conjecture tries to link two mathematical concepts. In the branch of mathematics known as algebraic geometry, mathematicians try to combine abstract algebra, which studies the relations and symmetries of numbers, with geometry, which studies shapes in various spaces. Hodge cycles are structures that have a great deal of algebraic power but no obvious geometric interpretation. Algebraic cycles have a geometric interpretation—they are related to the intersection of curves in space—but are less powerful algebraically. The Hodge conjecture links the two, stating that a Hodge cycle can be written as a sum of algebraic cycles, combining the power of the former and the easy interpretation of the latter.

    Yang-Mills existence and mass gap. Problem number 5 is inspired by a branch of physics known as Yang-Mills theory, which describes particles by using the language of mathematical symmetries. Even though Yang-Mills theory has enabled physicists to unify the electromagnetic, weak, and strong forces, it's not certain that reasonable solutions to Yang-Mills equations actually exist—and if they do, whether those solutions will have a “mass gap” that explains why physicists can't isolate quarks. “There's no real outline or idea [for] how to go about this,” Jaffe says.

    Navier-Stokes existence and smoothness. This problem concerns a set of differential equations that describes the motion of incompressible fluids: the Navier-Stokes equations. Although they're relatively simple-looking, the three-dimensional Navier-Stokes equations misbehave badly. “You can set up Navier-Stokes with nice, smooth, reasonably harmless initial conditions,” and the solutions can wind up being extremely unstable, says Princeton mathematician Charles Fefferman. “People think they see breakdowns—a singularity develops. It appears to be very, very bad.” If mathematicians could tame the outrageous behavior of Navier-Stokes, it would dramatically alter the field of fluid mechanics. “To understand the behavior of fluids would have a very big effect in science and technology, and also in mathematics,” Fefferman says.

    The Riemann hypothesis. No most-wanted list would be complete without this, the granddaddy of mathematical mysteries. The hypothesis was first published in 1859 by German mathematician Bernhard Riemann, who was investigating the properties of the so-called zeta function: ζ(s) = 1 + 1/2ˢ + 1/3ˢ + 1/4ˢ + …. No matter what positive number you plug in for s, you never get ζ(s) to equal zero. However, this is not true in the realm of complex numbers—numbers that can be expressed as a + bi, where i is the square root of −1. In fact, infinitely many “zeros” of the zeta function contain a multiple of i, and they all seem to have a real part of 1/2; that is, they equal 1/2 + bi for some real number b. “Seem” is the key word, however. Although more than a billion known zeros follow the pattern, no one has proved that they all do.
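    The behavior of the zeta sum for real s > 1 is easy to probe numerically (a rough sketch of our own, with an arbitrary truncation point); at s = 2 the partial sums approach the known value π²/6, and they stay comfortably away from zero:

```python
import math

def zeta_partial(s, terms=100_000):
    """Partial sum of the zeta series 1 + 1/2^s + 1/3^s + ...,
    which converges for real s > 1. Truncated at `terms` terms,
    so the result is an approximation from below."""
    return sum(1.0 / n**s for n in range(1, terms + 1))

print(zeta_partial(2))      # close to pi^2/6, about 1.6449
print(zeta_partial(3) > 0)  # every term is positive, so the sum never vanishes
```

    The interesting zeros live off this real axis, in the complex plane, where evaluating ζ requires analytic continuation rather than this simple series; the code above only illustrates why no positive real s gives zero.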

    If the hypothesis is true, it affects almost all other branches of mathematics—for instance, it would tell mathematicians about the distribution of prime numbers. “To me it is the central problem in pure mathematics, much more so today than it was 50 years ago,” says Enrico Bombieri, a mathematician at the Institute for Advanced Study in Princeton, New Jersey. The zeta function is closely related to the L-functions of algebraic geometry, for instance, so the Riemann hypothesis affects the same areas of mathematics as did Wiles's proof of Fermat's Last Theorem. “The connection with other parts of mathematics is getting deeper,” Bombieri says.


    Turn-of-the-Century 'Hit List' Showed the Limits of Mathematical Ambition

    1. Charles Seife

    David Hilbert wouldn't be thrilled by the fate of the 23 problems he challenged his colleagues to solve a century ago. Although his speech in 1900 helped set the course of mathematics in the 20th century, perhaps the most striking and depressing discovery in 20th century mathematics was that Hilbert's grand scheme was a fool's errand.

    The Hilbert problems became the most-wanted list in mathematics. Some proved relatively easy. Hilbert's third problem, which dealt with cutting up tetrahedra, was solved within 2 years by Hilbert's graduate student Max Dehn. Others—such as number eight, the Riemann hypothesis—remain unsolved to this day.

    Even though the 23 problems span a wide variety of mathematical “flavors,” many of them share an underlying theme. Hilbert desperately wanted to bring mathematics back to its rigorous, axiomatic roots, sweeping aside ad hoc assumptions and starting anew with a bare minimum of statements assumed to be true. Toward that end, Hilbert's second problem asked whether the axioms of logic can be proved to be consistent, while his fourth problem invited mathematicians to explore geometries similar to Euclidean geometry but with some of Euclid's axioms weakened or removed entirely. Hilbert's sixth problem asked whether physics can be axiomatized as mathematics had been. Ultimately, Hilbert wanted only a few axioms to govern logic, arithmetic, algebra, geometry, and all other areas of mathematical and scientific thought.

    At the turn of the century, this minimalist vision seemed like a reasonable, if ambitious, program. A new branch of mathematics, set theory, held hints of a power deep enough to unify all mathematical thought. By invoking only a few axioms to lay down the laws of manipulating sets of objects or similar concepts, mathematicians could set the foundations of logic, create all the numbers, build up the rules of arithmetic, and then proceed onward to geometry and other pursuits. This idea reached its culmination in the 1910s, when Bertrand Russell and Alfred North Whitehead started with a mere handful of axioms and then took roughly 1000 pages of dense mathematical scribbling to prove that 1 + 1 = 2.

    But the Hilbert program was doomed by yet another mathematician, Kurt Gödel. In 1931, Gödel proved that no self-consistent set of axioms is sufficient to prove everything that is true—that there are always true theorems beyond the ken of whatever axioms you choose.

    Gödel's incompleteness theorem swept away the Hilbert program. There is no overarching set of axioms that allows you to codify the whole of mathematics. Gödel's melancholy conclusion was perhaps the most significant discovery in 20th century mathematics.

    Unlike Hilbert's problems, the seven problems chosen by the Clay Mathematics Institute (CMI) in Cambridge, Massachusetts, do not share a unifying vision. Nor do they hew to the cutting edge of mathematics. “It's exactly the opposite,” says Arthur Jaffe, president of CMI. Instead of trying to promote a mathematical “program” as Hilbert did, Jaffe says the CMI chose to “focus on classical problems, each very important, each having resisted solution.” But mathematicians will be grateful to CMI if its awards affect mathematics a fraction as much as did Hilbert's failure.

  18. Interfering With Gene Expression

    1. Jean Marx

    An explosion of recent evidence is revealing a new cellular pathway for silencing specific genes at the messenger RNA level that may protect organisms against viruses and genetic damage

    In some ways, it was right under researchers' noses. For a decade, various groups had been seeing clues that cells have some novel way of shutting down or “silencing” genes. But they had been working in seemingly different fields and in different organisms, and they had given the murky mechanism a variety of names: cosuppression, quelling, and RNA interference (RNAi), among others. Only in the past year or so have researchers realized that they are all working on the same puzzle—and that they have stumbled upon what seems to be a critical pathway that cells use to protect themselves against viruses and certain kinds of genetic damage, and possibly to control normal gene expression as well. “Only recently, we've all come to realize that we are working on the same thing,” says Ronald Plasterk of the Hubrecht Laboratory in Utrecht, the Netherlands. “We have a new field coming together.”

    Cell biologists have known for some time that genes can be silenced directly by chemical modifications that prevent the first step of gene expression: the transcription of the gene into the messenger RNA (mRNA) that ultimately directs protein synthesis. But the new work is revealing a novel method of silencing that kicks in later—preventing gene expression by degrading the mRNAs. This mechanism, which is very widespread—it occurs in organisms ranging from the mold Neurospora to plants, worms, simple vertebrates like the zebrafish, and perhaps even mammals—is apparently triggered when the cell senses some kind of danger. This might be, for example, an invading virus, or the mobilization of the bits of repetitive DNA called transposons, which can jump about the genome causing mutations if they happen to land in a gene, or possibly the production of an abnormal mRNA.

    Whatever the exact cause, the cell directs an RNA-cutting enzyme (ribonuclease) to specifically degrade just the RNAs related to the trigger while other genes remain unaffected. What's more, by learning how the cell performs this feat, researchers have been able to devise a new method of inactivating specific genes—an ability that should be very useful for studying gene function and might also be used to create genetically modified plants and other organisms. “I believe RNA interference is going to be a very general and very exciting phenomenon,” says Phillip Sharp of the Massachusetts Institute of Technology (MIT).

    Clues to the existence of the new gene-silencing mechanism began surfacing about 10 years ago, when researchers trying to perform various genetic manipulations found that the target organisms sometimes responded in totally unexpected ways. Early examples came from two independent groups, one led by Rich Jorgensen, now at the University of Arizona in Tucson, and the other by Joseph Mol of the Free University in Amsterdam, the Netherlands. These researchers were trying to create petunia plants with a more intense purple color by adding extra copies of genes needed for pigment synthesis. Their experiments failed in an intriguing way: Some of the resulting transgenic plants turned out to have blooms that were all or part white, indicating that pigment production had been shut down rather than ratcheted up. Indeed, not only had the introduced gene, or transgene, not been expressed, but somehow the plant's own pigment-synthesizing gene had been inactivated as well—a phenomenon that came to be known as cosuppression.

    A few years later, plant researchers David Baulcombe of the Sainsbury Laboratory in Norwich, United Kingdom, and William Dougherty of Oregon State University in Corvallis noted something similar while trying to genetically engineer virus-resistant plants. For example, Baulcombe and his colleagues inserted into tobacco plants the gene encoding the replicase enzyme that potato virus X needs to reproduce. They hoped that excess replicase would check the growth of the virus by disrupting its life cycle. But while some of Baulcombe's plants became resistant to the virus, others didn't.

    On further examination, the group found that the resistant plants were making very little of the replicase, while the susceptible ones were making a lot. “That was odd,” Baulcombe recalls. “You'd expect it to be the other way around.” Eventually, the researchers worked out that those plants that became resistant did so because they had silenced both the replicase gene in potato X virus and the transgene the researchers had put into the plants—a situation, Baulcombe says, that looked a lot like cosuppression.

    Plant researchers weren't the only ones getting odd results from their genetic manipulations. Carlo Cogoni and Giuseppe Macino of the Università di Roma La Sapienza in Italy found that introduction of a gene needed for carotenoid synthesis in the mold Neurospora crassa led to inactivation of the mold's own gene in about 30% of the transformed cells. They called this gene inactivation “quelling.”

    Anomalous results also turned up in experiments in which researchers such as Su Guo and Kenneth Kemphues of Cornell University in Ithaca, New York, put so-called antisense RNA or DNA into cells from various organisms. These nucleic acids are constructed to be complementary to an active gene or mRNA—thus the name antisense—with the idea that they would bind to the gene or the mRNA, blocking its activity and preventing the synthesis of the corresponding protein. But researchers often found that the “sense” nucleic acids they used as controls, which shouldn't bind to the active genes or mRNAs, proved just as effective as the antisense constructs in blocking gene expression.
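
    In sequence terms, an antisense strand is the reverse complement of its target, which is what allows the two to base-pair. A minimal Python sketch, using a made-up mRNA fragment:

```python
# Build the antisense (reverse-complement) strand of an RNA sequence.
# The sequences here are invented for illustration.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(rna):
    """Return the reverse complement of an RNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

sense = "AUGGCUUAC"        # hypothetical mRNA fragment
print(antisense(sense))    # -> GUAAGCCAU
```

    A sense control, by contrast, has the same sequence as the mRNA itself and should have nothing to pair with, which is why its silencing activity came as such a surprise.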

    Solving the puzzle

    By the mid-1990s, several teams had evidence that nucleic acids introduced into cells could specifically silence genes with similar sequences. The question was, how? By tracking down the answer, researchers found the new gene-silencing mechanism. The plant researchers provided one clue: Their measurements showed that the silenced gene was transcribed, even though the corresponding mRNA all but disappeared from the cell. In other words, the silencing was posttranscriptional, and it somehow led to the degradation of the mRNA so that the gene's protein product could no longer be made.

    Another clue came from researchers working on both the nematode worm Caenorhabditis elegans and on the fruit fly. While trying to figure out what was going on with antisense RNA in the worm, Andrew Fire of the Carnegie Institution of Washington in Baltimore, Maryland, and Craig Mello of the University of Massachusetts Cancer Center in Worcester, with other colleagues, injected worms with either an antisense RNA, the sense version, or a double-stranded molecule in which the sense and antisense RNAs were bound together. Much to their surprise, Mello says, “the double-stranded RNA was much more effective than either the sense or antisense RNA” in silencing the corresponding C. elegans gene. Similarly, Jason Kennerdell and Richard Carthew of the University of Pittsburgh found that injecting fly embryos with double-stranded RNAs was a highly effective way to inhibit specific genes. Even now, though, researchers don't fully understand why double-stranded RNA has this effect, which came to be known as RNA interference.

    More surprising still, silencing is not limited to the cells where it's initiated. For example, Mello's group found that when they injected double-stranded RNA into the intestines of worms, it not only triggered RNAi in all parts of the animals, but the silencing effects could even be transmitted through the germ line to one or more additional generations before petering out. Interference spreads in plants as well, as shown independently by Baulcombe's team and by that of Hervé Vaucheret of the Agricultural Research Institute in Versailles, France. The fact that the specificity of gene silencing is retained even as the interference spreads suggests that the signal is carried by a nucleic acid, Baulcombe notes.

    Within the past 6 months or so, researchers have begun to identify the biochemical machinery that brings about RNAi. They are tackling the problem from two directions, one genetic, screening for genes that, when mutated, lead to loss of posttranscriptional silencing in the various organisms, and the other biochemical, trying to isolate the various molecules involved.

    For starters, the genetic work is providing concrete evidence that quelling, cosuppression, and RNAi are likely to be one and the same. In the 16 March issue of Nature, for example, Plasterk's team described work showing that genetic mutations that block RNAi in C. elegans also block cosuppression in the worm—indicating that they both use the same biochemical machinery. And in that same issue, Cogoni and Macino reported that a gene they called quelling defective 2 (qde2), because mutations in the gene block quelling in Neurospora, is in fact the mold's equivalent of a gene called rde1 (RNAi-defective 1) originally identified by Mello's team in C. elegans.

    So far, researchers have identified roughly a half-dozen genes that are needed for RNAi. Their exact functions aren't yet known, because they were found by submitting the various organisms to mutagenic treatments and then screening to see whether RNAi still works. But the sequences of the genes that have been cloned provide some intriguing clues. For example, Plasterk's team in Utrecht cloned a gene called mut-7 and found that its sequence suggests that it's a ribonuclease, an enzyme that cleaves RNA. This makes sense, because mRNA degradation is apparently the final step of the RNAi process.

    The biochemical work also suggests that ribonucleases are involved—and that the enzyme has a novel way of recognizing its specific mRNA targets. Some of this work comes from Baulcombe and his Sainsbury colleague Andrew Hamilton. As reported last fall, they found that short pieces of antisense RNAs, corresponding to the particular gene being silenced and only about 25 nucleotides long, were present in tomato plants where posttranscriptional gene silencing was occurring (Science, 29 October 1999, p. 950). More recent results, described in the 31 March issue of Cell by a team including Phillip Zamore of the University of Massachusetts Medical School in Worcester, Thomas Tuschl of the Max Planck Institute for Biophysical Chemistry in Göttingen, Germany, and MIT's Sharp and David Bartel, point to short RNAs playing a role in RNAi in Drosophila as well.

    Working with a test tube system they previously developed for studying RNAi, the researchers put in a double-stranded RNA that triggers the breakdown of the mRNA encoding the luciferase protein. They found that the double-stranded RNA is broken down into short segments of 21 to 23 nucleotides. Furthermore, Zamore says, “when we looked at the fate of the mRNA, we found that it is also cleaved roughly every 22 nucleotides.” This suggests, he says, that the fragments from the double-stranded RNA are directing the cleavage.

    An indication of how the fragments are doing that comes from Gregory Hannon's team at Cold Spring Harbor Laboratory on Long Island. These workers partially purified a ribonuclease from Drosophila cells in which they had induced RNAi. Intriguingly, their findings suggest that the enzyme associates with an RNA roughly 25 nucleotides long—the same length as the RNAs detected by Baulcombe and the Zamore-Tuschl team.

    Putting all this together, it appears that a ribonuclease first cleaves the double-stranded RNA, producing the short fragments. Then the enzyme picks up the fragments, which direct it to any mRNA whose sequence matches, and the enzyme breaks down the messenger as well. “Here we have an enzyme that essentially has a guide sequence—a piece of RNA bound to it—that determines its specificity,” Zamore says.
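
    The guide-sequence picture can be caricatured in a few lines of code. In this Python toy (sequences invented for illustration), the “enzyme” cuts a message wherever the complement of its bound guide occurs and leaves everything else alone:

```python
# Toy model of guide-directed cleavage: the nuclease carries a short
# guide RNA and cuts an mRNA at every site the guide can base-pair with.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def guide_target(guide):
    """Sequence the guide base-pairs with (its reverse complement)."""
    return "".join(COMPLEMENT[base] for base in reversed(guide))

def cleave(mrna, guide):
    """Cut the mRNA at every matching site; return the fragments."""
    site = guide_target(guide)
    return mrna.split(site)  # unmatched messages come back whole

mrna = "AAAGGGCUCUCAAAGGGCUCUCAAA"  # made-up message with two target sites
guide = "GAGAGC"                    # guide complementary to the site GCUCUC
print(cleave(mrna, guide))          # -> ['AAAGG', 'AAAGG', 'AAA']
```

    A message with no match to the guide passes through uncut, mirroring the specificity Zamore describes.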

    Another gene whose function is at least tentatively known may aid posttranscriptional gene silencing by making extra copies of the RNA triggers. This is the qde1 gene of Neurospora, cloned last year by the Cogoni-Macino team. The sequence of the protein encoded by this gene indicates that it is an RNA-directed RNA polymerase. As such, its activity may lead to the synthesis of more of the small RNAs that guide the mRNA-degrading ribonuclease to its targets.

    Such RNA amplification could help explain how RNAi spreads through plants and other organisms. As Hannon notes, “You either have to have amplification, or the enzyme [degrading the mRNA] has to be ferocious.” More work will be needed to establish exactly what the RNA polymerase does in RNAi, but other investigators now have evidence pointing to some involvement. In today's issue of Cell, both Baulcombe's team and Vaucheret's describe an RNA polymerase from Arabidopsis plants that is needed for RNAi.

    How RNAi helps the organism

    Although much of the evidence for RNAi comes from experiments in which researchers have artificially perturbed cells by putting in foreign nucleic acids, they are finding that it provides essential services for the organism. Several groups, including Baulcombe's and that of Vicki Vance of the University of South Carolina in Columbia, have evidence that plants use RNAi as a defense against infection with viruses.

    It turns out that when viruses invade plant cells, the cells silence the viral genes needed for reproducing and spreading. Such silencing may be triggered by the double-stranded RNAs that plant viruses produce as part of their life cycle. Indeed, Baulcombe, Vance, and others have shown that, in the continuing evolutionary war to survive and reproduce, plant viruses have evolved genes that enable them to suppress silencing.

    RNAi may also help keep the transposable elements that litter genomes from jumping around and causing harmful mutations. Both Plasterk's team and Mello, Fire, and their colleagues found that mutations that knocked out RNAi in C. elegans led to abnormal transposon movements. “Transposons were jumping out all over the place,” Plasterk says. “These experiments tell us that RNAi's function is to protect your genome from transposons.”

    There are also hints that RNAi may be important in embryonic development. For example, Eleanor Maine of Syracuse University in New York and her colleagues found that the Ego1 protein, which is needed for germ line development in C. elegans, is structurally related to the RNA polymerase made by the Neurospora qde1 gene. Her evidence also suggests that Ego1 participates in RNAi in worms. When she knocks out the gene, she finds the resulting animals are defective in RNAi directed at some genes expressed in the worm germ line.

    Moreover, Florence Wianny and Magdalena Zernicka-Goetz of the University of Cambridge, U.K., have shown that they can elicit RNAi against certain genes in early mouse embryos by injecting them with the corresponding double-stranded RNAs. This indicates that RNAi could be used to inactivate specific genes in mammals, just as in the worm and fly, and could thus be a valuable tool for studying gene function in mammalian development.

    Although the explosion of recent results has provided a good start toward understanding RNAi, researchers know that many questions still remain to be answered. They have to pin down the functions of the genes they have identified so far, and they say there are additional genes waiting in the wings to be identified.

    Then there is the big question of whether RNAi, which is posttranscriptional and occurs in the cytoplasm, ties together in any way with the transcriptional silencing known to happen in the nucleus. Again, there are hints that it might. Plant researchers in particular have found that genetic manipulations that trigger RNAi often correlate with addition of methyl groups to the corresponding genes. Such methylation can lead to transcriptional shutdown of genes, but it's unclear which comes first in this situation.

    Baulcombe and his colleagues have suggested that methylation of transgenes that have become inserted in the genome might lead to formation of abnormal transcripts, rather than complete transcriptional inhibition. These aberrant RNAs might then be selected for copying by an RNA polymerase to make double-stranded RNAs, thus triggering posttranscriptional silencing. But other researchers, such as Michael Wassenegger of the Max Planck Institute for Biochemistry in Martinsried, Germany, and Marjorie Matzke of the Austrian Academy of Sciences in Salzburg, have found that certain RNA constructs can lead to methylation of the corresponding gene—an indication that the RNA is somehow talking back to the DNA. If confirmed, Mello says, “such retrograde flow of information would be really remarkable.”

    But much of what researchers have already learned about RNAi has been remarkable. As Vance puts it, “It's been so incredibly cool.”

  19. Matching the Transcription Machinery to the Right DNA

    1. Elizabeth Pennisi

    The structure of a tandem set of folds called bromodomains reveals how they help set the stage for transcription

    Gene transcription in the nucleus is a bit like an elaborate wedding at St. Patrick's Cathedral in New York City. Imagine that the “aisle” is the DNA of a gene that's going to be transcribed—that is, copied into a messenger RNA, as the first step in protein synthesis. The ushers and bridesmaids are the proteins that line up along the DNA to prepare it for the enzymes that will do the copying. In either case, before the walk down the aisle can occur, all sorts of players must get in their proper places—despite what often seems like total chaos.

    Cell biologists have long wondered how this molecular event is choreographed. Now they have a new clue about how one key member of the transcription wedding party, a protein called TAFII250, knows where to stand. Researchers knew that this protein is an essential part of the transcribing machinery—a complex of many proteins, some in common for all genes and some unique to particular gene targets—but its role was unclear. Work described on page 1422 by structural biologist Robert Tjian and his colleagues at the University of California, Berkeley, may now provide an answer: The protein helps direct the transcription machinery to the right DNA targets and gets the DNA into the correct configuration for transcription to occur.

    Combining both structural analysis of a portion of TAFII250 and biochemical studies of its behavior, the Berkeley workers find that the protein apparently homes in on one of the histone proteins that wrap the DNA and form beadlike structures called nucleosomes. In particular, the work suggests that two protein motifs in TAFII250 called bromodomains recognize and bind to two acetyl groups added to the histone. Such histone acetylation is known to be a signal for activating gene transcription and was at one time thought to work by disentangling the DNA from the nucleosome. Only then did the transcription machinery have access to DNA, or so cell biologists thought.

    But the Tjian team's findings suggest that nucleosomes play a more dynamic role in controlling gene expression: It looks as though the proteins involved in transcription might actually start to work while the DNA is still wrapped up in the nucleosome. “It seems pretty clear that there are different states of wrapping,” with transcription proteins helping to control how much unraveling of the DNA occurs, says Steve Buratowski, a molecular biologist at Harvard Medical School in Boston. Until now, most test tube studies of transcription have involved naked DNA, but this work indicates that researchers need to start looking at the DNA in nucleosomes as well.

    The new findings also add to a growing body of evidence that bromodomain recognition of acetyl groups plays an important role in bringing about transcription. The strength of that recognition suggests, too, that bromodomain-acetyl connections assist in other cellular transactions. “I am very excited about this paper,” says Ming-Ming Zhou, a structural biologist at Mount Sinai Medical School in New York City, whose work first hinted at that possibility. What's more, he adds, given how common bromodomains are, “I won't be surprised if nature doesn't use bromodomains to mediate many protein-protein interactions.”

    Three years ago when Zhou first decided to study bromodomains, researchers knew that these protein motifs are very common—they've been found in approximately 50 proteins—but their function was largely a mystery. In cells, bromodomain proteins associate with the so-called HAT enzymes (for histone acetyltransferases), which add acetyl groups to the amino acid lysine in histones and are thus involved in regulating gene transcription. One possibility, which had been suggested by cell biologist C. David Allis of the University of Virginia in Charlottesville, is that bromodomains direct HATs to acetylated histones, where they can enhance the acetylation and activate gene transcription. His work had indicated that TAFII250 was itself a HAT enzyme (Science, 10 January 1997, p. 155).

    To explore whether bromodomains do in fact recognize acetylated histones, Zhou turned to a bromodomain-containing protein called P/CAF. He and his colleagues first worked out the general structure of the P/CAF bromodomain. Then they made a synthetic version and tested its ability to bind to bits of proteins that had acetyl groups attached to some of their amino acids. As they reported last year in the 3 June issue of Nature, the bromodomain tends to stick to acetylated lysines. “We were pleased,” recalls Allis, because that result supported his ideas. Still, he says, “the binding constants were really pretty wimpy,” which left him wondering if the binding was physiologically relevant.

    Meanwhile, Tjian's group was doggedly pursuing the structures and functions of the transcription complex and decided to take a closer look at the TAFII250 protein, which has two bromodomains side by side. After first making crystals of the piece of TAFII250 containing the bromodomains, Tjian and his colleagues used x-ray crystallography to determine its structure to a resolution of 2 angstroms. This revealed that each of the two bromodomains has a pocket just the right size to fit an acetylated lysine.

    Tjian's team then tested the fit between the TAFII250 segment containing the bromodomains and small synthetic proteins carrying pairs of acetylated lysines at varying distances. The researchers found that the TAFII250 bromodomain region bound more tightly to some of the test peptides than others. The best fit seemed to be with a peptide having the same acetylation pattern as the H4 type of histone, with the acetyl groups separated by seven amino acids. Its binding to the double bromodomain was 70 times stronger than the binding Zhou found with the single bromodomain protein—a finding that Allis calls “exciting.”

    It suggests that in cells, proteins such as P/CAF, with just one bromodomain, likely pair up with a partner carrying another bromodomain to find a particular pattern of acetylated lysines. If their bromodomains align correctly, their weak ability to connect to a histone tail could improve dramatically, he suggests. There they may help unwind DNA and promote transcription.

    The strength of the double bromodomain connection needs to be tested in live cells, but already cell biologists are seeing how the result might fit into their view of how transcription occurs. They are coming to think that histones become acetylated only at specific spots along the DNA, likely near a promoter, a DNA sequence just upstream of a gene that acts as its on-off switch. Nobody really knows what puts the first acetyl groups on the histones, but once there, they may flag down proteins such as TAFII250 with double bromodomains. Then TAFII250 could add more acetyl groups to the histone, and the resulting hyperacetylation could help promote the binding of other proteins needed to get transcription going.

    In this way, “you essentially could kill two or three birds with one stone,” Buratowski says. “You have a positive feedback loop, and in addition you have a way to recruit [other proteins] to the site.” Moreover, in the April issue of Genes and Development, Buratowski described a double bromodomain protein related to TAFII250 that associates with the transcription machinery in yeast—suggesting that these double bromodomains are key for transcription in a wide range of organisms.

    Equally intriguing is the possibility that the configuration of the bromodomains relative to one another might help determine just which genes out of the many thousands in a cell are to be turned on at any one time. “There could be a whole language of bromodomains,” says Buratowski. Whereas TAFII250's bromodomains recognize and bind to acetylated lysines about seven amino acids apart, another protein's bromodomains could be configured to recognize those closer together or farther apart. As a result, Allis suggests, TAFII250 and other transcription factors could be targeted to promoters for specific genes.

    He and others plan to explore these ideas by looking at other proteins with bromodomains. Tjian, meanwhile, has kept his sights on the transcription machinery that TAFII250 is part of. These results, along with new, albeit fuzzy, structures of transcription complexes (Science, 10 December 1999, p. 2153), have invigorated his quest to understand the actions not just of TAFII250, but of all the players in the transcription “wedding.” “The transcription machinery is unbelievably complex,” he admits, “but we will absolutely be able to figure it out.”