News this Week

Science  19 Mar 1999:
Vol. 283, Issue 5409, pp. 1822


    Academic Sequencers Challenge Celera in a Sprint to the Finish

    1. Elizabeth Pennisi*
    1. With reporting by Dennis Normile.

    The Human Genome Project has just entered warp speed. Several large grants announced this week by the U.S. government and the Wellcome Trust, a U.K. charity, may make it possible for researchers to determine the order of the 3 billion bases in the human genetic code much earlier than expected—by the spring of 2000.

    A year ago, even the most optimistic project leaders were predicting that the human genome would not be sequenced before 2005. Then, boosted by some early successes and prodded by a private competitor, they announced last October that they could deliver a “rough draft” by the end of 2001. Now, they've advanced that date by 18 months. “Every time we talk, we move [the deadline] up,” says Robert Waterston, director of the sequencing center at Washington University in St. Louis.

    The money for this accelerated schedule comes largely from the National Human Genome Research Institute (NHGRI). On 15 March, it announced that it had selected three major centers to do high-volume human DNA sequencing, awarding them $81.6 million over the next 10 months. NHGRI also expects to provide them with comparable support over the following 4 years. The winners include Waterston's group at Washington University ($33.3 million), Richard Gibbs's team at Baylor College of Medicine in Houston ($13.4 million), and Eric Lander's team at the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts ($34.9 million).

    At the same time, the Wellcome Trust upped this year's support of the human genome sequencing effort by the Sanger Centre in Cambridge, England, from $57 million to $77 million. These four, together with the U.S. Department of Energy's (DOE's) Joint Genome Institute, have promised to sequence at least 90% of the human genome with fivefold coverage by March 2000. After that, they plan to spend 2 or 3 years combing through the data and creating an accurate version of the whole genome.

    The new agenda, announced by Francis Collins of NHGRI and Michael Morgan of the Wellcome Trust, has been greeted with both joy and dismay. Many researchers are delighted that human sequence data will be made available to the public more rapidly. But some fear that the smaller sequencing centers, left out of this round of competition, may become obsolete. Some international partners in the effort were upset, as well.

    Neither German nor Japanese groups were involved in setting this new deadline. “The policy was not agreed upon in the same international spirit as had [been cultivated] in the past,” says Andre Rosenthal, who heads the Institute of Molecular Biology in Jena, Germany, and hopes that his group will do about 7% of the human genome. Rosenthal is determined to have Germany contribute its share—as long as funding holds out—but “this announcement gives the impression that [we're] not needed,” he complains.

    For some researchers, however, the new sequencing target seems like a natural outgrowth of experiments that began several years ago. In 1996, NHGRI began funding labs to assess the feasibility of doing high-speed sequencing. “The problems that everyone thought were limiting 3 years ago have all been solved,” says Lander, who ran one of the pilot projects. As a result, “we have a rock-solid way to sequence the genome accurately.” At the same time, increasing amounts of human data from the pilot projects, along with the complete sequences of organisms such as yeast and nematode, “created a real enthusiasm for more [DNA sequence],” Waterston says.

    Despite such enthusiasm, some researchers continue to worry that too fast a pace might degrade the quality of data. Those qualms were debated intensely within the genome community—until May 1998. That's when sequencing maverick J. Craig Venter announced that he was teaming up with the Perkin-Elmer Corp. of Norwalk, Connecticut, to sequence the human genome by 2001. Academic researchers became concerned that Venter's new company, Celera Genomics of Rockville, Maryland, might patent the human genetic code, and the Human Genome Project participants responded by stepping up their own efforts. “Our community must compete with or beat Venter's efforts,” says Yoshiyuki Sakaki, a molecular biologist and sequencer at Tokyo's Institute of Medical Science.

    Morgan and Collins decided that the best way for nonprofit institutions to keep up was to scale up, concentrating resources in the most efficient centers. And the three groups that won NHGRI's big awards have already hunkered down to work with the DOE and Sanger Centre. Every Friday they get together by conference calls for a group lab meeting. They share opinions about capillary automated sequencers and efficient management. The pressure has created “a sense of needing to work together to get the job done,” says Waterston.

    His crew, for example, is fingerprinting two sets of clones so that the pieces of DNA they contain can be used more efficiently. In this way, the Human Genome Project participants will be using familiar processes, tracking the location of each clone along the genome while avoiding the time-consuming step of making detailed, sequence-ready maps. “That will be of great assistance,” says Collins. But this approach contrasts with the “shotgun” random sequencing of the whole genome being undertaken by Celera.

    As these high-production labs step out, they could be leaving behind some of the field's pioneers—like Bruce Roe of the University of Oklahoma, Norman, and Glen Evans of the University of Texas Southwestern Medical Center at Dallas. Both had genome center grants before NHGRI's pilot projects began in 1996. And they worked hard last year, with half a dozen others, to meet NHGRI requirements that each center should complete 7.5 megabases of finished sequence data. But last fall, the NHGRI advisory council decided to rank the competitors for new grants in two groups: those that had at least 15 megabases of sequence under their belt, and those that did not. That put some—including Evans and Roe—in the second tier, still awaiting funding. Although Evans applauds NHGRI's fast-paced approach, he also feels a bit left out in the cold. “It's kind of upsetting for all of us,” he says.

    When the deadline for completing the human genome rolls around next year, some researchers fear that interest in closing gaps in the data and removing errors will fade. Evans, for example, worries that a highly accurate, complete version of the genome may never be done. But Collins thinks these worries are not justified. “We didn't intend to pull the plug on the other centers,” he says, although he does not know how much money will be available for them. And he insists that next year's draft genome is on “a direct path” to the goal of producing a polished, error-free version. Morgan agrees: “We are determined to finish,” he says.


    From a Flatworm, New Clues on Animal Origins

    1. Elizabeth Pennisi

    One of nature's more enduring mysteries has been how millipedes, mollusks, snakes, and butterflies came to be. The fossil record shows an eruption of diversity of such groups—all of which have bilaterally symmetrical bodies—during the Cambrian explosion, some 530 million years ago. But fossils of the very first such creatures have been scarce. Now, a living creature, a humble flatworm, may provide some key clues.

    As Jaume Baguñà, a geneticist at the University of Barcelona in Spain, and his colleagues report on page 1919, tiny marine worms called acoels may be one of the closest living representatives of the first bilaterally symmetrical organisms on Earth. Acoels are usually grouped with Platyhelminthes, a group that includes such unpleasant parasites as tapeworms and liver flukes, and whose position in the tree of life has been subject to debate. But using DNA analyses, Baguñà's team concludes not only that the acoels don't belong with other flatworms, but that they alone represent a living relic of the transition between radially symmetrical animals such as jellyfish and more complex bilateral organisms such as vertebrates and arthropods.

    Putting acoels in this key position “is going to stimulate a lot of research,” predicts Julian Smith III, an invertebrate zoologist at Winthrop University in Rock Hill, South Carolina. The results are “quite exciting,” agrees David Jablonski, a paleontologist at the University of Chicago. “We might have one bilateral survivor from before the Cambrian explosion giving us a living window on early metazoan life.”

    Baguñà and Timothy Littlewood, a molecular biologist at The Natural History Museum in London, decided to use molecular studies to evaluate the acoel's placement on the tree of life because anatomical data—including simpler brains, kinked cilia, and a different pattern of development—suggest that acoels may differ from other flatworms. The pair obtained DNA from 18 acoel species from around the world and sequenced the gene for the 18S ribosomal RNA subunit from each. They then compared these data to the same genes from other platyhelminths and from both simpler and more complex organisms.

    The team first removed 16 fast-evolving acoel species from the analysis, because their DNA sequences were so different from those of simpler organisms that the phylogenetic analyses would be suspect. When they used only the two slow-evolving species to represent the group, “the acoels dropped out completely from the rest of the platyhelminths,” notes Mark Martindale, a developmental biologist from the University of Hawaii, Honolulu. The worms ended up branching off from an ancestral animal after the radial jellyfish and their cousins, but before the three major bilateral groups, today encompassing vertebrates, mollusks, and arthropods, began to diverge. By moving into this prime spot on the animal tree of life—close to the first bilateral animal—the acoels and their idiosyncrasies take on new meaning for evolutionary biologists, offering a living link between primitive and more complex animals, says Martindale.

    For example, primitive, radially symmetrical animals have just two types of cells, ectoderm and endoderm, whereas all bilateral animals, including acoels, also have mesoderm. Most animals with three cell layers have a distinct gut lined with mesoderm, but acoels, despite having mesoderm, lack a true gut. “They may be some sort of ‘missing link,’” says Littlewood.

    Acoels also differ from other bilateral animals in the way their cells divide during development. During the early cell divisions (called cleavage) of related bilateral animals, the fertilized egg forms two, then four cells. Then each of those cells gives rise to many small cells, explains Martindale. But the acoel egg divides once, and the two resulting cells immediately generate many small cells. “They have a spiral cleavage pattern that's different,” and may have evolved separately from the pattern seen in most bilateral organisms, says Martindale. That suggests that acoels branched off from all other bilateral animals very early indeed, and that their cleavage pattern represents an early experiment in the evolution of body form. The acoels would therefore possess many of the same genes as the earliest bilateral animal. “It's beginning to look like we are looking at something close to the fuse for the Cambrian explosion,” Jablonski suggests.

    If so, then acoel biology may offer clues as to which traits evolved first in evolutionary history. Acoels go directly from egg to the adult form, skipping the larval stage seen in many more complex organisms, including some platyhelminths. That suggests that larvae evolved later in the tree of life.

    For all these reasons and more, acoels are now quite likely to get more notice. Although the shift out of the Platyhelminthes won't come as a surprise to some, “it will grab the people who teach general biology and shake them up,” predicts Smith. Baguñà thinks the group belongs in its own phylum, but Smith notes that a few other so-called platyhelminths may belong with them. And he would like to see more evidence that the acoels, which have a variety of reproductive strategies, are truly simple. “This is a very strange and diverse set of organisms to be finding as a basal group,” he says. But he agrees that acoels will have their day in the limelight. Says Jablonski, “Acoels have gone from being an obscure group to one that can provide potentially great insight into the radiation [of multicellular animals].”


    Key Molecular Signals Identified in Plants

    1. Marcia Barinaga

    Biologists trying to trace the communication systems that tell plant cells to develop into a leaf or a fruit or a flower, or to fight off deadly pathogens, have found some of the crucial switches and relays but don't know where the wires go. They have identified dozens of proteins called receptor kinases that receive signals from outside the cell, but they've had little luck in finding the signals that trigger specific receptors, or in tracing what happens in the cell once the receptor is activated. Now, two teams report advances toward putting together one such pathway for the plant Arabidopsis thaliana.

    The pathway in question helps control the growth of the specialized region at the tip of the shoot, called the apical meristem, that gives rise to such plant organs as the leaves and flowers. Geneticists have found three genes in that pathway, which were named CLAVATA after the Latin “clavatus,” for “club,” because mutations in the genes cause the meristem to become enlarged and club-shaped. Two years ago, researchers cloned one of the genes, CLAVATA1 (CLV1), and concluded from its sequence that it encodes a receptor kinase. Now, Elliot Meyerowitz of the California Institute of Technology in Pasadena, Rüdiger Simon of the University of Cologne in Germany, and their colleagues report on page 1911 that a protein called CLAVATA3 (CLV3) seems to be the signal, or ligand, that activates the receptor. And in the March issue of The Plant Cell, Steven Clark and his colleagues at the University of Michigan, Ann Arbor, announce that they have found two proteins inside the cell that associate with activated CLV1 and presumably help set in motion the intracellular events that keep meristem size in check.

    “It is a big advance,” says Joanne Chory, who studies plant receptors at the Salk Institute in La Jolla, California. Adds plant biologist John Walker of the University of Missouri, Columbia: “This is giving us direct insight into the mechanism of how [meristem growth control] works.” That in turn may pave the way for altering such agriculturally important traits as fruit size and yield. What's more, the new information will also help researchers figure out how similar receptor kinases that control other plant functions work.

    Even though many plant receptors resemble those that respond to extracellular signals in animal cells—a similarity researchers used to identify CLV1 and other plant receptor kinases—the match is not perfect. The disparities mean that plant researchers cannot conclude from the comparison alone what signals trigger the receptors, or which molecules relay their downstream effects.

    Simon's and Meyerowitz's teams have moved the field beyond that impasse, at least for CLV1, by cloning the CLV3 gene. Genetic analysis had already hinted that CLV3 interacts directly with CLV1, and the gene's sequence suggests that CLV3 encodes a small protein ligand, says Meyerowitz postdoc and lead author on the paper, Jennifer Fletcher.

    CLV3 has the hallmarks of a protein that is secreted from cells, and it is made in a different region of the meristem than CLV1. Both of these findings, plus others in the paper, suggest that CLV3 is an extracellular signal that travels to exert its effects on CLV1. The researchers have not yet shown that CLV3 binds to CLV1, and until they do, it remains possible that it helps to synthesize or somehow aids the binding of an as-yet-unknown ligand. Nevertheless, “it is very likely” that CLV3 is the ligand, says plant molecular geneticist Kiyotaka Okada of Osaka University in Japan.

    Clark's team at Ann Arbor was looking for proteins that associate with the CLV1 receptor and may trigger its effects in the cell. Research associate Amy Trotochaud searched for intracellular proteins that bind to CLV1 and found two CLV1-containing protein complexes, the larger of which apparently forms when CLV1 is turned on by ligand binding. Its levels closely mirror CLV1 activity. Trotochaud identified two other proteins in the complex besides CLV1. One, an enzyme called KAPP, was already known to bind to CLV1. It apparently inactivates the receptor by removing key phosphate groups. Clark suggests this could serve to raise the threshold for triggering the pathway or alternatively could turn off CLV1 once the pathway has been tripped.

    The other protein, identified with help from Zhenbiao Yang's group at Ohio State University in Columbus, is a good candidate for transmitting the CLV3 signal to other proteins within the plant cell. It resembles signaling proteins found in animal cells, called Rho proteins, which are involved in regulating a variety of cell activities. There's a twist, however. In animals, kinase receptors don't interact with Rho proteins, but instead transmit their signals through a related protein called Ras. Plants don't have Ras, Clark says, and the Rho-like proteins may take its place.

    With this as their beginning, researchers now hope to trace out the entire pathway controlling the size of the apical meristem. “We are moving out in both directions from the receptor,” Clark says. “What is nice about the CLAVATA story,” adds Walker, “is we are getting enough pieces together that it is going to be a lot easier to go further. You are going to be able to make good hypotheses about a lot of the other [plant] receptor kinases and their signaling pathways.”


    A Lava Lamp Model for the Deep Earth

    1. Richard A. Kerr

    Lava lamps, those glowing, roiling conversation pieces, went out with the '70s. And now they're back, not only with the '70s revival but in the thinking of geophysicists who ponder the mantle, the vast layer of viscous rock between Earth's molten iron core and the outer shell of tectonic plates. For decades researchers have debated whether the mantle is more like a giant layer cake, neatly divided at a depth of 660 kilometers into two layers that never mix, or a boiling pot of water, churning from top to bottom over the eons. Neither picture quite fits. Seismic images of sinking ocean plates piercing the 660-kilometer “barrier” have upset the layer cake model (Science, 31 January 1997, p. 613), yet geochemical data pose problems for the one-pot model by suggesting that some of Earth's ingredients are sequestered in an isolated part of the mantle.

    Now in this issue of Science (beginning on p. 1881), seismologists and modelers offer a new model that incorporates elements of each of the old ones and might best be described with a third metaphor—that of a lava lamp on low. Just as a lava lamp's heat causes its two layers to shrink and swell in complex patterns without mixing, so in this model Earth's radiogenic heat—abetted by plunging tectonic plates—causes the bottom mantle layer to vary markedly in thickness, bulging upward in some places and squeezing close to the mantle floor in others. Yet just as the colored fluid in a lava lamp's lower layer never mixes upward, a very deep rock layer, from 1700 kilometers or so down to the base of the mantle at 2900 kilometers, remains intact (see diagram).

    “This might be an answer to our dilemma,” says geochemist Albrecht Hofmann of the Max Planck Institute for Chemistry in Mainz, Germany. Others are more cautious. “Certainly, something strange is going on down there” in the deep mantle, says mineral physicist Craig Bina of Northwestern University, but he isn't sure it's best described by this scenario. Still, “it's a good model to take potshots at.”

    Data from earthquake waves probing the deep mantle provided the impetus for the new model. In their paper on page 1885, seismologists Rob van der Hilst and Hrafnkell Kárason of the Massachusetts Institute of Technology (MIT) note that above a depth of 1700 kilometers or so, the changing velocities of seismic waves—which depend on both the temperature and the composition of the rock—clearly show how cold slabs of ocean plate sink through the 660-kilometer barrier and into the middle mantle. But below 1700 kilometers, this pattern breaks up, and seismic velocities vary widely from point to point in no recognizable pattern. This suggests to van der Hilst and Kárason that the lowermost mantle represents a separate regime.

    On page 1888, seismologists Satoshi Kaneshima of the Tokyo Institute of Technology and George Helffrich of the University of Bristol in the United Kingdom offer another hint of a deep boundary. They present seismic evidence of a thin, chemically distinct slice of rock between 1400 and 1600 kilometers down that could be a very old crustal slab come to rest on the top of the lowermost mantle layer.

    If the lowermost mantle really is sealed off, it should have a different composition from shallower regions, and that's what other seismic observations reported in the last few years imply, van der Hilst and Kárason say. For example, small regions tens of kilometers across in the lowermost mantle scatter seismic waves, an effect that could only be due to compositional variations, because temperature would not vary on such a small scale. And two huge “megaplumes” rising from the base of the mantle slow seismic waves more than temperature alone could, again suggesting a different composition, perhaps richer in iron, deep in the mantle.

    To see whether the lowermost mantle could stay isolated and chemically distinct over the eons, modeler Louise Kellogg of the University of California, Davis, geophysicist Bradford Hager of MIT, and van der Hilst constructed a computer model of a mantle (reported on p. 1881) with a bottom layer 4% denser than the overlying rock. They turned on the radiogenic heat of the lowermost mantle, sent slabs descending from the top, and found that the bottom layer swelled upward in places and thinned beneath slabs. Yet the layer survived as a distinct entity for billions of model years.

    Of course, the point of a lava lamp is its mesmerizing variety of flows, and other researchers have their own versions of this in the mantle. Geophysicist Richard O'Connell of Harvard University has suggested that separate blobs of chemically distinct, more viscous rock might bob about in the lowermost mantle (Science, 23 February 1996, p. 1053). Between these blobs, cold slabs might slip all the way to the bottom of the mantle, and hot plumes might rise. “It might be hard to distinguish between a layer and blobs,” he says.

    But other researchers are far from convinced that the lowermost mantle is a distinct layer—or even a collection of blobs—that has resisted mixing. Seismologist Thorne Lay of the University of California, Santa Cruz, notes that the seismic images from below 1700 kilometers may be muddy simply because seismic data are poor at those depths. And there are seismic signs, he says, that the deepest mantle could be more dynamic than allowed by the layers in the lava lamp model. He also notes that the proposed structure will be “very difficult to detect seismically.”

    Van der Hilst agrees that testing the model will be a challenge, but “it's certainly more promising than any of the end-member models presented so far.” If the model does pan out, lava lamps are one '70s craze that may have a lasting effect.


    NASA Plans Earlier Hubble Rescue

    1. David Malakoff

    WASHINGTON, D.C.—For months, NASA has been weighing the possibility of losing its most productive science instrument, the Hubble Space Telescope, against the certainty of disrupting a carefully choreographed launch schedule involving its most important engineering mission, the international space station. Hubble won. NASA announced last week that it will mount a special space shuttle mission in October to replace failing gyroscopes that threaten to cripple the telescope's ability to do science.

    Since its launch in 1990, the $2 billion telescope has delivered a steady stream of spectacular images of the universe. But it has had to overcome its share of problems, including a now-corrected flaw in an expensive mirror that initially rendered its images nearly useless to scientists. The Hubble's current predicament involves its six gyroscopes—small, rapidly spinning wheels enclosed in liquid-filled containers that act like compasses. The navigational aids make it possible for the telescope to lock onto targets and maintain a rock-steady focus on small patches of space for long periods. In 1995 and 1998, for instance, the Hubble stared at two patches of sky for 10 consecutive days each, revealing thousands of previously unseen galaxies believed to be at the edges of the universe.

    Concerns about the Hubble's gyroscopes surfaced last October, after two of the devices—which were installed in 1993 and checked in 1997—failed within 18 months. The losses, which a NASA official termed “disquieting,” left the Hubble with four working gyroscopes. Three are needed to keep the spacecraft from entering a self-protective safe mode, in which its scientific instruments are shut down. When a third gyroscope showed signs of breaking down in late January, NASA activated an emergency response plan. “When Hubble reached the point of being one failure away from doing science, our flight rules said we must look at a mission to correct the situation,” explains John Campbell, Hubble project director at NASA's Goddard Space Flight Center in Greenbelt, Maryland.

    NASA officials faced several obstacles, however. After a series of delays, the next Hubble maintenance mission was scheduled for June 2000, and some key instruments would not be ready for an earlier launch. Moreover, three of the agency's four shuttles have been reconfigured to carry construction payloads for the space station, and their launch schedules were booked. That left Columbia as the favored vehicle for Hubble repairs. But NASA had a long-scheduled overhaul of Columbia planned for this fall—just when engineers feared the telescope might be forced out of action.

    To forestall that disaster, NASA officials decided to jury-rig the shuttle Atlantis, stretch out the planned Hubble maintenance over two missions, and juggle some space station launches. One crew of astronauts will make a 9-day service call to Hubble this fall to replace all six gyros and a computer and do a few other chores. Next year or in 2001, a second team will install an improved camera, solar panel, and new science instruments. The two visits could cost NASA up to $75 million extra over the next couple of years, officials say.

    Although putting off the installation of new instruments may delay some science, researchers support the plan. Indeed, “splitting the mission may make things easier, since the servicing mission was getting very crowded,” says astrophysicist Rodger Johnson of the University of Arizona, Tucson, lead scientist for the Hubble's Near-Infrared Camera and Multiobject Spectrometer. The instrument, which has been out of action since January due to a lack of nitrogen coolant, will now have to wait at least 6 months longer for a new supply. But, says Johnson, “it gives us more time to analyze the data we've already got.”

    The October mission will also give NASA an early opportunity to do autopsies on the dead gyros and get its first look at the performance of a new design. Engineers believe the existing gyroscopes—which cost $3 million each and are built by Allied Signal Corp. of Teterboro, New Jersey—fail when their copper electrical cables become corroded. The corrosion occurs, they believe, because compressed air was used to pack the thick fluid surrounding the spinning wheels into the devices. Oxygen from the air mixes with bromine in the fluid, catalyzing the corrosive reaction.

    To defuse that threat, the company is now using compressed nitrogen to pack the gyroscopes. But building the devices is “like crafting a fine watch: It can take years,” says Campbell. As a result, just three of the new units may be ready to be installed on the October flight; the other replacements will probably be of the older type. But Campbell is confident that the arrangement can keep Hubble pointing in the right direction until its planned demise in 2010.


    Cresson Resigns in Wake of Fraud Report

    1. Robert Koenig

    The European Union's (EU's) embattled research commissioner, Edith Cresson, submitted her resignation this week along with the other 19 EU commissioners in the wake of a scathing report by a European Parliament investigative panel that alleged cronyism and mismanagement in the EU's executive body. The panel singled out Cresson for the harshest criticism. As Science went to press, it was not yet clear whether some commissioners—all of whom are political appointees—would be renamed to their positions by their respective governments, to serve out terms that had been scheduled to expire at the end of this year. However, two sources in Brussels said it was “highly unlikely” that the French government would restore Cresson to her post, and officials are beginning to speculate about her successor.

    The commission's mass resignation—roughly equivalent to the entire U.S. federal Cabinet stepping down at the same time—comes just a month after the EU's science directorate, known as DGXII, formally launched its new 4-year, $17.6 billion research program, Framework 5. DGXII officials say it is still unclear exactly what impact, if any, the resignation of the unpopular Cresson will have on the nascent program.

    The European Parliament's 140-page report, issued on 15 March by a five-person panel of independent experts, was scathing about aspects of Cresson's management of DGXII and the education directorate, DGXXII. The report said that Cresson, a former French prime minister who has headed the directorates since early 1995, “failed to act in response to known, serious, and continuing irregularities over several years” in the 5-year, $700 million Leonardo da Vinci program to help fund vocational and professional training. Audits have accused an outside contractor of defrauding the program of millions of dollars.

    Cresson also was criticized in the report, and in earlier inquiries, for helping a French friend with dubious qualifications gain contracts to work for DGXII and, later, the EU's Joint Research Centre. However, this week's report said that no commissioner “was directly and personally involved” in fraud or received money personally. Cresson did not comment on Tuesday. Earlier, she had denied being aware of any fraud.

    In the wake of the resignations, officials in Brussels are speculating that an interim commissioner might be named to head DGXII until the new commission is chosen later this year. One Brussels insider says Swedish officials had expressed an interest in the science directorate. Another source says that Portugal's research minister, José Mariano Gago, might be considered for the permanent position. But, says another source, “it's too early to even speculate.”


    Genetic Study Shakes Up Out of Africa Theory

    1. Elizabeth Pennisi

    A new DNA analysis is casting doubt on the popular notion that all modern humans descended from one small population of ancient Africans. This “Out of Africa” theory had gained support in recent years, as a string of genetic studies suggested that a single group of ancient, sub-Saharan people left traces of their genes in modern people—implying that only this group succeeded in taking the final evolutionary leap to becoming modern humans. This new human species then migrated throughout the world, replacing populations of “archaic” humans, such as Neandertals. Or so the story goes.

    But a few anthropologists have always questioned this tale, and this week the skeptics added new data to their cause, as population geneticist Jody Hey and anthropologist Eugene Harris of Rutgers University in Piscataway, New Jersey, presented evidence that two human populations dating to at least 200,000 years ago left their genetic legacy in modern people. One group gave rise to modern Africans and the other to all non-Africans, Hey and Harris report in the 16 March Proceedings of the National Academy of Sciences.

    To remain distinct, the two ancestral populations presumably lived in different places, which fits with a competing theory of human origins, called multiregionalism, in which modern human traits evolved in various populations and then were spread around the world by small groups of migrants who interbred with other populations. “It's important evidence,” Henry Harpending, an anthropologist at the University of Utah, Salt Lake City, says of the new study. “A lot of us thought [the question] was answered.” And although Harpending, who has done genetic work supporting the Out of Africa scenario, doesn't support multiregionalism, he agrees that “if we follow the implications of [this work], then the Out of Africa hypothesis is wrong.”

    Multiple analyses of mitochondrial DNA and Y chromosome variations have bolstered the Out of Africa hypothesis. But Hey and Harris found a different pattern when they compared different versions, or haplotypes, of a gene on the X chromosome called PDHA1, which codes for a key enzyme in sugar metabolism. They gathered DNA from six French, seven Chinese, five Vietnamese, one Mongolian, six Senegalese, three African Pygmies, three members of the Khoisan tribe near Angola, and four South African Bantus.

    By assuming that the number of sequence differences between two haplotypes corresponds to the time since populations carrying them split apart, Harris and Hey built an evolutionary tree for the gene. To turn the sequence differences into an absolute measure of time, they calculated the gene's mutation rate, based on the number of differences between chimp and human PDHA1 genes, which are assumed to have split 5 million years ago. Such molecular clocks have come under fire lately (Science, 5 March, p. 1435), but the team notes that other analyses show that PDHA1's clock appears to keep steady time.
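The clock arithmetic described above is simple to sketch: calibrate a per-site substitution rate from the chimp-human comparison, then convert the differences between two haplotypes into years. The function names and all the numbers below are invented for illustration; they are not the paper's data.

```python
# Molecular-clock sketch: differences accumulate along BOTH diverging
# lineages, hence the factors of 2 in each conversion.

def substitution_rate(diffs_chimp_human, sites, divergence_years=5_000_000):
    """Per-site, per-year rate along one lineage, calibrated on the
    assumed 5-million-year chimp-human split."""
    per_site_divergence = diffs_chimp_human / sites
    return per_site_divergence / (2 * divergence_years)

def split_time(diffs_between_haplotypes, sites, rate):
    """Years since two haplotypes last shared a common ancestor."""
    per_site = diffs_between_haplotypes / sites
    return per_site / (2 * rate)

# Hypothetical counts, for illustration only:
rate = substitution_rate(diffs_chimp_human=80, sites=4200)
years = split_time(diffs_between_haplotypes=4, sites=4200, rate=rate)
print(round(years))  # → 250000
```

With these toy numbers, 4 differences date the split to 250,000 years: the sequence length cancels, leaving 4 ÷ (2 × 80) of the 10-million-year total branch length separating chimp and human.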

    The tree showed that modern variants of the gene go back to two ancestral haplotypes. One gave rise to several modern haplotypes found only among Africans. The other ancient haplotype eventually gave rise to one variant seen today in some Africans, and another variant that—some 200,000 years ago—evolved into the two haplotypes seen today in non-Africans. What's more, the team found a so-called “fixed difference” between Africans and non-Africans: At one spot in the sequence, all the Africans had one base, while all the non-Africans had a different base. This is the first time such a fixed regional difference has been found in human genes, and it “is a strong indication of an historical division” in the population, says Hey.
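A “fixed difference” of the kind the team found is easy to state computationally: it is any position in an alignment where one population carries a single base, the other population also carries a single base, and the two bases differ. A minimal sketch, using invented toy haplotypes rather than the PDHA1 data:

```python
def fixed_differences(pop_a, pop_b):
    """Return alignment positions where each population is monomorphic
    (one base across all its sequences) but for different bases."""
    positions = []
    for i in range(len(pop_a[0])):
        bases_a = {seq[i] for seq in pop_a}
        bases_b = {seq[i] for seq in pop_b}
        if len(bases_a) == 1 and len(bases_b) == 1 and bases_a != bases_b:
            positions.append(i)
    return positions

# Invented sequences: every individual in the first group has G at
# position 2, every individual in the second has T there.
africans     = ["ACGTA", "ACGTT", "ACGAA"]
non_africans = ["ACTTA", "ACTTT", "ACTAA"]
print(fixed_differences(africans, non_africans))  # → [2]
```

Note that within-population variation (as at positions 3 and 4 above) disqualifies a site, which is what makes a fixed difference such strong evidence of a long-standing population split.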

    All this offers a serious challenge to the Out of Africa hypothesis, says Rosalind Harding, a population geneticist at the Institute of Molecular Medicine in Oxford, United Kingdom. Although the previous studies may have accurately traced particular genes, a given gene may not accurately reflect a population's movement. Moreover, the new work isn't the only study questioning Out of Africa. Harding's previous work revealed ancient, non-African haplotypes in the beta globin gene. And work by Michael Hammer of the University of Arizona, Tucson, showed that a haplotype on the Y chromosome apparently arose in Asia and then moved back to Africa in an early migration (Science, 25 April 1997, p. 535). But the new study, with its finding of a fixed difference, offers more clear-cut evidence of multiple ancient populations. “It's the best study of the lot,” says anthropologist Milford Wolpoff, a longtime multiregional defender at the University of Michigan, Ann Arbor.

    But both Hey and Harding say Out of Africa isn't yet obsolete. For one, “[our study] is just a one-gene view of human history,” Hey cautions. For another, he thinks that the two ancestral populations both could have lived in Africa, close enough for some interbreeding, so that the traits that distinguish modern humans emerged in both groups. Then, perhaps 100,000 years ago, one group left Africa. Thus humans “could still be out of Africa,” Harding says.

    What's needed now, Harding and Hey say, are more studies of more genes, particularly nuclear genes, to see which scenario they match. If future work supports the tale told by the PDHA1 gene, says Harding, then in 5 years, “we could be looking back and saying this [report] was the key paper.”


    Sweden Considers More Oversight of Research

    1. Joanna Rose,
    2. Annika Nilsson*
    1. Rose and Nilsson are writers in Stockholm, Sweden.

    STOCKHOLM—Swedish scientists may soon see radical changes in the way research proposals are evaluated. Last month, a committee of parliamentarians issued a sheaf of recommendations designed to increase public oversight of research. Their proposals would subject all academic research involving human subjects or tissue to ethical review, turn the peer review of grants on its head by giving applicants anonymity while revealing the identity of reviewers, and require all graduate students to attend courses in research ethics.

    The committee was set up 20 months ago in response to several well-publicized cases of research fraud, studies indicating sex bias and nepotism in the awarding of grants in Sweden, and controversy over research such as the cloning of Dolly and experiments on human embryos. The suggested reforms aim to shore up public confidence in science by creating more of a dialogue between researchers and the public. “If we don't handle it right, the general public will lose trust in science,” says committee chair Barbro Westerholm, a liberal party parliamentarian. The report is now being sent to research organizations across Sweden for comments, after which the government will decide whether to act on its recommendations.

    The most significant proposal would require each university to establish an independent ethics committee—made up of equal numbers of laypeople and scientists—to review research involving humans or human tissue. Psychology and sociology studies, and any work that uses identifiable data from medical or scientific records, would be included. Currently, academic ethics committees review medical research conducted with public grants, but they are not required to look at privately funded studies.

    Most researchers have welcomed this proposal in principle, but there has been skirmishing over the details. For example, the report suggests that the committees' decisions should be made by consensus, but medical ethicist Birgitta Forsman of Göteborg University argues that “majority decisions are much better for controversial issues.” The risk, she says, is that “people would vote to accomplish consensus in order not to appear too difficult instead of expressing their true opinion.”

    The report suggests that if researchers are not satisfied with a committee's decision, they could seek a second opinion from another independent ethics committee. But a minority of the parliamentarians argued that the ethics committees should be given clearer ground rules and that their decisions should be legally binding. “We already have a number of committees devoted to questions of ethics in human medical research. But what we still haven't seen is the legal regulation as to what principles the committees should work from,” says lawyer Elisabeth Rynning of Uppsala University.

    To counter scientific misconduct, the report urges the government to set up a central commission to deal with individual cases. It also says researchers should be required to document and file important scientific material for at least 10 years, and they should be obliged to reveal any industrial or financial interests in their research. And—stressing that prevention is better than cure—it suggests that all graduate students be required to take courses in research ethics.

    Stellan Welin at the Center for Research Ethics in Göteborg argues that even these measures lack teeth. “It would be better if reporting of scientific misconduct was made obligatory by law,” he says. But Forsman counters that obligatory whistle-blowing would create a legal minefield: “Because there is no exact definition of what scientific misconduct consists of, it is extremely difficult to create formal legislation.”

    As for peer review, the committee took note of recent studies indicating that the allocation of grants in Sweden is biased against women, young researchers, and workers in cross-disciplinary fields. One remedy, the committee says, is to appoint more women to evaluating committees. “Both women and men should be educated in techniques for gender-neutral evaluation,” says immunologist Agnes Wold of Göteborg University. But it also has a more radical suggestion: Grant applicants should remain anonymous in the first stage of the review, while the reviewers should be named. And the results of peer review should be made publicly available so that applicants can debate the decisions with reviewers.

    This idea is likely to be controversial. “In general, my experience is that the applicant and what they have already accomplished is a better indication that interesting science will result,” says astrophysicist Bengt Gustafsson of Uppsala University. “Moreover, opening the reviews to public scrutiny will make them more conventional and polite, which is of no benefit to science.”

    Concerns like these are likely to get a thorough public airing over the next few months.


    Nabel to Head NIH Vaccine Research Center

    1. Jon Cohen

    After searching for more than a year, the National Institutes of Health (NIH) has finally found a scientist to head its nascent Vaccine Research Center (VRC): Gary Nabel, a gene therapy expert at the University of Michigan, Ann Arbor. Donna Shalala, secretary of the U.S. Department of Health and Human Services, announced on 11 March that Nabel has agreed to run the vaguely defined VRC, which will focus initially on searching for an AIDS vaccine.

    President Clinton announced that NIH would build the new center—which will have a budget of $16.5 million this year—in a landmark speech on 18 May 1997, in which he challenged scientists to develop an AIDS vaccine by 2007. The leading AIDS vaccine advocacy groups have criticized NIH for taking so long to find a suitable scientist to head the venture. But Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases, says difficult jobs take longer to fill. Fauci, who had a say in the final selection, did acknowledge, however, that NIH had offered the job to a few other scientists who turned it down.

    NIH originally wanted a candidate who had worked in industry, which Nabel has not. “If we could get an excellent person—scientifically a heavyweight—who had industrial experience, we would have gravitated toward it,” allows Fauci. Failing that, he says, “we'd rather have a heavyweight than someone from industry.” Nabel emerged as the leading candidate earlier this year (Science, 1 January, p. 17), but a deal took several months to close. It was finalized when his wife, Elizabeth, a prominent cardiologist, secured a top job at the National Heart, Lung, and Blood Institute.

    Nabel has worked on gene therapy for AIDS, therapeutic cancer vaccines, and a candidate Ebola vaccine, but he is a relative newcomer to AIDS vaccine work. He says, however, that his experience prepares him well for his new job, and he jokes that his limited AIDS vaccine research “should be an advantage because I'm not invested in old ideas that didn't work.”

    Nabel's background sits fine with Nobel laureate David Baltimore, head of NIH's AIDS Vaccine Advisory Committee—and Nabel's former postdoctoral adviser. “I have high hopes that Gary will be a great leader,” says Baltimore, noting that Nabel is an “excellent manager” who is “widely knowledgeable in immunology and virology, giving him the perfect perspective for taking on this role.”

    One aim of the VRC—an idea first proposed by NIH immunologist William Paul—is to move fundamental research results more aggressively into clinical trials. But beyond that, its agenda is still largely up in the air. “It depends a lot on how Gary wants to build it,” says Fauci. “We decided early on that we're going to put a lot of flexibility in the hands of the director.”

    A five-story building to house 100 VRC scientists is now going up at NIH's Bethesda, Maryland, campus. “The assumption is that the majority of the people are going to be brought in from the outside,” says Fauci. Nabel, who starts his new job next month, plans to keep his Michigan lab running until the building is ready for occupants, by the middle of 2000.


    A Surprising Partner for Angiostatin

    1. Marcia Barinaga

    The proteins angiostatin and endostatin have generated a lot of excitement in the past year or two because of reports that they can stop or slow cancer growth in mice, apparently by preventing the birth of new blood vessels needed to nourish growing tumors. But as potential therapies, they have a built-in drawback: Protein drugs can be fragile and hard to produce, as the pharmaceutical company Bristol-Myers Squibb acknowledged last month when it announced that it was giving up work on angiostatin. Efforts to get around those problems by developing small molecules that would mimic the effects of these proteins have been handicapped because little is known about how they act. Now, a research team at Duke University Medical Center seems to have solved part of that puzzle, at least for angiostatin.

    The team, led by Salvatore Pizzo, reports in the 16 March issue of the Proceedings of the National Academy of Sciences that angiostatin binds to a surprising target on the surface of the endothelial cells responsible for blood vessel growth: an enzyme called adenosine triphosphate (ATP) synthase, never before found on the outer membranes of normal cells. The team can't say if this is angiostatin's only target on the cells, but they have evidence that the binding is necessary for angiostatin's antigrowth effects.

    “This paper is important because of all of its implications,” says Judah Folkman of Harvard Medical School in Boston, in whose lab angiostatin was discovered. For one, it may provide an explanation for endothelial cells' ability to grow in very low oxygen environments such as tumors. ATP synthase manufactures the energy-rich molecule ATP and thus may be providing endothelial cells with an extra energy source. If angiostatin's binding to the enzyme blocks its activity, that could be the means by which the protein prevents blood vessel growth. What's more, the work suggests that small molecules tailored to block ATP synthase might mimic angiostatin's effects and be useful as anticancer drugs. The paper “is sure to be very provocative,” says ATP synthase researcher Gordon Hammes of Duke University.

    Tammy Moser, an associate researcher in Pizzo's lab, uncovered the enzyme while searching for endothelial cell proteins that bind angiostatin, on the assumption that such proteins help angiostatin stop cell growth. From a preparation of endothelial cell membranes, she fished out an angiostatin-binding protein and sent it to Peter Højrup at Odense University in Denmark. Using mass spectrometry, Højrup determined that what Moser had found was actually two proteins, the α and β subunits of ATP synthase. Because that enzyme was thought to be present only in the energy-producing organelles of higher cells, “our reaction was shock,” Pizzo recalls.

    By probing endothelial cells with antibodies that bind to the α subunit of the ATP synthase, the researchers soon confirmed, however, that it is indeed on the endothelial cell surface. They also found that the antibodies decreased angiostatin binding to the cells by more than 50%. This in turn led to an 80% decrease in angiostatin's ability to inhibit endothelial cell growth, which suggests that angiostatin works at least in part by binding to the ATP synthase. The Pizzo team recently acquired antibodies to the enzyme's β subunit and plans to see whether those block any of the remaining effects.

    The ATP synthase could play a key role in the survival of endothelial cells, which live, Folkman notes, “in the lowest oxygen of all cells.” Those in the capillary beds that drain tissues are bathed in blood that has been depleted of oxygen by the oxygen-hungry tissues. In tumors, which tend to compress their blood vessels, oxygen levels are even lower, but nevertheless, “you can see tumor vessels coursing through an environment where all the other cells are necrotic” from lack of oxygen, says Duke University tumor biologist Mark Dewhirst.

    Normally, oxygen-deprived cells have trouble synthesizing enough ATP to survive. But the ATP synthase in the endothelial cells' outer membranes might produce ATP in a process that doesn't require oxygen. During energy generation in the mitochondria, the enzyme is driven by a gradient of protons across a membrane, produced by the organelle's oxygen-burning metabolism. But in endothelial cells, the gradient could result from the lack of oxygen, which tends to acidify the inside of cells compared to the outside. Endothelial cells could also have a plentiful supply of adenosine diphosphate (ADP) for conversion to ATP, Pizzo notes, because red blood cells release lots of ADP in low-oxygen conditions.

    Hammes calls the hypothesis “speculative, but very intriguing.” ATP synthase is a large enzyme, made up of many subunits, and the Pizzo team hasn't yet shown whether the whole enzyme is present and functioning in the endothelial cell membrane. The researchers are joining forces with biochemist Richard McCarty, who studies ATP synthase at Johns Hopkins University, to explore that issue. They will also look to see how angiostatin affects the enzyme.

    If angiostatin does achieve its effects by inhibiting the enzyme, as Pizzo suspects, drug developers will likely start searching for small molecules that do the same thing. They might work as angiogenesis inhibitors that could be administered by mouth. “If I were a pharmaceutical company,” says Folkman, “that's what I would do.”


    Nurture Helps Mold Able Minds

    1. Ingrid Wickelgren

    Recent studies are showing that the environment, especially early in life, can influence a person's IQ—for better or for worse

    Each morning 20 years ago, a young mother waited with her child Susie* for the bus that would take Susie to school. Nothing unusual in that, except that Susie started when she was just 2 months old, and “school” was an experimental program at the University of North Carolina (UNC), Chapel Hill. There Susie received a wealth of interventions designed to foster her mental development—everything from bright objects dangled in front of her eyes while she was still a baby to lessons in the ABCs, color names, and counting as she became a toddler.

    Without this early start, healthy development would have been a miracle for Susie, whose mother had an IQ, or intelligence quotient, in the 40s and could not read signs or determine how much change she should get from a cashier. Her grandmother had been similarly ill-equipped for modern life. Today, however, Susie's IQ measures some 80 points higher than her mother's did. She holds two bachelor's degrees and is on her way to a master's degree in speech pathology.

    In the 1970s, when psychologist Craig Ramey, now at the University of Alabama, Birmingham, started the early-intervention project Susie attended, the tests used to assess a person's IQ were largely assumed to measure innate abilities—the product of genes, not the environment. Some researchers still think they do. For example, the authors of the 1994 book, The Bell Curve—political scientist Charles Murray of the American Enterprise Institute in Washington, D.C., and the late Harvard psychologist Richard Herrnstein—argued that genetic differences are a major reason why lower IQs are statistically more prevalent among certain races, such as African Americans. Others, however, attribute such variations to poverty and other environmental and cultural influences, such as poor schools, that might lead to intellectual impoverishment.

    But even though the issue is important—a person's IQ consistently predicts both school and job performance—the disputes have been heavy on ideology and light on evidence. With a few exceptions, such as Ramey's program for the children of poor, low-IQ mothers, direct tests of the effects of the environment on the particular aptitudes measured by intelligence tests are a recent development.

    And as in Susie's case—an admittedly dramatic one—they are showing that the environment, especially early in life, can exert a profound influence on IQ. Researchers have found that IQs can be modified, for better or worse, depending on such factors as how parents talk to their infants, the availability and quality of infant and toddler day-care programs, and the amount of schooling a person gets. “We have demonstrated that intellectual skills often believed to be innate are extremely sensitive to the environment,” says Janellen Huttenlocher, a cognitive developmental psychologist at the University of Chicago.

    Not everyone agrees that IQ is so easily tweaked. But even some who are focused on genes are enthusiastic about the attempts to tease out environmental influences on IQ. The studies “show you can make a difference” in IQ, says behavioral geneticist Robert Plomin of The Institute of Psychiatry in London. “Even something that's highly heritable may be malleable through interventions.”

    Much still remains to be learned about the nature and extent of the environmental influences on IQ. But what researchers have found so far already has important implications. Among other things, the new results provide support for the idea that racial differences in IQ are not genetically determined. The work implies that well-designed day-care programs might lower the risk of cognitive impairment and school failure in the 23% of American children who spend at least part of their childhood in poverty.

    Talking IQ

    Studies of how the environment influences one ability often measured by IQ tests, namely language, have provided some of the new evidence. Vocabulary, in particular, is a common component of IQ tests. Until the early 1990s, the wide variation in people's vocabularies was largely attributed to differences in people's inborn abilities to learn language. But then Huttenlocher and her colleagues decided to systematically study what role an environmental input—speech by a young child's mother—might play in building the child's vocabulary. To test this, Huttenlocher's team taped many hours of chatter between 22 toddlers and their mothers during the children's typical daily activities.

    The researchers did the tapings every 2 to 4 months when the children's language skills were developing most rapidly, between 16 and 26 months of age. From the tapes, the researchers detected a remarkable parallel between the size of a child's vocabulary at 26 months and the talkativeness of his or her mother. At the extremes, the mothers varied 10-fold in how much they talked, and the toddler of the most talkative mother had a vocabulary more than four times the size of the vocabulary of the child of the quietest mother.

    Of course, the correlation might result at least partly from genes for verbal ability shared by mother and child. But that's unlikely to be the primary cause, says Huttenlocher, because the moms in the study did not differ much in verbal IQ. What's more, the children were clearly picking up their vocabularies from their mothers, because the words each child used the most frequently mirrored those favored by the mother, and the mothers differed very little in the relative frequency with which they used various words.

    And now, in as-yet-unpublished work, Huttenlocher and graduate student Elina Cymerman have found something similar for speech syntax, or grammar, an aspect of language long thought to develop similarly in all people due to shared mental machinery for language. Cymerman and Huttenlocher examined speech taped from 34 parents and their 4-year-old children for the proportion of complex, multiclause sentences, such as, “I am eating because I am hungry,” versus that of simple, single-clause ones like “Pick up the truck.” They found a striking relationship between the proportion of complex sentences spoken by the parents and the proportion of such sentences uttered by their children both at home and at school.

    Although mothers and children also undoubtedly share some language genes, Huttenlocher says a syntax gene alone is unlikely to result in the close similarity her team found in the language used by a child and his or her mother. Developmental psychologist Peter Jusczyk of Johns Hopkins University in Baltimore agrees. Calling Huttenlocher's work “very interesting,” Jusczyk says it lends considerable support to the idea that early speech input can have a dramatic effect on the development of a child's language skills.

    Bringing up baby

    These results suggest that mothers or other caregivers can help infants improve their language skills, but researchers have also long wanted to know if outside intervention could help as well. In 1964, researchers began the first national preschool program for poverty-level families, Project Head Start, then just a summer program for 5-year-olds. But a 1969 Westinghouse report on the project concluded that by the time the children had completed first grade, there were no detectable differences, in IQ or school performance, between children who had participated in Head Start and children of similar background who had not.

    A budding cadre of developmental psychologists advanced an explanation that boiled down to too little, too late. Studies were hinting that the human brain develops at breakneck speed during the first years of life. To make a difference, they reasoned, one would have to intervene early, and with a vengeance. So in 1972, Ramey, his wife Sharon, and their colleagues started the Abecedarian preschool intervention program at UNC, which began at infancy and lasted 5 years. For the project, the researchers randomly assigned 111 children from poor and uneducated families in the surrounding community to either an intervention group, which received full-time, year-round day care along with medical and social-work services, or a control group getting medical and social benefits but no day care.

    The day-care program included gamelike learning episodes integrated into the day and aimed at improving language, motor, social, and cognitive skills and concepts. For example, preschoolers participated in baking projects that required them to measure amounts, group chats about objects collected from a distant place, or games that involved jumping into containers filled with materials of different textures.

    The success of the endeavor, and its ability to mold IQ, was evident by the time the children were 3 years old. The toddlers in the program showed normal IQs averaging 101, a whopping 17-point advantage over the average IQs of the controls. Follow-up results, to be reported in an upcoming issue of Applied Developmental Science, demonstrate that the effects are long-lasting. More than a decade later, at age 15, children from the intervention group still maintained an IQ advantage of five points over controls, with respective averages of 97.7 and 92.6. They also did better on standardized tests of reading and math and were less likely, by nearly a factor of 2, to have been held in the same grade in school for a second year.

    And the greatest improvements were shown by children whose mothers had particularly low IQs, those below 70. (The average IQ of the mothers in the Abecedarian group was 85.) At age 15, these children showed a 10-point IQ advantage over a group of children whose mothers had IQs of less than 70 but who did not receive intervention. Comparable results came from a similar preschool intervention study begun several years earlier, called the Milwaukee project, in which all the mothers' IQs were below 75.

    The psychologists suspect that the early stimulation leads to lasting physical changes in the brain, analogous to what William Greenough, a neuroscientist at the University of Illinois, Urbana-Champaign, has seen in studies of rats. In the late 1980s and early 1990s, Greenough's team found that the brains of rats reared in groups surrounded by interesting plastic forms and toys showed more extensive neuronal branching and more connections, or synapses, between neurons than the brains of rats reared alone in sparsely furnished cages. The rats in the enriched environments also had double the total volume of capillaries feeding individual brain neurons that the isolated rats had.

    Still, recent work by Ramey suggests that the early-intervention approach may be less effective at compensating for physical disadvantages, such as low birth weight, which is associated with depressed intelligence. In the mid-1980s, he and colleagues at medical centers in eight U.S. cities recruited 985 babies who weighed 2.5 kilograms or less when born. Children in the intervention group received weekly home visits as infants and attended preschool from ages 1 to 3.

    Early gains were impressive—at age 3, the toddlers in the intervention group had IQs up to 13.2 points higher than controls, with somewhat smaller gains for the lightest children. But over time, the benefits diminished, particularly for the lightest infants. In 1997, a team led by pediatrician Cecelia McCarton, then at Albert Einstein College of Medicine in the Bronx, New York, one of the eight sites, reported no lasting benefits at age 8 for the children who had been born extremely small, but an IQ advantage of 4.4 points for those born somewhat heavier, who also showed significantly higher scores on math and vocabulary achievement tests.

    The gains may have been less dramatic than those of the Abecedarian Project because the low-birth-weight babies may have needed more nonenvironmental interventions, such as medical attention, than the program provided. But another possible explanation, the study's investigators believe, is that the program was shorter, ending at age 3, due to limited funds. “[It] stopped before developmental change reached its apex,” Ramey suggests.

    Teaching intelligence

    Although very early intervention may be the most effective at bolstering IQ, a later source of environmental input—school—now seems to have a smaller and more gradual, but still significant, effect. Psychologists and social scientists have long known that people with higher IQs tend to have more education, but many assumed this resulted solely from the fact that smarter people tend to get farther in school. Over the years, a smattering of studies has hinted that schooling itself can also push up a person's IQ, or prevent it from falling. But nothing emerged to convince the doubters of schooling's impact until 1991, when Cornell developmental psychologist Stephen Ceci reviewed the results of dozens of studies and concluded that schooling is a strong force in forming and maintaining IQ.

    For example, he cited studies that found a high correlation between schooling and IQ after controlling for the fact that smart children tend to begin school earlier and remain there longer. Other reports showed that IQ can drop over summer vacations and that the IQs of children born to gypsies or transients declined as they missed more and more school. Still other data documented IQ drops resulting from the sudden unavailability of school, as in the Netherlands during World War II when the Nazi occupation forced the closure of many schools. Probably as a result, the children's IQs dropped by about seven points.

    Since Ceci's paper appeared, even stronger evidence for schooling's impact on IQ has emerged. In 1996, economists Derek Neal, now at the University of Wisconsin, Madison, and William Johnson of the University of Virginia, Charlottesville, found that a year of schooling can raise IQs by about 3.5 points. They came to that conclusion by comparing the scores on an IQ-like test, the Armed Forces Qualifying Test, of children whose birthdays were in the first 9 months of the year with the scores of children born in the last 3 months of the same year, who generally entered school a year later. Because the amount of schooling was determined by a chance event in the timing of birth and not by personal decisions that could reflect IQ differences, the lower IQs of students with late-year births are “entirely a function of [these students] being more likely to attend school one less year than their peers born during the first 9 months of the year,” comment Ceci and colleague Wendy Williams in a 1997 paper in American Psychologist.
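The logic of this birth-timing natural experiment can be illustrated with a toy simulation (this is not Neal and Johnson's data or method; the population parameters and noise levels below are invented for illustration). Because latent ability is assigned independently of birth month, any score gap between the groups can only come from the extra year of schooling:

```python
import random

random.seed(0)

POINTS_PER_YEAR = 3.5  # the per-year schooling effect reported by Neal and Johnson

def simulate_child():
    """One child: latent ability is independent of birth month,
    but birth month determines when schooling starts."""
    ability = random.gauss(100, 15)           # same distribution for both groups
    born_late = random.random() < 0.25        # born in the last 3 months of the year
    years_of_school = 9 if born_late else 10  # late-born children get one less year
    score = ability + POINTS_PER_YEAR * (years_of_school - 9) + random.gauss(0, 5)
    return born_late, score

children = [simulate_child() for _ in range(200_000)]
early = [s for late, s in children if not late]
late = [s for late, s in children if late]
gap = sum(early) / len(early) - sum(late) / len(late)
print(f"early-born minus late-born score gap: {gap:.2f} points")  # close to 3.5
```

Because ability is randomized across groups, the observed gap recovers the schooling effect without any confounding from smarter children staying in school longer.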

    In a similar vein, Huttenlocher, along with Chicago colleague Susan Levine and UNC's Jack Vevea, measured the rate of IQ growth in a national sample of 1500 children over 6-month periods that vary in the amount of schooling children receive. In work published this past August in Child Development, they found that the children's language, spatial, and conceptual skills improved much more sharply during the school-packed October-April period than in the April-October interval, which includes summer break and the less intense beginning and end of the school year. “It's a very clean way of showing that schooling has an effect on IQ or IQ-like tests,” says Levine. Overall, conclude Harvard sociologist Christopher Winship and economist Sanders Korenman of Baruch College in New York City in the 1997 book, Intelligence, Genes, and Success, “a year of education most likely increases IQ by somewhere between 2 and 4 points.”

    If school does influence IQ, it might help explain something called the Flynn Effect after its discoverer, political scientist James Flynn of the University of Otago in Dunedin, New Zealand. In 20 countries to date, Flynn has documented a rise of about 20 IQ points every 30-year generation—a trend obscured by the fact that the major IQ test manufacturers renorm their tests every 15 to 20 years, resetting the mean to 100. However, if everyone who took an IQ test today was scored using the norms set 50 years ago, more than 90% of them would be classified as geniuses, with IQs of about 130 or higher, depending on the test. Similarly, if our parents' or grandparents' IQ scores circa 1949 were measured using today's norms, over 90% of them would be labeled “borderline mentally retarded,” with IQs below 70 or so.
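The renorming arithmetic behind these comparisons can be sketched as follows (illustrative numbers only, using the article's figure of roughly 20 points per 30 years as a linear drift; actual Flynn Effect gains vary by test and country):

```python
# Toy illustration of test renorming under the Flynn Effect: raw performance
# drifts upward ~20 IQ points per 30 years, but test publishers periodically
# reset the observed mean back to 100.
DRIFT_PER_YEAR = 20 / 30  # about 0.67 IQ points per year

def score_on_old_norms(todays_score: float, years_since_norming: float) -> float:
    """IQ a test-taker would receive if scored against norms set
    `years_since_norming` years ago (negative = future norms)."""
    return todays_score + DRIFT_PER_YEAR * years_since_norming

# An average person today (IQ 100 on current norms), scored on 1949 norms:
print(score_on_old_norms(100, 50))   # ≈ 133
# A 1949 average person, scored against today's norms:
print(score_on_old_norms(100, -50))  # ≈ 67, below the conventional cutoff of 70
```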

    Although biological factors, such as better nutrition, could underlie the Flynn Effect, gene-pool changes are much too slow to account for it. Schooling is a prime suspect: the average length of schooling has increased enormously, from less than 8 years in the 1920s to more than 13 years today. Another possible contributor to the Flynn Effect, Ceci notes, is more cognitively advanced home environments created by better educated parents.

    None of this means, the researchers say, that a person's genetic heritage plays no role. They concede that IQ is a product of genes, but of genes that environmental forces can, over time, deftly bend this way or that, to boost or depress IQ. “The old debate—nature versus nurture—is not a constructive way to frame this issue,” says Ramey. “Instead, we must recognize that for any [genetic makeup] there are experiential contributing factors. We want to catalog which of these factors contribute to intelligence and discover” how much they contribute.

    Some researchers, however, continue to downplay environmental factors in IQ. The intervention literature resonates with “a depressingly common theme,” says Bell Curve co-author Murray. “The anecdotes are great, but every time you look at those data in detail—from the Abecedarian project, the Milwaukee project, and so on—again and again the claims of major gains become very hard to sustain.” He argues that benefits are seen only in severely deprived children and that nobody knows how to raise the IQs of children from only moderately poor backgrounds.

    But Ramey counters that although the most deprived children do benefit the most, his intervention studies show benefit to children from a broad range of backgrounds, from those whose mothers dropped out of high school to those whose mothers attended college. Ramey says that Murray dismisses the intervention data simply because “they present the most direct challenge to his central thesis” that genes largely determine a person's intelligence.

    Ideology aside, major questions remain about what specific kinds of intervention produce the biggest effects on cognitive performance—from the relative benefits of full-time versus part-time programs to the value of providing home visits by social workers. For now, the evidence strongly suggests that quality preschooling by any standard would provide an important safety net for children who might otherwise not get the mental stimulation they need. The price tag is steep—more than $10,000 per child per year. But the payback in productivity, and in reduced social support programs later in life, may be even greater, if national preschool programs accrue a benefit similar to that seen in North Carolina—something now being tested by a 200-site Early Head Start program in which infants are enrolled right after birth.

    And although researchers are only now starting to define the particular kinds of external stimuli that promote optimal intellectual development, the work may reveal clear guidelines for parents about how to increase the odds of bringing up bright, well-adjusted children. Already, science is starting to underline some of the common-sense guidelines conscientious parents have followed for years—giving children activities that challenge their minds, praising them generously, and of course, talking to them a lot.

    * This name has been changed.


    Restorers Reveal 28,000-Year-Old Artworks

    1. Michael Balter

    The Grande Grotte's extensive gallery of cave art remained hidden until 1990. Now art restorers are chipping away the calcite that kept it out of sight

    ARCY-SUR-CURE, FRANCE—Eudald Guillamet, an art restorer from the tiny European republic of Andorra, is facing the biggest challenge of his career. Lying on his back deep inside a cave just outside this Burgundy village, Guillamet looks up at a low ceiling of flat limestone, where the outline of a painted animal is just barely visible. Over many hours, he works at the image with the diamond tip of a dentist's drill, carefully removing the layers of white calcite that had long kept it hidden from modern human eyes. Finally, having left just a thin layer of calcite to protect it, Guillamet reveals the artwork in all its vivid glory: a whimsical mammoth, executed in black charcoal and red ochre paint nearly 30,000 years ago.

    The work at Arcy marks the first time archaeologists have attempted to restore cave paintings. “They are rediscovering things we didn't even know were there,” says prehistorian Randall White of New York University. Since 1997, Guillamet and Spanish art restorer Javier Chillida have revealed 16 of the more than 130 drawings, paintings, and engravings known to decorate the Grande Grotte, the largest of several caves near Arcy that were once occupied by prehistoric humans. They have brought nine of these artworks to life within the last month alone.

    Such extreme measures have not been necessary in other famous painted caves, such as Chauvet and Lascaux in southern France, where drier conditions have left spectacular bestiaries as fresh as if they were painted yesterday. But over thousands of years, constant dampness in the Grande Grotte caused a buildup of calcite—the stuff of stalactites and stalagmites—burying the paintings under mineral layers up to 4 millimeters thick. They were so well hidden that although the Grande Grotte has been visited by modern humans for more than 300 years, it was not until 1990 that anyone realized there were paintings there at all.

    Researchers are going to all this trouble because the Grande Grotte—one of only a few painted caves known in the north of France—houses one of the three oldest cave art collections yet identified anywhere. The team of archaeologists working at Arcy—led by cave art expert Dominique Baffier and archaeologist Michel Girard—has determined that the artworks were probably executed 28,000 or more years ago. Moreover, thematic similarities between the art at Arcy and that at Chauvet—specifically, the high percentage of so-called “dangerous animals,” such as mammoths, rhinos, and bears—raise speculation about how mythological thinking might have diffused between southern and northern France. The Grande Grotte “is really becoming a major cave,” says prehistorian Jean Clottes, who leads the explorations at Chauvet. “It confirms one of the important findings of Chauvet, which is that dangerous animals were prevalent” in the earliest days of cave painting.

    Just after World War II, the late French prehistorian André Leroi-Gourhan began excavating the evidence of prehistoric human occupation at Arcy and quickly established it as a site of premier importance. But Leroi-Gourhan, who spent nearly 2 decades working here, never saw the paintings. They were revealed one day in 1990, when a television crew filming in the cave turned on its bright lights and the faint outline of a wild goat suddenly sprang into view. Baffier and Girard—who had both been students of Leroi-Gourhan—immediately began work.

    Many paintings had been damaged or destroyed in 1976, when the manager of the caves—unaware of the artwork—used a high-pressure hydrochloric acid solution to remove centuries of soot deposited on the walls and ceilings by visitors' torches. But the team identified a wide variety of remaining artworks using infrared photography. They ranged from early signs of scraping and engraving to later, more sophisticated works in charcoal and red ochre. Although the calcite layer initially prevented them from taking direct samples from the charcoal drawings for radiocarbon dating, Girard excavated the cave floor and found bits of charcoal and burnt bone in association with drops of red ochre paint. Radiocarbon analysis of the charcoal gave dates up to 28,000 years, while the bone fragments were as old as 30,000 years.
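At ages like these, radiocarbon dating is working near its practical limit. A back-of-the-envelope check (generic decay physics, not the Arcy team's calibration) shows how little carbon-14 survives after 28,000 years:

```python
# Carbon-14 decays with a half-life of about 5,730 years, so a 28,000-year-old
# sample retains only a few percent of its original carbon-14 - one reason
# dating samples this old demands sensitive techniques and clean material.
HALF_LIFE = 5730.0  # years

def c14_fraction_remaining(age_years: float) -> float:
    """Fraction of the original carbon-14 left after `age_years`."""
    return 0.5 ** (age_years / HALF_LIFE)

print(f"{c14_fraction_remaining(28_000):.3f}")  # about 0.034, i.e. ~3%
```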

    Baffier believes that the newly discovered paintings may provide important clues about how artistic styles and religious myths spread throughout Europe. Prehistorians had long noticed similarities between flints and tools found in the caves at Arcy and those from excavations of caves in France's Ardèche region. Moreover, similarities in style between the paintings at Arcy and those in some Ardèche caves—for example, the way the chest and front legs of mammoths were drawn with one sweeping line—also suggested communication between the two regions.

    This hypothesis was strengthened after the discovery of Chauvet—home of the world's oldest cave art at 32,000 years (Science, 12 February, p. 920)—in the Ardèche in 1994 revealed strong thematic similarities between these two caves. “There is a great convergence … there must have been exchanges” between southern and northern France, Baffier says. White agrees that there might have been a “north-south corridor,” perhaps along France's network of rivers, that allowed diffusion of ideas between the Ardèche and Burgundy. But Clottes cautions that prehistorians must be “wary” about drawing conclusions from stylistic comparisons, which have sometimes led cave art experts astray.

    Such speculations may gain a firmer footing in a few months, when the results of direct radiocarbon dating of samples from the charcoal drawings in the Grande Grotte become available. And the team, which has just ended a 3-week exploration, plans to return to Arcy in June to breathe life into yet more animals.


    Submillimeter Astronomy Reaches New Heights

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    High in the Chilean Andes, astronomers are planning the world's loftiest observatory: a 10-kilometer-wide array of dishes to peer into the universe's cold recesses

    LLANO DE CHAJNANTOR, CHILE—At over 5000 meters above sea level, this large plateau in the Chilean Andes, near the border with Bolivia and Argentina, is not a nice place to spend time. In the thin air, breathing is laborious, quick movements produce instant dizziness, and it soon becomes hard to think straight. But this very airlessness makes the site a haven for submillimeter astronomy. Water vapor in the atmosphere easily absorbs photons at submillimeter wavelengths, sandwiched between the infrared and radio bands. So Llano de Chajnantor, which not only lies in one of the driest deserts on Earth but also is perched above 50% of the atmosphere, is the next best thing to outer space for submillimeter studies. Within 10 years, 64 12-meter radio dishes will sprout from the barren plateau, forming an array 10 kilometers across and opening a sharp eye on celestial features ranging from surface markings on Pluto to extrasolar planets to the dusty central regions of active galaxies.

    A higher plane. Weather stations on Llano de Chajnantor.


    At least that's the vision of astronomers in a nascent collaboration between the European Southern Observatory (ESO), which already runs a number of telescopes in Chile, the U.S. National Radio Astronomy Observatory (NRAO), and possibly Japanese astronomers as well. The United States and Europe have earmarked $40 million for 3 years of design and development starting this year, with the final cost estimated to be $400 million. On 7 March, ESO officials and astronomers shared their vision of a submillimeter array with journalists and guests at the site, following the inauguration of ESO's Very Large Telescope on Cerro Paranal, a few hundred kilometers to the west and 2.4 kilometers lower in altitude. Provided with personal oxygen flasks and watched closely by physicians for signs of puna—the Chilean word for altitude sickness—the visitors slowly walked around the site, where automatic instruments are continuing to assess the observing conditions. Although most visitors suffered no ill effects beyond headaches and nausea, one of the party was brought down the mountain by ambulance, while others were under close medical supervision.

    The submillimeter waves that the Chajnantor array would observe come from the coldest regions in the universe. Millimeter telescopes already operating in the United States, Europe, and Chile have studied cool nurseries of stars as well as distant, dusty galaxies. But the new array would be able to see details as small as 10 milli-arc seconds, an improvement of about 50 times over current scopes, and it would be sensitive to a wider range of wavelengths, from 10 down to 0.3 millimeters. Those capabilities should allow astronomers to see extrasolar planets directly, image protoplanetary disks around other stars in great detail, and observe extremely distant galaxies in the young universe. “It's a major step in astronomy,” says Karl Menten of Germany's Max Planck Institute for Radio Astronomy.
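The quoted resolution is consistent with the standard diffraction limit for an interferometer, roughly 1.22 times the wavelength divided by the longest baseline (this is the textbook formula, not the project's own specification):

```python
import math

# Angular resolution of an interferometer, diffraction-limited:
# theta ~ 1.22 * wavelength / baseline, converted to milli-arcseconds.
RAD_TO_MAS = 180 / math.pi * 3600 * 1000  # radians -> milli-arcseconds

def resolution_mas(wavelength_m: float, baseline_m: float) -> float:
    return 1.22 * wavelength_m / baseline_m * RAD_TO_MAS

# Shortest planned wavelength (0.3 mm) over the full 10-kilometer baseline:
print(f"{resolution_mas(0.3e-3, 10e3):.1f} mas")  # a few milli-arcseconds
```

At the shortest wavelength and widest spacing, the formula gives better than the 10 milli-arc seconds quoted for the array overall.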

    The first plans for a Large Submillimeter Array (LSA) surfaced in ESO about 10 years ago, says Roy Booth of the Onsala Space Observatory in Sweden. The Swedish-ESO Submillimeter Telescope had just been built at ESO's La Silla Observatory in Chile and, as in radio astronomy, researchers realized that to get better resolution they would have to build an array of dishes and combine their signals through a process known as interferometry. “An interferometer was the next logical thing,” says Booth.

    NRAO was also planning its own Millimeter Array (MMA) at Mauna Kea, Hawaii. Then, “a few years ago, the Americans decided that northern Chile would be a much better site [for submillimeter astronomy] than Mauna Kea,” says Booth. “Japanese radio astronomers were also planning their own interferometer. It seemed ridiculous to have two or even three arrays.” In 1997, ESO and NRAO began to discuss the possibility of merging their two projects. Later this year, the discussion will be formalized with the signing of a “memorandum of understanding”—the first step in major international collaborations. “There are very good chances that the Japanese will also join the collaboration,” says Menten.

    Originally, the NRAO planned to build a relatively small array of 40 8-meter antennas, with a total collecting area of 2000 square meters. ESO favored a 10,000-square-meter array of 60 15-meter dishes. The two partners have now compromised on 64 12-meter dishes with a collecting area of 7000 square meters. Although ESO originally tested another site in the Andes, both partners now favor Llano de Chajnantor, which lies in the Atacama desert, one of the driest regions on Earth. “The site is extremely well suited to do submillimeter observations for 50% of the time,” says Menten, because of its still air and low water vapor content. Although remote, it is within easy reach of San Pedro de Atacama, an oasis that has been continuously inhabited for almost 10,000 years by the Atacameños and Aymaras Indians and is now a village of some 1000 people, popular with adventurous tourists.
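The collecting areas quoted for the three designs follow directly from the dish counts and diameters (the text's round figures are approximations of the sums below):

```python
import math

def total_area_m2(n_dishes: int, diameter_m: float) -> float:
    """Combined collecting area of n identical circular dishes."""
    return n_dishes * math.pi * (diameter_m / 2) ** 2

print(round(total_area_m2(40, 8)))   # NRAO's MMA proposal: ~2000 m^2
print(round(total_area_m2(60, 15)))  # ESO's LSA proposal: ~10,600 m^2
print(round(total_area_m2(64, 12)))  # the compromise design: ~7200 m^2
```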

    The 64 dishes will probably be built in or near San Pedro and transported to Chajnantor by truck. At the site, their configuration will be adjustable. Every 6 months or so, workers will move them from one set of bases to another to form a compact group with a diameter of some 3 kilometers (best for picking up weak signals) or a 10-kilometer-wide ring (giving the best resolution), says Booth. Astronomers will control the array remotely from San Pedro, which is at a comfortable altitude of 2400 meters. According to Booth, “there will be no scientists on the site. We'll probably hire Chilean workers who are used to the high altitude.”

    Despite the successful collaboration so far, the partners have not yet been able to agree on a name for the array. Candidates include the Atacama Array, LAMA (Large Array for Millimeter Astronomy), COSMIC (Chajnantor Observatory Sub-Millimeter International Collaboration), and VLSA (Very Large Submillimeter Array). “Right now, we simply call it the LSA/MMA,” says Menten, “but within a few months, a better name will be chosen.”


    Forging a Link Between Biofilms and Disease

    1. Carol Potera*
    1. Carol Potera is a writer in Great Falls, Montana.

    The sticky conglomerations of bacteria known as biofilms are being linked to common human diseases ranging from tooth decay to prostatitis and kidney infections

    “United we stand, divided we fall” was a watchword of the American Revolution—with good reason. The rebels who fought off the yoke of British colonialism recognized that, although individually they might be no match for the well-equipped British soldiers, they might prevail if they stuck together. Now, scientists are coming to recognize that, to a much greater degree than previously thought, bacteria also adopt this strategy, allowing them to counter both the might of the body's immune system and the weapons that physicians use against them.

    Until recently, the slimy conglomerations of bacteria known as biofilms were recognized mostly for their propensity to coat—and corrode—pipes. But within the past few years, mounting evidence has shown that they cause a host of medical problems as well. Biofilms foul tubing and implants, such as heart valves and artificial hips, and they attack body tissues, such as the teeth and gums, the lungs, the ears, and the urogenital tract. Indeed, infectious disease experts at the Centers for Disease Control and Prevention (CDC) estimate that 65% of human bacterial infections involve biofilms.

    The heavy toll these cohesive bacterial troublemakers take on human health has caught the attention of the National Institutes of Health (NIH). It will soon begin awarding the first grants in a new coordinated biofilm program involving eight of the 23 institutes. “We want researchers to know that we recognize the importance of biofilms and [want to] bring people together to work on the problem,” says Dennis Mangan of the National Institute of Dental and Craniofacial Research (NIDCR), who is spearheading the effort. Biofilms require a coordinated attack by researchers with expertise in everything from microbiology and immunology to materials science and mathematical modeling, Mangan explains. The goal is to understand how and why biofilms form so that researchers can identify their Achilles' heel and devise better treatments, which are badly needed.

    Bacteria sequestered in biofilms are shielded from attack by the host's immune system and are often much harder to kill with antibiotics than their free-floating or “planktonic” counterparts, says William Costerton, director of the Center for Biofilm Engineering at Montana State University in Bozeman. That may be why it's so hard to rid the body permanently of some infections, such as those of the ear or urinary tract.

    Biofilms' links to diseases

    The first inklings that biofilms could be a health problem came in the mid-1960s when dental researchers Johannes Van Houte and Ronald Gibbons of the Forsyth Dental Center in Boston, Massachusetts, recognized that bacteria living in the mouth synthesize gummy adhesives that accumulate on the teeth, gums, and tongue. They proposed that bacteria attach themselves to solid surfaces in areas such as the mouth where they might otherwise be washed away. But although that helps the bacteria hang on, it can be bad for the body. In the mouth, for example, it results in dental plaque, tooth decay, and gum disease.

    Evidence supporting the idea that bacteria attach themselves to surfaces throughout the body began appearing about a decade later when Thomas Marrie of Dalhousie University in Halifax, Nova Scotia, using the then recently developed scanning electron microscope, detected a biofilm coating a heart pacemaker removed from a patient. The following year, he also saw the films creeping up urinary catheters. Such biofilms often lead to infections of the bladder or other organs.

    Since then, biofilms have been implicated in numerous infections. Perhaps the most notorious is Legionnaire's disease, named after the infection that killed 29 members of the American Legion attending a convention in Philadelphia in 1976. The culprit turned out to be chunks of biofilm containing the bacterium Legionella pneumophila that had wafted out of air conditioners.

    And in the mid-1980s, Joseph Lam of the University of Calgary in Alberta, using the transmission electron microscope, confirmed that biofilms are present in the lungs of cystic fibrosis patients. More recent studies by Nels Hoiby of the University of Copenhagen in Denmark show that biofilms containing the bacterium Pseudomonas aeruginosa clog the lungs of 80% to 90% of these patients. This eventually leads to death by respiratory failure. “Antibiotic [therapy] kills some cells, but biofilms hunkered down survive the onslaught,” says Peter Greenberg of the Cystic Fibrosis Research Center at the University of Iowa, Iowa City.

    Microbiologist Fred Quinn, chief of the Pathogenesis Laboratory at the CDC, believes that something similar may occur in tuberculosis. Sputum from patients in the late, infectious stage of the disease consists of viscous clumps of the causative bacterium Mycobacterium tuberculosis, resembling the P. aeruginosa clumps seen in cystic fibrosis patients. “Perhaps this late stage of tuberculosis is a biofilm,” surmises Quinn. Chest x-rays of tuberculosis patients also show dense areas that suggest biofilm-filled cavities in the lungs, he notes.

    Researchers now suspect that biofilms are also behind a number of additional medical conditions. For example, microscopic exams have shown that biofilms form the glue that binds struvite kidney stones. Accounting for a quarter of all kidney stones, struvite stones damage the kidney more than other types. They also tend to return after surgical removal. The biofilm connection has led to better surgical techniques, however. Urologists know they “must get every little bit [of the stones] or the biofilm recurs,” says urology specialist Curtis Nickel of Queen's University in Kingston, Ontario.

    Long-lasting biofilms, undetectable by traditional culture techniques, may also cause some common recurring infections. Using the sensitive DNA detection technique known as the polymerase chain reaction, microbiologist Garth Ehrlich of Allegheny University of the Health Sciences in Pittsburgh, Pennsylvania, found evidence for gene expression by the bacterium Haemophilus influenzae, a common cause of ear infections, in ear fluid from children weeks after they had antibiotic treatment. At the time, planktonic cultures of the fluids were negative. Ehrlich theorizes that the ear pathogen persists in biofilms.

    Further evidence for that possibility comes from Ehrlich's Allegheny colleague, Xue Wang, who detected bacterial protein synthesis in culture-negative ear fluids. “This strengthens our idea that bacteria are metabolically active in a biofilm, yet can't be cultured under planktonic conditions,” says Ehrlich. If so, he says, physicians should consider using tubes made of materials that resist bacterial growth to drain the fluids from chronically infected ears.

    Urologists have long battled urinary tract infections caused by biofilms creeping up catheters. Now they have evidence linking biofilms to other disorders of the urogenital tract. Most researchers had assumed that prostatitis, a common inflammation of the prostate gland that produces chronic pain and sexual dysfunction, isn't a bacterial disease because bacteria cannot be cultured from prostate fluid. But Nickel saw small amounts of biofilms in electron micrographs of prostate tissue removed from many prostatitis patients. The inflammation is always centered on the biofilms, he says, and contamination of just 1% of the prostate is sufficient to cause a serious problem. Nickel plans further studies to determine just how many cases of prostatitis stem from biofilms.

    What makes biofilms so tough

    As researchers have gathered evidence that biofilms are a common cause of infections, they have gained new respect for their powers of cohesion. In the early 1990s, Costerton and his Montana State colleague Zbigniew Lewandowski trained a confocal scanning laser microscope, which magnifies living cells in real time, on biofilms and saw that they are highly organized structures consisting of mushroom-shaped clumps of bacteria bound together by a carbohydrate matrix and surrounded by water channels that deliver nutrients and remove wastes.

    Microbiologists are now documenting the changes that allow bacteria to form biofilms—information that may help in designing therapies to combat the infections biofilms cause. Some of the findings, for example, point to why bacteria in biofilms are so much more resistant to antibiotics than their planktonic counterparts.

    Many common antibiotics, such as penicillins, prevent planktonic bacteria from synthesizing certain of the building blocks of their cell walls. But studies of gene expression patterns by Hongwei Yu of the University of Calgary have shown that up to 40% of the cell wall proteins of bacteria in biofilms may be different from those of their planktonic brethren. So antibiotics' targets may disappear when the organisms form biofilms. And even if they are present, antibiotics may not be able to get at them: Work by Costerton and by Nels Hoiby and his Copenhagen colleague Ami Kharazmi has shown that bacteria in biofilms secrete a sticky carbohydrate armor that can't be penetrated by antibodies or many antibiotics. And biofilm bacteria can survive without dividing, making them resistant to antibiotics that attack only dividing cells.

    One of the principal thrusts of the current work is to find ways to overcome these defenses. As NIDCR's Mangan points out, “we still don't know how to treat or prevent biofilms. That's a big reason for more research.” Some researchers are trying to identify the biochemical signals needed for biofilm formation. Last year, for example, researchers in Greenberg's lab at Iowa and David Davies at the Center for Biofilm Engineering discovered molecules, called acylhomoserine lactones, that instruct planktonic P. aeruginosa bacteria to join forces and build biofilms (Science, 10 April 1998, p. 295). These lactones turn on a series of 40 genes that tell bacterial cells to make a slime coating and remodel their outer walls. It might be possible to find drugs that interfere with that signaling.

    In a similar vein, microbiologist Richard Lamont of the University of Washington School of Dentistry in Seattle is searching for the signals involved in the formation of dental plaque. His team found that although the bacterium Porphyromonas gingivalis dominates in causing periodontal infections, another organism, Fusobacterium nucleatum, embedded in plaque, provides the anaerobic conditions needed for P. gingivalis to wreak its destruction. Lamont is now looking for the signals that enable F. nucleatum to set up housekeeping in plaque, in hopes of finding compounds that block them and make plaque inhospitable to P. gingivalis. “We don't know if acylhomoserine lactones are involved, but some signal definitely is,” Lamont says.

    Researchers are also exploring a different strategy: dispersing biofilm infections that are already established. Most likely, a signal “tells [biofilm bacteria] it's time to leave the nest,” Greenberg suggests. Once identified, these dispersal signals could be used to disrupt biofilms, rendering them more susceptible to killing by antibiotics or the immune system.

    Other researchers are probing biofilm microenvironments in hopes of coming up with more effective antibiotic regimens. Laboratory-grown Pseudomonas biofilms display marked differences in pH, chloride concentration, permeability, and oxygen supply at various locations in the biofilms. “No single antibiotic can work in all these microenvironments,” says Costerton.

    With the CDC's Quinn, Costerton plans on using microelectrodes to probe the lung lesions of rabbits with an experimental form of tuberculosis and then use that information to fine-tune the antibiotic cocktails used to treat the disease. For instance, if an antibiotic wipes out all bacteria except those thriving in pockets with a pH below 4.0, it could be combined with another that selectively kills acid-loving bacteria.

    It may even be possible to put bacteria themselves to work in combating noxious biofilms. As it turns out, not all biofilms are bad. In looking for ways to prevent urinary tract infections, microbiologist Gregor Reid of the University of Western Ontario in London stumbled on “good” biofilms containing some 50 species of bacteria in the urogenital tracts of healthy women. Urinary tract infections disrupt this healthy biofilm, which can be restored by adding specific strains of Lactobacillus bacteria.

    In a pilot study of 55 women who had had multiple urinary tract infections, Reid found that a vaginal Lactobacillus suppository, applied weekly for a year, reduced recurrences from an average of six per year to 1.6. But no drug company wants to make the suppositories. “The biopharmaceutical industry isn't ready for biological preventive therapy,” Reid says.

    But perhaps the NIH initiative will help catch the industry's attention by providing a better understanding of how to separate the bacterial militiamen from their comrades. Rather than waging an all-out war on biofilms with old weapons like antibiotics, Costerton says, “we have to learn how to manipulate their bothersome ways.”
