News this Week

Science  28 Mar 1997:
Vol. 275, Issue 5308, pp. 1876
  1. Cancer Genetics

    New Tumor Suppressor Found--Twice

    1. Elizabeth Pennisi

    Two research teams have separately homed in on a tumor suppressor gene, the loss or inactivation of which may be important for the progression of brain, prostate, and other cancers

    When Jing Li joined Ramon Parsons at Columbia University's College of Physicians and Surgeons last year to hunt for breast cancer genes, he expected the work to be intense. But when news reached the lab that another team might have cloned the same gene his lab was working on—a tumor suppressor, the inactivation of which seemed to contribute to the development of both prostate cancers and the highly malignant brain tumors known as gliomas—Li got a true taste of just how high the stakes have become as academic and corporate labs scramble to find important disease genes (see sidebar). Now, the race for this gene has ended in a dead heat.

    Tumor progression.

    When a brain tumor loses its PTEN genes, a low-grade cancer (bottom) is likely to turn highly malignant (top).

    L. LANGFORD, P. STECK/M. D. ANDERSON CANCER CENTER

    On page 1943, Li, Parsons, and their colleagues report that they have cloned the tumor suppressor, which resides on chromosome 10. And in the April issue of Nature Genetics, cell biologist Peter Steck of M. D. Anderson Cancer Center in Houston and Sean Tavtigian of the biotech firm Myriad Genetics in Salt Lake City will announce that they have found the same gene.

    Called PTEN (for phosphatase and tensin homolog deleted on chromosome 10) by the Parsons group and MMAC1 (for mutated in multiple advanced cancers 1) by Steck and his colleagues, the new gene joins some 16 other known tumor suppressors. But while it's far from the first such gene discovered, cancer researchers are enthusiastic, because the early data indicate that PTEN might rank in importance with p53, retinoblastoma, and p16, tumor suppressors that have been linked to several types of tumors. “[PTEN] seems to be a major gene in some pretty important cancers,” says Kenneth Kinzler, a molecular geneticist at Johns Hopkins University. In addition to prostate cancer, which afflicts some 317,000 men every year in the United States, and gliomas, which strike another 15,000 people, these might include breast and kidney cancer.

    But equally intriguing, says molecular biologist Stephen Friend of the Fred Hutchinson Cancer Research Center in Seattle, is the apparent mode of action of the PTEN protein. Its amino acid sequence indicates that it resembles two different types of proteins: tyrosine phosphatases, which are enzymes that remove phosphate groups from the amino acid tyrosine in other proteins, and tensin, a protein that helps connect the cell's internal skeleton of protein filaments to its external environment.

    Cancer researchers suspected that tyrosine phosphatases might be tumor suppressors because they directly counter the actions of another set of enzymes, the tyrosine kinases, which add phosphates to tyrosines and are part of the cell's growth-stimulating pathways. But there had been no direct evidence for that—until now. “This is proof of a long-held speculation that phosphatases would be important,” Friend says. In addition, the tensin resemblance suggests that PTEN might help cells stay in their normal locations within a tissue. Its loss, then, might be one of the steps that give tumor cells the ability to spread.

    Parsons began the current work about a year ago, when he joined forces with Michael Wigler of Cold Spring Harbor Laboratory on Long Island to apply a technique Wigler had developed earlier to the hunt for breast cancer genes. Called representational difference analysis (RDA), the technique can identify abnormalities in DNA by comparing the equivalent sections of DNA from normal and diseased cells (Science, 12 February 1993, p. 946). By 1996, the technique had already helped researchers home in on BRCA2, the second of two genes that cause hereditary susceptibilities to breast cancer.
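    The logic of RDA can be pictured as a subtraction: make comparable "representations" of normal and tumor DNA, then look for pieces present in one but missing from the other. The toy Python sketch below illustrates only that comparison step; the real method relies on subtractive hybridization and PCR, and none of the names here come from the actual protocol or from Wigler's software.

        # Toy illustration of the comparison at the heart of representational
        # difference analysis: fragments found in the normal "representation" but
        # missing from the tumor's point to deletions (and vice versa for gains).
        # Conceptual sketch only, not the wet-lab protocol.

        def kmers(sequence, k=8):
            """All overlapping k-base fragments of a sequence (a toy 'representation')."""
            return {sequence[i:i + k] for i in range(len(sequence) - k + 1)}

        def difference_analysis(normal_seq, tumor_seq, k=8):
            normal, tumor = kmers(normal_seq, k), kmers(tumor_seq, k)
            return {
                "lost_in_tumor": normal - tumor,    # candidate deletions (e.g., a lost tumor suppressor)
                "gained_in_tumor": tumor - normal,  # candidate amplifications or rearrangements
            }

        if __name__ == "__main__":
            normal = "ATGCGTACGTTAGCCGATCGGATCCTAGCTAGGCTAACGT"
            tumor = "ATGCGTACGTTAGCCGATCG" + "GCTAGGCTAACGT"  # internal block deleted
            print(sorted(difference_analysis(normal, tumor)["lost_in_tumor"]))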

    Many of the gene changes that lead to cancer are not inherited, however, but simply develop in specific cells, like those in the breast epithelia. To find such noninherited gene changes, Wigler had applied his method to cells from 12 primary breast tumors, identifying about a dozen possibilities for such cancer-causing gene changes, including a deletion on chromosome 10. Parsons was particularly interested in following up on that observation. Chromosome 10 is completely or partially missing in a variety of cancers, especially the aggressive brain tumors called gliomas—a prime indication that it carries a tumor suppressor. Researchers also suspected that it carries the gene responsible for a rare inherited disorder called Cowden disease, whose victims are predisposed to breast and other tumors.

    To narrow down the location of the suspected tumor suppressor, Wigler and the Parsons team examined cells from 65 human breast cancers to see whether their DNA lacked any of nine genetic markers located in the part of the chromosome that the RDA had identified as abnormal. One marker was absent in two of those samples, and when it also proved to be missing in some prostate and glioblastoma cell lines, Parsons and Wigler knew they were closing in on the gene. By October 1996, they were ready to try a technique called exon trapping to pull it out. This involves looking for messenger RNAs made by the deleted region, then using them to find the corresponding exons, which are the protein-coding regions of a gene.

    They found two exons. To get the rest of the gene, the group consulted the GenBank database, which includes not only the sequences of full genes but also the short DNA pieces called expressed sequence tags (ESTs). More than a dozen ESTs in the database matched different parts of the exons. Aided by a computer program called UNIGENE, which groups ESTs that seem to be part of the same gene, the researchers were then able to piece together the whole gene, using the ESTs as guides for sequencing it.
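    In spirit, that grouping step works like the sketch below: ESTs that share enough sequence are assumed to come from the same gene and are pooled into one cluster, which can then guide sequencing of the full gene. This is a minimal illustration of the idea, not the actual UNIGENE algorithm.

        # Minimal sketch of UNIGENE-style clustering: ESTs that share an exact
        # stretch of sequence are treated as deriving from the same gene and are
        # merged into one group (connected components under a crude overlap test).

        from itertools import combinations

        def shares_overlap(a, b, k=12):
            """True if two ESTs share any exact k-base stretch."""
            kmers_a = {a[i:i + k] for i in range(len(a) - k + 1)}
            return any(b[i:i + k] in kmers_a for i in range(len(b) - k + 1))

        def cluster_ests(ests, k=12):
            """Group ESTs into clusters of mutually overlapping sequences."""
            parent = list(range(len(ests)))          # union-find over EST indices
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            for i, j in combinations(range(len(ests)), 2):
                if shares_overlap(ests[i], ests[j], k):
                    parent[find(i)] = find(j)
            groups = {}
            for i, est in enumerate(ests):
                groups.setdefault(find(i), []).append(est)
            return list(groups.values())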

    In contrast to Parsons's 1-year blitz for the chromosome 10 gene, Steck's progress has been slow and steady, and he began his quest in gliomas rather than breast cancers. To try to find the crucial chromosome 10 gene that is missing in many of these brain cancers, Steck and his colleagues began adding progressively smaller pieces of the chromosome back to cultured glioma cells. The idea was to demonstrate that one or more genes on the chromosome could reverse some of the cancerous changes in the cells, and then to narrow the search for those genes to ever smaller pieces of the chromosome.

    This approach got the researchers to within 5 million bases of the gene. To close in further, they determined whether glioma samples lacked a genetic marker located within that region, and by last summer had found four samples in which both copies of chromosome 10 were missing that marker. There was a 75,000-base pair overlap in the missing DNA in these samples—a gap that presumably extended over their tumor suppressor.

    However, the researchers still had a lot of DNA to sort through, and Steck thought it might be too big a project for his three-person lab group. He then turned for help to Myriad, a company experienced in locating and sequencing genes, having done so for BRCA1, BRCA2, and p16. In November, Myriad's Sean Tavtigian stepped in; with Steck, he completed the hunt for the gene—all in about a month, Tavtigian says, using basically the same approach as the Parsons group. They also found signs that the gene is involved in some kidney, breast, and prostate cancers, as well as in gliomas.

    Although this team called the gene MMAC1, its sequence shows that it is the same as PTEN. “We started from two different places for two different reasons and got to the same place at the same time,” says Steck, who was unaware of the Parsons effort until a few months ago. “We confirm each other's work.”

    Both groups also attest to the importance of the gene. The Parsons group, for example, confirmed the Steck group's evidence that the gene is missing in many gliomas, as well as in some breast cancers. Their results hint that the gene is also important for prostate cancer. It was missing or altered, for example, in all four samples of the cancer that the Parsons group studied. Indeed, Johns Hopkins's Kinzler says, “there have been other candidate [prostate cancer] genes proposed, but I think this is the real McCoy.” And he predicts, “the chances are, it's going to be involved in other cancers.”

    Researchers still have a lot to do to find out just how the gene's loss could contribute to these cancers, although its sequence provides some important clues. As a phosphatase, the PTEN protein may counteract the work of the growth-stimulating kinases, which can help make cells cancerous when they are mutated into an overactive form. The researchers have not yet shown directly that the protein is a phosphatase, however, nor have they identified any possible targets for its phosphate-removing activity.

    The cytoskeletal connection might also help explain the abnormal growth of cancer cells. Because of its links to the protein matrix outside the cell, the cytoskeleton is thought to be part of the system that helps cells know that they are in contact with neighboring cells. Normal cells tend to stop multiplying when they encounter their neighbors, but cancer cells often keep dividing, as if they never got the message to stop. PTEN's absence might be what blocks the message. PTEN may also somehow help anchor cells, in which case its loss may enable a cell to metastasize. “If [PTEN] does have a role in cell motility or cell structure, that might be quite interesting,” says Eric Fearon, a cancer geneticist at the University of Michigan, Ann Arbor. How the protein's proposed roles as a phosphatase and a cytoskeletal protein might relate to each other is unclear, however.

    Even before researchers know how the gene works, it may prove useful to clinicians. Tavtigian points out that if this gene is the one mutated in Cowden disease, it could form the basis of a prenatal diagnostic test. And if the loss of the gene helps a cancer invade other tissues, then PTEN's status may help oncologists predict how malignant a glioma or prostate tumor will be—information that could help clinicians decide how aggressive they should be with surgery, chemotherapy, or other treatments. “If you had a molecular marker that could aid a clinician in that decision, that would be very significant,” Steck suggests.

    And then there's the possibility that the PTEN work might provide guides to better cancer therapies by leading researchers to the protein it normally dephosphorylates, thereby putting the brakes on cell growth. A drug that either blocks the phosphorylation of that protein or removes phosphates from it might cure a cell of any cancerous tendencies.

    Given all this potential, Li's life will not likely slow down any time soon, Parsons notes: “I think it's going to continue to be crazy here for at least another 6 months.”

  2. Science Publishing

    Prepaper Publicity Ignites Race to Publish

    1. Elizabeth Pennisi

    In mid-January, Ramon Parsons received a phone call that is every researcher's worst fear. Just weeks earlier, the molecular biologist, who works at Columbia University's College of Physicians and Surgeons, and his research associate Jing Li had finally nailed the tumor suppressor they had been hunting for the past year. It was potentially a major prize, but they still had to verify that the gene was indeed a tumor suppressor—one whose loss or inactivation can lead to cancer development—and determine the range of tumors with which it might be involved.

    But just as they were anticipating the fruits of success, one of their collaborators, molecular biologist Michael Wigler of Cold Spring Harbor Laboratory on Long Island, called to say that he had just read in a biotechnology newsletter that Myriad Genetics had also found a tumor suppressor. There was scant information in the press release, but what was there set off alarms. Myriad had linked its gene to malignant brain tumors called gliomas—just as Parsons had. The two genes, he feared, were the same.

    What happened over the next few weeks, as both the Parsons and Myriad groups rushed to get papers in press and file patents on the gene, is testimony to how complex life has become for researchers tracking down disease genes. With industrial collaborations on the rise, the competition has grown more intense, and patenting and stock-market worries are having an ever greater influence on how scientists go about their business.

    The immediate cause of Parsons's panic was a press release that Myriad put out on 22 January. This release simply highlighted the gene's role in gliomas, without mentioning its chromosomal location, or giving any information about its protein product or other cancers the gene might be involved in. Also missing was any indication that the work had been published, or was at least submitted for publication. “I thought it was bizarre, because they were announcing a discovery without publishing it,” Parsons recalls.

    Mark Skolnick, Myriad's vice president of research, says the company put out the release to guard against possible charges of insider trading by the U.S. Securities and Exchange Commission. Sean Tavtigian of Myriad notes that the company was about to enter one of the quarterly periods during which employees with stock options are allowed to trade their Myriad stock, and wanted to make sure that the public knew what the employees knew—that it had the glioma gene in hand—during that period. “We have to be uniform in our release of information,” says Skolnick. “There's a potential liability if information gets out in an uneven fashion.”

    But when the release was mentioned in the biotech newsletter, Bioworld, it also alerted Parsons to the competition at Myriad. “From reading the press release, [it seemed] we were farther along than they were,” says Parsons. Nevertheless, he worried that if the two groups had converged on the same gene, this announcement might jeopardize his chance to get credit for the discovery. “Do you know how hard it is to publish in a small lab if you're second?” Parsons asks.

    Li and two graduate students worked around the clock for the next 4 days screening various tumor samples, mostly primary brain tumors, to verify that the gene is indeed missing or aberrant, as would be expected for a tumor suppressor (see main text). But they skipped some of the tests they had planned to show that the gene is aberrant in more kinds of tumors, and also put off filing a patent on the gene until the paper was submitted. “My interest was to get a paper out the door,” Parsons says. Indeed, on 31 January, as soon as the paper was finished, Li flew to Washington, D.C., to hand-deliver it to Science. “It was pretty crazy,” he says.

    Meanwhile, Myriad's academic collaborator on the project, Peter Steck of the M. D. Anderson Cancer Center in Houston, found himself caught up in Myriad's commercial priorities. “The first emphasis was patenting,” he explains. In addition, he was racing to meet a renewal deadline for his grant. He didn't get to his paper until later, even though he began hearing through the “industrial grapevine,” as Steck calls it, that they had competition. The paper was submitted in late February to Nature Genetics, accepted within a week, and published 3 weeks later, technically 4 days after Parsons's report.

    But while Parsons beat Steck and Myriad to publication, albeit by a narrow margin, there's no telling yet which group will wind up with the patent. And perhaps neither will. Both groups' searches led them to the GenBank computer database of gene sequences, which, it turned out, already contained several small DNA fragments, called expressed sequence tags (ESTs), that fell inside the gene. A computer program had even grouped those ESTs into a tentative gene, which contained a sequence indicating that its protein product is a dephosphorylating enzyme. Myriad's Tavtigian points out that this could mean that a company that has generated many ESTs—Human Genome Sciences in Rockville, Maryland—may have beaten both Steck and Parsons to the Patent Office. That company declined to comment on the possibility.

  3. Materials Science

    Shape-Changing Crystals Get Shiftier

    1. Robert F. Service

    A talented family of materials has gained some even more gifted members. So-called piezoelectric crystals have the unique ability to swell or shrink when zapped with electricity, as well as give off a jolt of juice themselves when compressed or pulled apart. Engineers have exploited this trait for decades to convert mechanical energy to electricity and back again in applications ranging from phonograph needles to telephone speakers.

    Crystal growth.

    A weak field displaces atoms toward the corners of the unit cells, but a stronger field rearranges the lattice.

    Source: T. Shrout and S.-E. Park

    Now, a pair of researchers from Pennsylvania State University has bred new piezoelectric wunderkinds, some of which display an effect 10 times greater than that of current family members. A paper by the researchers, materials scientists Thomas Shrout and Seung-Eek Park, is scheduled to appear this spring in the inaugural issue of the journal Materials Research Innovations, but early word of the new work is already turning a few heads. “It's an exciting breakthrough,” says Eric Cross, another piezoelectric materials expert at Penn State, who is not affiliated with the project. “Improvements by a factor of 10 are not easy to come by in a field that's 50 years old and considered mature.” If the materials are commercialized, as Cross and others believe they will be, they could usher in a new generation of piezoelectric devices that would improve everything from the resolution of ultrasound machines to the range of sonar listening devices.

    Piezoelectric materials owe their abilities largely to the asymmetrical arrangement of positively and negatively charged atoms in their crystal structure. The positive and negative charges balance out in each of the crystal's unit cells—its basic repeating units—but the positive charges, for instance, may be weighted toward the top of each cell. An electric field can displace the charges even farther, which distorts the overall shape of the unit cell and of the crystal as a whole. The process can also run in reverse: Squeezing or stretching the material shifts the charges relative to each other, redistributing electric charge around the surface of the crystal, which can produce a small electric current.

    The usual showcase for these properties is a cheap ceramic material called PZT, containing millions of crystalline grains in different orientations. PZT, which is composed primarily of lead, zirconium, titanium, and oxygen, can deform by as much as 0.17% in a strong applied field. To boost this shape-shifting ability, researchers have tried to grow single crystals of PZT, in which all the unit cells would line up in the same direction. Their contributions to the piezoelectric effect would also line up, enhancing it. But because PZT's components tend to separate during processing, the ceramic is extremely difficult to grow as a single crystal, says Shrout.
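    Those figures can be tied together with the textbook relation for the converse piezoelectric effect, in which the strain S produced along the poling axis is the product of the applied field E and the piezoelectric coefficient d_33. The numbers below are representative values for PZT ceramics, assumed here for illustration rather than taken from the paper.

        S = d_{33} E,
        \qquad
        S \approx (500\times10^{-12}\,\mathrm{m/V})\,(3\times10^{6}\,\mathrm{V/m})
          = 1.5\times10^{-3} \approx 0.15\%

    That is the same order as the 0.17% maximum deformation quoted for PZT; a tenfold larger response in the new single crystals would correspond to strains above 1%.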

    To coax the material into forming single crystals, Shrout and Park tried varying its composition. They settled on a couple of different mixtures, such as a combination of lead, zinc, and niobium spiked with varying amounts of lead titanate (PT). The researchers found that a small admixture of PT—less than 9%—yielded materials that not only grew into single crystals, but also ended up with piezoelectric responses far greater than they had expected.

    Just why that is, “we still don't know for sure,” says Shrout. But he and Park believe that at least part of the enhancement is due to the fact that an electric field applied to the new materials does more than just shift a few atoms around in the unit cell, as in PZT: “We think it causes the whole crystalline lattice structure to change from one form to another,” says Shrout. The changed crystal structure, in turn, frees individual atoms to respond more strongly to the field, increasing the overall distortion of the material. Likewise, a mechanical distortion probably produces a similar lattice shift, enabling the material to generate more current than standard PZT.

    Whatever the reason for the effect, it's likely to be very useful, says Robert Newnham, another piezoelectricity expert at Penn State. The new crystals will undoubtedly cost more than ceramics like PZT, says Park, because growing single crystals is a slow and painstaking process. But he adds that he and Shrout are working on ways to speed it up. If they succeed, the new piezoelectric wunderkinds could grow up to live expansive lives indeed.

  4. Endocrine Disrupters

    Synergy Paper Questioned at Toxicology Meeting

    1. Jocelyn Kaiser

    CINCINNATI—For a modest set of test-tube experiments, a study published in Science last summer made quite a splash. It was part of a rush of work looking into the controversial hypothesis that hormonelike chemicals in the environment could be contributing to cancer and reproductive problems in humans. The study found that in cultures of yeast cells specially fitted with receptors for the hormone estrogen, pairs of certain pesticides appeared to be up to 1600 times more potent at triggering an estrogenlike response than was either chemical alone.

    This stunning synergy—reported by Steven Arnold and others in the laboratory of endocrinologist John McLachlan at Tulane University in New Orleans (Science, 7 June 1996, p. 1489)—didn't just intrigue basic scientists studying steroid hormones. It sent a chill through the Environmental Protection Agency (EPA) and other regulatory agencies, which now faced the possibility that all their safety tests of single chemicals might be suspect. And some observers say it helped give a final nudge to provisions in federal pesticide and drinking-water laws passed last August, requiring that EPA screen chemicals for estrogenic effects. “I never saw a paper have such impact,” says one federal scientist close to the issue.

    In the months since, however, the findings have been getting attention for a different reason: Two studies involving five laboratories have tested the same chemicals for synergy in yeast and mammalian cells, and they have come up empty-handed. The ensuing debate has deepened rifts among scientists already sharply divided over the risks of endocrine disrupters, as was evident this month at a meeting here of the Society of Toxicology, where new studies failing to find synergy were presented. Although some scientists say the results still merit further study, others doubt that the findings will hold up or have already written them off as unlikely to be relevant to animals or people.

    The paper caused a hubbub because it seemed to address a central criticism of the endocrine disrupter hypothesis: that the suspected chemicals appear to be far less biologically active than natural estrogens in the body. The synergy results suggested that, in combination, endocrine disrupters might not be so weak after all. The McLachlan lab used an ingenious piece of genetic engineering: They manipulated yeast cells to express human estrogen receptors and some related genes—so that when estrogenic substances docked onto the receptors, the cells made a protein that turned them blue. They then tested combinations of the pesticides dieldrin, endosulfan, toxaphene, and chlordane (all but endosulfan have been banned but persist in the environment). Singly, these chemicals bound only weakly to the estrogen receptors. But when two pesticides were tested together, their estrogenic activity shot up 160- to 1600-fold. The team also found fivefold synergy with polychlorinated biphenyls (PCBs), using transgenic human endometrial cells. The results, says John Sumpter of Brunel University in the United Kingdom, appear to be “immensely important. It has ramifications for not just endocrine disrupters but the whole spectrum of how you test and assess chemicals.”

    But last fall, the synergy theory began to fray at the edges, as other groups tried to repeat or extend the findings. At least five teams have now looked for synergy, using the same chemicals in 10 standard endocrine test systems. These range from transgenic yeast cells to breast cancer cells (which proliferate when treated with estrogen) to uterine assays—in which young female rats are injected with a suspected estrogen to see if it causes the animal's uterus to grow. The effects of the chemical mixtures, reported as Technical Comments in Science (17 January, p. 405) and in Nature (6 February, p. 494), were merely additive in every case, according to the investigators.
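    "Merely additive" has a standard quantitative meaning in mixture toxicology, often expressed as the dose-addition (Loewe additivity) criterion sketched below, where c_1 and c_2 are the concentrations of the two chemicals in a mixture producing a given estrogenic response and EC_1 and EC_2 are the concentrations of each chemical alone producing that same response. The reports do not say whether each team used exactly this model, so take it as one common formalization rather than their specific analysis.

        \frac{c_1}{EC_1} + \frac{c_2}{EC_2} = 1 \quad \text{(dose addition)};
        \qquad
        \frac{c_1}{EC_1} + \frac{c_2}{EC_2} < 1 \ \text{indicates synergy},
        \qquad
        > 1 \ \text{indicates antagonism}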

    McLachlan maintains that a key difference may be that his custom-made yeast cells had very low levels of receptors—only 500 per cell. Because the cells of a developing fetus also have few receptors, he says, the finding still could be relevant to humans. McLachlan says he and collaborators are now looking for synergistic effects in newborn mice. But two other groups—William Kelce's laboratory at the EPA in Research Triangle Park, North Carolina, and Steve Safe's group at Texas A&M University in College Station—reported at the toxicology meeting that they have tried the experiments in yeast and mammalian cells with low levels of receptors, and they have found no synergism. McLachlan wasn't at the meeting, but after viewing portions of the session posters, he said, “I can't explain the difference.”

    Some scientists, such as Sumpter, say that even if synergy happens only in one cell system under specific conditions, the explanation could be “potentially a very interesting one.” Postdoc Tom Wiese, in Kelce's lab, says he thinks it's important to design future experiments so that synergy could be detected. For others, the possibility may not be worth pursuing. “If you can't reproduce it [the Tulane group's results], you can't ask questions or extend it any further,” says Kenneth Korach of the National Institute of Environmental Health Sciences, a co-author on the January Technical Comment in Science.

    At the meeting, there was speculation that contaminated reagents might have given a false result. Recently, McLachlan told Science that in his lab's latest experiments, synergy “[has] not always been at the same magnitude.” He declined, however, to say what level of synergy the group is now seeing, because “we're doing the additional studies right now.” McLachlan, who some researchers say has been slow to share his materials, has recently given samples of his yeast cells to three other groups, and they hope to have results within a few weeks.

    “I think it's incredibly important that we resolve the issue,” says Duke University pharmacologist Donald McDonnell, also an author of the Technical Comment. “This is not meant to be a witch hunt, but this issue got so much public press.” Some researchers even say that if, in this new round of studies, the Tulane group's finding of 1000-fold or more synergy turns out to be much lower, the team should issue a formal retraction.

    To some of McLachlan's colleagues, the furious debate going on among toxicologists is a healthy sign. “If they truly got these results, … in a way you have to admire them for going with it,” says John Gierthy of the Wadsworth Center at the New York State Department of Health in Albany. “I just see this as how science works. We're trying to find the truth through debate.”

  5. Evolution

    Predator-Free Guppies Take an Evolutionary Leap Forward

    1. Virginia Morell

    For a small population of Trinidadian guppies, meeting evolutionary biologist David Reznick in 1981 was like winning the lotto of life. The University of California, Riverside, researcher scooped the lucky fish from a waterfall pool brimming with predators, then released them upstream in a pool where only one enemy species lurked. The guppies adjusted to the perks of life with few predators by growing bigger, living longer, and having fewer and bigger offspring. Now, a new analysis of this 11-year study offers intriguing insights into the speed of such evolutionary change.

    Living in the fast lane.

    Guppies transplanted from this predator-filled pool rapidly evolved to larger sizes.

    F. H. Rodd

    Although natural selection is often thought of as a slow pruning process, Reznick and his colleagues find that it can shape a population as fast as a chain saw can rip through a sapling. On page 1934, they report that the guppies adapted to their new environment in a mere 4 years—a rate of change some 10,000 to 10 million times faster than the average rates determined from the fossil record. Although lab studies have shown similarly fast rates of natural selection, this is one of the few examples from a natural environment. “That's why this study is so marvelous,” says Dolph Schluter, an evolutionary ecologist at Canada's University of British Columbia in Vancouver. Reznick has “brought his evolutionary experiment out of the lab and into the wild.”

    And Reznick claims that the work offers clues to larger evolutionary patterns such as that of stasis (a lengthy period when organisms don't appear to evolve) punctuated by rapid bursts of change. Not everyone agrees, but some biologists support the idea. “There's always that question,” says Thomas B. Smith, an evolutionary biologist at San Francisco State University. “Can we get some inkling about what happened in the past by observing what's going on today? Microevolutionary studies such as this show that we can.”

    The current work starts with research that Reznick and other colleagues published in 1990, showing that guppies evolve in size, reproductive strategies, and other traits in response to predators in the wild. For example, in Trinidad's Aripo River, a species of cichlid fish feeds primarily on relatively large—2 to 3 centimeter—sexually mature guppies; in nearby tributaries, killifish prefer tender young fish. In response to these different pressures, the guppies have evolved two different life-history strategies. Those in the Aripo River reach sexual maturity at an early age and bear many young, while the guppies in the tributaries do just the opposite. Reznick's team proved that predation caused this pattern by transplanting guppies from the Aripo River to a tributary that happened to be empty of guppies and where killifish were the only predators. Within 11 years, the transplanted guppies had switched strategies, delaying their sexual maturity and living longer.

    Building on that study, Reznick, Frank and Ruth Shaw from the University of Minnesota, and F. Helen Rodd from the University of California, Davis, now analyze their data to see just how fast those changes occurred. In as little as 4 years, male guppies in the predator-free tributary were already detectably larger and older at maturity when compared with the control population; 7 years later, females were also noticeably larger and older. The team then used standard quantitative genetics to determine the rate of evolution for these genetic changes.

    Their analysis showed that in only 4 years, the male guppies increased 15% in weight—roughly, from that of a dime to that of a penny. Biologists measure the speed of evolutionary change in darwins, the proportional change in a trait per million years, and the guppies evolved at a rate between 3700 and 45,000 darwins. For comparison, artificial selection experiments on mice show rates of up to 200,000 darwins, while most rates measured over millions of years in the fossil record are only one-tenth to 1.0 darwin. “It's further proof that evolution can be very, very fast and dynamic,” says Philip Gingerich, a paleontologist at the University of Michigan, Ann Arbor, who has analyzed evolutionary rates. “It can happen on a time scale that's as short as one generation—from us to our kids.”
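    As a rough check on those figures, a rate in darwins is the change in the natural logarithm of a trait divided by the elapsed time in millions of years. Taking the 15% weight gain over 4 years as the trait change gives:

        r = \frac{\ln x_2 - \ln x_1}{\Delta t\ \text{(My)}}
          = \frac{\ln(1.15)}{4\times10^{-6}}
          \approx 3.5\times10^{4}\ \text{darwins}

    That back-of-the-envelope value of roughly 35,000 darwins sits comfortably inside the reported range of 3700 to 45,000 darwins.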

    Because the guppies evolved at rates four to seven orders of magnitude greater than those estimated from the fossil record, Reznick suggests that selection on such short time scales is powerful enough to be behind major evolutionary changes. Indeed, he says the study demonstrates that it is “possible to reconcile large-scale evolutionary phenomena, such as stasis, with what we can see in our lifetimes.”

    Reznick notes that his study and one by Princeton University's Peter and Rosemary Grant show two different ways that organisms can attain stasis. In their study, the Grants found that Galápagos Island finches also adapt rapidly to sudden changes in their environment, such as variations in rainfall. But because years of drought favor larger, big-billed finches while rainy weather favors smaller, small-beaked finches, the net long-term effect was that “nothing seemed to change,” says Reznick. The guppies also quickly entered a new period of stasis, but in a different way: Because their new environment was constant, they stopped evolving when they reached a new optimum size and age. Unless faced with some fresh selective pressure, the transplanted guppy population will remain as it is now—in stasis.

    To some, Reznick's study has implications for reading the fossil record. For example, paleontologists sometimes interpret dramatic changes in size as a sign of the birth of a new species. But, notes Brian Charlesworth, an evolutionary geneticist at the University of Chicago, “papers such as this demonstrate that is not [necessarily] true. As far as I know, these are still guppies.”

    Others aren't so sure that such short-term studies have any bearing on fossils. Particularly irksome, they say, is Reznick's comparison of his decade-long study with that of the millennia-long fossil record. Says Cornell University evolutionary biologist Amy McCune, “His study is fine, but his conclusions drive me crazy. He's overstepped the boundaries of what it means.” Evolutionary rates from the fossil record are necessarily lowered because they average periods of rapid change with periods of slow change or stasis. So, it is not meaningful to compare these rates with the flash-in-the-pan 11 years of guppy changes, McCune says. The data also provide little understanding of such phenomena as bursts of new species, she notes. That would take studies of speciation itself, rather than research on change within an interbreeding group of guppies.

    Still, this evolution-in-the-wild study has fired the imaginations of other biologists bent on studying evolution in action. British Columbia's Schluter, for one, is already designing a similar project. “We're entering a new age of these … experiments,” he says, “and from them, we'll gain a deeper understanding of evolutionary patterns.”

  6. Earth Science

    ‘Real-Time’ Oceanography Adapts to Sea Changes

    1. Steve Nadis

    For a month last summer, a flotilla of robotic submersibles, sensors, acoustic transmitters and receivers, and other instruments probed the Haro Strait, a narrow, freighter-clogged channel between Vancouver Island and Washington state. The immediate goal of this experiment was to trace a so-called tidal front in the strait—an invisible barrier separating fresh water discharged by rivers from the open ocean's salt water—and to learn how the two water masses mix across the front. But the Haro Strait experiment was also a test of an approach that some researchers call a new paradigm for working in the ocean.

    Oceanographic experiments have traditionally been like space probes: vast data-collecting expeditions with limited flexibility. Ship schedules are usually too rigid and data analysis too slow for oceanographers to modify their plans partway through an experiment to follow a lead suggested by new data. “Normally, we take our measurements and are long gone by the time we're able to analyze the data,” explains Henrik Schmidt, an ocean engineer at the Massachusetts Institute of Technology (MIT). The Haro Strait project was one of the first large-scale scientific studies to employ “adaptive sampling,” or “real-time oceanography.” This approach allows oceanographers to change their plans in midstream—or, in this case, midocean.

    The key was a feedback loop between the instruments positioned in the strait and a data-analysis effort onshore. A computer simulation of the tidal front, developed by Allan Robinson and his Harvard University colleagues, crunched through each day's results and made forecasts about where the front, and hence the “action,” would be found in the strait the next day. And two robots called autonomous underwater vehicles, or AUVs (see sidebar), provided a rapid-response capability: They could be reprogrammed each evening to explore new parts of the strait. “As the AUVs started turning in data, we could send them to places where the most noteworthy things were happening,” says Schmidt, who was one of the lead investigators.

    Although final results aren't in, the researchers say this strategy has yielded the most refined view yet of how water masses interact in the Haro Strait front. “It was [also] a demonstration that feedback can work,” says Robinson, and a prelude to more ambitious applications of the strategy. Adaptive sampling, says James Miller, an ocean engineer at the University of Rhode Island, has shown that it “[can be] a really effective observational technique, especially for studying complicated coastal environments.”

    The strategy is not entirely new. Robinson and his Harvard colleagues began running small-scale experiments guided by real-time forecasts as early as the 1980s, and since then his team has used the technique in a dozen or so spots around the world—from Bermuda to Iceland and California to Sicily. But these earlier experiments were done with ships, which have response times on the order of days rather than the minutes to hours of AUVs. And few tests have taken place in environments as dynamic and fast-changing as Haro Strait.

    The details of how incoming seawater collides with outgoing fresh water at a tidal front have never been pinned down, but they are crucial to understanding the biological richness of tidal regions, as well as fighting pollution there. David Farmer, an oceanographer at the Institute of Ocean Sciences in Sidney, British Columbia, who coordinated the Canadian side of the research effort, explains, “The mixing process determines the salinity and density differences of the resultant water layers, which, in turn, influence circulation in the vicinity of the strait.” The circulation stirs nutrient-rich waters toward the surface, where they nourish plankton—and it can also spread spilled oil and other pollutants.

    To explore these circulation patterns, the team unleashed a network of navigational beacons and current meters, the pair of AUVs, a drifting buoy, and four moorings equipped for acoustic communication with the AUVs. The AUVs, moorings, and drifter all carried sonar devices that could trace changes in water characteristics across the front by passing sound waves through it along many different paths. This technique, called acoustic tomography, is analogous to computerized tomography scans in the world of medicine.
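    The principle behind acoustic tomography can be stated compactly: the travel time of a sound pulse along a ray path depends on the sound speed, and hence on the temperature and salinity, everywhere along that path. Schematically, with t_i the travel time along path Γ_i, c the sound-speed field, and δc a perturbation about a reference c_0:

        t_i = \int_{\Gamma_i} \frac{ds}{c(\mathbf{x})},
        \qquad
        \delta t_i \approx -\int_{\Gamma_i} \frac{\delta c(\mathbf{x})}{c_0(\mathbf{x})^2}\, ds

    Measuring the travel-time perturbations δt_i along many intersecting paths and inverting these relations maps the water properties between instruments, much as a CT scanner reconstructs tissue density from many x-ray paths.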

    The AUVs also carried temperature and salinity sensors, as well as a Doppler imaging system that measured current velocities. “By using various technologies, some new and some not so new, we acquired pictures of this environment that would be impossible to get by any single means,” Farmer says.

    Feedback to the Harvard model helped sharpen these pictures. Based on each day's findings, the model predicted where the front would be in the morning. The oceanographers used that prediction to send their vehicles off to find the front and collect new data, which were then fed back into the model. “Sometimes, the front wasn't exactly where it was predicted to be,” notes James Bellingham, who runs the Sea Grant AUV Laboratory at MIT. “By putting new information into the model, with the latest corrections, we were able to get a steadily improving model.”
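    The daily routine amounts to a predict, sample, and assimilate loop. The Python sketch below is only a schematic of that loop; the class and function names are invented placeholders, not the Harvard model or the MIT vehicle software.

        # Schematic of the adaptive-sampling feedback loop: forecast where the
        # front will be, task the AUVs to survey near that prediction, then feed
        # the new measurements back into the model. Toy code with invented names.

        from dataclasses import dataclass

        @dataclass
        class FrontModel:
            """Stand-in for the forecast model of the tidal front."""
            estimate_km: float = 0.0   # estimated along-strait position of the front
            weight: float = 1.0        # crude confidence in the current estimate

            def forecast(self) -> float:
                return self.estimate_km    # tomorrow's predicted front position

            def assimilate(self, observations):
                # Blend new observations with the prior estimate (toy data assimilation).
                for obs in observations:
                    self.estimate_km = (self.weight * self.estimate_km + obs) / (self.weight + 1)
                    self.weight += 1

        def plan_auv_survey(predicted_km, n_auvs=2, spacing_km=1.0):
            """Place AUV transects bracketing the predicted front position."""
            offsets = [(i - (n_auvs - 1) / 2) * spacing_km for i in range(n_auvs)]
            return [round(predicted_km + off, 2) for off in offsets]

        def run_experiment(model, daily_measurements):
            for day, observed in enumerate(daily_measurements, start=1):
                targets = plan_auv_survey(model.forecast())
                print(f"Day {day}: task AUVs to {targets} km; front observed near {observed} km")
                model.assimilate(observed)

        run_experiment(FrontModel(estimate_km=5.0), [[5.8, 6.1], [6.4, 6.0], [6.2, 6.3]])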

    The technology did fall short in some ways, however. Underwater noise from the heavy boat and tanker traffic through the strait interfered with acoustic communications between the AUVs and the moorings. As a result, says Bellingham, the experimenters couldn't revise their sampling efforts minute by minute—“talking to the vehicle while it's out there, getting a picture every minute, and at the same time sending commands telling it where to go.”

    Once those kinds of hitches are overcome, says Schmidt, adaptive sampling with AUVs will be a powerful tool for specific kinds of ocean tasks—among them monitoring oil spills, tracking pollution plumes, or searching for objects on the sea floor. Large-scale measurements of the ocean, he adds, will still require ships, which can move faster and have a much bigger range than AUVs. “A decade ago, this technique was virtually nonexistent,” says Robinson. “Today, it's a workable technique, and a decade from now it should be commonplace.”

  7. Technology

    Robotic Subs for Rapid-Response Science

    1. Steve Nadis

    The Haro Strait experiment tested a new method of doing oceanography in “real time” (see main story)—and a versatile new tool for collecting the data. Along with an array of buoys and a ship, the experiment relied on two robotic submersibles known as autonomous underwater vehicles (AUVs).

    AUVing a look.

    As conditions in the ocean change, scientists can reprogram autonomous underwater vehicles to explore new regions.

    MIT/Sea Grant AUV Laboratory

    No more than a few meters long, capable of probing deep-ocean waters while laden with acoustic, chemical, and thermal sensing equipment, AUVs have already made forays into the Arctic Ocean, the seas off Antarctica, and waters over the Juan de Fuca Ridge near Washington state. Most recently, AUVs searched for the elusive giant squid in an undersea canyon off New Zealand.

    The Haro Strait experiment established several milestones. It was the first to use AUVs in a so-called “adaptive sampling” mode, which involved reprogramming the vehicles based on computer analysis of each day's data. It also marked the first time that two AUVs had worked together side by side on a scientific mission.

    As such, the experiment was a step toward ocean engineers' audacious vision for these craft: releasing fleets of them to explore remote parts of the sea. Explains James Bellingham, who runs the Massachusetts Institute of Technology's (MIT's) Sea Grant AUV Laboratory (a leader in AUV development), “It's a big ocean, and ships don't sample it that well.” Large oceanographic vessels can cost $10,000 or more per day to operate and often must be booked years in advance. In contrast, “if we hear about an undersea volcanic eruption, or some other rarely seen event, we can take our AUVs out right away on a small fishing boat,” says Bellingham.

    AUVs can also be dispatched to risky environments—beneath the polar ice sheets or inside murky, underwater caves. “At $16,000 per vehicle, we can afford to lose a couple, and even plan on it, for a good cause,” Bellingham notes.

    Bellingham and his colleagues still need to develop an effective method of “docking” the AUVs: programming them to park themselves at an unattended mooring, where they can recharge their batteries, dump some data, and upload new commands. “Docking is like landing an airplane,” says Bellingham. “If you can't land it, it's not much good.” For now, AUVs must surface and wait to be retrieved by boats.

    In spring 1996 tests in the shallow, calm waters of Buzzards Bay, Massachusetts, AUVs successfully followed acoustic homing signals to a docking site. But Bellingham is eager to try this technology in the open ocean. That test should come in January or February 1998, in an experiment in the Labrador Sea that will combine remote operation, docking and undocking, direct acoustic communications with the vehicles, and real-time adaptive programming. Three AUVs and two propeller-less “glider” vehicles that ride the sea currents will independently explore a region where salty surface waters lose so much heat that they become dense enough to sink into the abyss, generating much of the cold, salty water that blankets the ocean floor.

    “The Labrador Sea is one of the few regions in the ocean where water plummets to great depths,” explains MIT oceanographer John Marshall. “We're trying to understand exactly how the water sinks and the implications this process holds for the oceans, atmosphere, and climate in general.” Bellingham adds that “this whole complicated experiment will be done completely with robots. We'll leave the vehicles there, and by the time the real action is occurring, we'll be back here in Cambridge looking at the data sent via satellite.”

    Even if the Labrador Sea experiment succeeds, it won't spell the end for ships, submersibles with human crews, and other research vehicles, says Bellingham: “Rather than trying to eliminate such vessels, we're trying to make them more productive.” Dan Fornari, who oversees submarine and remotely operated vehicle (ROV) programs at the Woods Hole Oceanographic Institution, agrees. “People have suggested that ROVs will replace submersibles, and that AUVs will replace ROVs, but that's not likely to happen,” he says. “It's the synergy of these systems working together that will revolutionize the way deep-ocean science is conducted in the next century.”

  8. Biophysics

    DNA on the Big Screen

    1. Erik Stokstad

    It wasn't up for an Oscar, but a video of a performer new to the silver screen is winning rave reviews. This 8-second clip, aired in Kansas City, Missouri, last week at a meeting of the American Physical Society, shows the first sequential images of an enzyme sliding down a strand of DNA. The film is a tantalizing first glimpse of how researchers might someday study enzymes in action.

    While electron microscopes can make high-resolution images of DNA and RNA, the samples must be stained and motionless. Such images are “like snapshots of a ballerina,” says Paul Hansma, a physicist at the University of California, Santa Barbara. “They won't tell you about the ballet.”

    Hansma and colleagues at Santa Barbara and the University of Oregon were intent on watching the ballet itself, so they borrowed an instrument from physics: the atomic force microscope. An AFM doesn't require a fixed, stained specimen; instead, it creates an image by tapping an ultrafine probe across the sample, with a touch gentle enough not to disturb a busy molecule. With it, Hansma's group took pictures of RNA polymerase (blob in images at right) at work on a DNA molecule (faint strand). RNA polymerase ratchets down the DNA strand, linking nucleotide bases to create an RNA copy that in turn serves as a template for a protein.

    To see this process, Hansma and his colleagues faced the challenge of anchoring the molecules so that they would stay put for inspection without hampering their activity. After several attempts, the team found that zinc ions added to the water would loosely attach the molecules to the sample dish. The DNA strand could still wiggle, but not so much as to blur the image. The team also had to slow down the enzyme to keep it from outdistancing the AFM. By putting fewer nucleotide bases into solution, the researchers reduced the polymerase's transcribing speed from about 45 bases per second to 1 base per second.

    Experts are impressed with the video. At a preview in Texas last year, biologists in the audience “were all on the edges of their seats,” says Mike MacLeod, a molecular biologist at the M. D. Anderson Cancer Center in Houston. Hansma himself envisions a gripping sequel: An AFM movie might reveal subtle changes in the shape of RNA polymerase as it passes over different letters of the genetic code on its way down a DNA strand. And that, he says, “could revolutionize the sequencing of DNA.”

    S. KASAS ET AL., BIOCHEMISTRY 36, 461 (1997)
