News this Week

Science  06 Nov 1998:
Vol. 282, Issue 5391, pp. 1014
  1. CELL BIOLOGY

    A Versatile Cell Line Raises Scientific Hopes, Legal Questions

    1. Eliot Marshall

    Imagine being able to reach into the freezer, take out a cell culture, treat it with growth factors, and produce almost any tissue in the human body. Sounds like science fiction? Today, it is. But the raw material for such human tissue engineering—in the form of a type of universal cell called a “stem” cell—is now growing in the laboratory. In a long-awaited announcement, biologist James Thomson and his team at the University of Wisconsin, Madison, report in this issue of Science that they have isolated stem cells from human embryos and coaxed them to grow in five “immortal” cell lines.

    Other biologists are hailing the work, reported on page 1145, as an important advance that will provide a powerful tool for biological research. In a matter of years, some researchers say, it may even be possible to use such cells to repair blood, bone, and other tissues. But the achievement has also created a dilemma, which will only intensify as other groups close behind Thomson report similar feats: Many researchers who would ordinarily jump at a chance to use and develop these cells may not be able to do so, because they may be blocked by a U.S. law that forbids the use of public funds for research on tissues derived from human embryos. As Science went to press, the National Institutes of Health (NIH) was reviewing whether these cell lines come under the law. The law may apply in this case because the cells used to create Thomson's cell lines came from embryos donated to research by couples at in vitro fertilization (IVF) clinics in Wisconsin and Israel.

    Thomson had to carefully wall off his own research from any public funding by setting up a separate lab in a building “across campus” from where he does NIH-funded research. All the equipment and personnel in the duplicate lab are funded privately, mostly by the Geron Corp. of Menlo Park, California, plus a grant from the Wisconsin Alumni Research Foundation, the university's patent agent. In return, Geron expects to get an exclusive license for commercial uses of Thomson's technology.

    Labor-intensive.

    James Thomson's technique requires a deft touch for cells to grow without differentiating.

    The challenge Thomson faced was to create an environment that was neither too harsh, which would prevent the cells from thriving, nor too cozy, which would allow them to differentiate into specialized forms. Thomson, who began working with embryos from rhesus monkeys 5 years ago, stimulated cells from days-old human embryos, called blastocysts, to grow on a layer of mouse “feeder” cells in a lab dish. Other researchers had gone this far, but Thomson took the next step: He coaxed the balky cells to continue growing without differentiating—making an irrevocable commitment to grow into a particular type of tissue. Thomson nudged the cells gently into this new state through very “labor-intensive” tending, he says, and their chromosomes survived intact. (Tumor cells are immortal, too, but their DNA is usually deranged.) And judging by the presence of a critical enzyme called telomerase, which repairs frayed chromosome ends, Thomson concludes that the cells are capable of reproducing indefinitely. Yet tests showed that the cells retain the potential to develop into all the basic tissue types.

    Only a few of Thomson's peers had learned of his accomplishment last week, but those who knew of it said they were impressed. Austin Smith, a stem cell researcher at the University of Edinburgh in Scotland, called it an “extremely important” milestone. Molecular biologist Brigid Hogan of Vanderbilt University in Nashville, Tennessee, a pioneer of mouse stem cell technology, calls the development “very encouraging.” John Gearhart, a developmental geneticist at The Johns Hopkins University School of Medicine who is using a different method to establish a culture of human embryonic cells, describes Thomson's research in a commentary on page 1061 of this issue as “a major technical achievement with great importance for human biology.”

    Gearhart was in a close race with Thomson to publish first but wasn't able to move his project along quite as rapidly. In a paper coming out in the 10 November Proceedings of the National Academy of Sciences, Gearhart will announce that he, too, has established a line of embryonic stem cells. His are derived from primordial germ cells, precursors of sperm and oocytes, isolated from medically aborted fetuses. Gearhart and his team have sustained some of these cells in culture for as long as 9 months, but he concedes that “Jamie [Thomson] has done a lot more” to characterize his stem cells and deserved to be first. Like Thomson and Roger Pedersen, another stem cell researcher at the University of California, San Francisco, Gearhart turned to Geron for support because it was unclear whether he could do the work with public funds. And, like the other two U.S. groups, his team plans to license patents to Geron.

    Other developers of human embryonic stem cell technology are close behind. Martin Pera at Monash University in Clayton, Australia, reports that his team—together with scientists at the Hadassah Medical Center in Jerusalem and the National University of Singapore—has “achieved extensive serial cultivation” of cells from human blastocysts, which he expects will meet the criteria for human embryonic stem cells. Smith says that his team at Edinburgh has been trying to develop a human stem cell line, too, but doesn't yet have anything to announce.

    Thomson says the first big payoff will be to aid fundamental research on human development. He points out that the details of human embryo development after implantation are essentially unstudied. Animal models haven't been useful, he says: “For example, the placenta and all the extraembryonic membranes differ fundamentally between humans and mice.” Now, scientists may be able to produce cells specific to stages of human development that have been inaccessible to research. By manipulating gene expression in these cells, they might be able to probe how development can go wrong.

    Another payoff, one that could be lucrative for Geron in the not-too-distant future, according to Geron Vice President Thomas Okarma, will be drug screening. Okarma says, “The potential to supply unlimited quantities of normal human cells of virtually any tissue type could have a major impact on pharmaceutical research and development.” Cell lines used for drug screening are currently derived from animals or “abnormal” human tissue, such as tumor cells.

    The real “home run” of this technology, Okarma says, is the “enormous” possibility that researchers might be able to tailor stem cells genetically so that they would avoid attack by a patient's immune system, then direct them to specialize into a particular kind of tissue and transplant them into diseased organs. Geron suggests it might be possible to repair damaged heart muscle by injecting new cardiomyocytes, for example. Okarma points out that researchers have already used mouse stem cells to produce cardiomyocytes that were successfully transplanted into a mouse heart.

    But that possibility also remains the most distant. “Right now,” says Thomson, “we don't know how to direct [stem cells] to become any specific cells.” And developing cells that can be immunologically suitable for transplantation will take even more work. Still, Thomson says, “it's no longer in the realm of science fiction; I really believe that within my lifetime I will see diseases treated by these therapies.”

    For some researchers, however, the complicated legal issues associated with the cell lines may prove discouraging. Federal law governing this topic was updated most recently in the 4000-page appropriation bill Congress passed on 20 October. It says U.S. funds may not be used for “the creation of a human embryo” for research purposes, or for “research in which a human embryo or embryos are destroyed, discarded or knowingly subjected to risk of injury or death. …” The embryo is defined as any organism not protected as a human subject under other laws (such as those applying to fetal tissue) “that is derived by fertilization, parthenogenesis, cloning, or any other means from one or more human gametes or diploid cells.”

    When NIH officials learned of Thomson's work, their initial reaction was that federal funds could not be used for research using his cell lines. But director Harold Varmus sought legal counsel, and a top aide told Science that the cells may be exempt from the law because they could not grow into embryos. NIH was scrambling to come up with a final ruling by the time Thomson's paper was published. The cell line Gearhart is developing may not have the same legal complications because it was derived from fetal, not embryonic, cells.

    The law clearly prohibits the use of federal funds for the initial development of an embryonic stem cell line, however. Okarma says Geron carefully considered the ethical implications before proceeding. “We recognize and affirm that there is moral authority associated with this tissue,” he says. Geron has established a panel of ethical advisers, chaired by Karen Lebacqz of the Pacific School of Religion in Berkeley, California, representing “five different religious traditions,” Okarma says. The panel approved the stem cell project, he says, on the basis that Geron was making beneficial use of fetal tissue and IVF embryos that would have been discarded or frozen indefinitely.

    Researchers are hoping that the legal uncertainties hanging over Thomson's cell lines can be cleared up quickly. “People have been a little scared off by the controversy” already, says Smith, and “it would be a tragedy if [legal barriers] exclude the best people” from the field.

  2. U.K. RESEARCH FUNDING

    Life Sciences Win Bulk of Cash Bonanza

    1. Nigel Williams

    London—When the British government announced in July that it would pump an additional $1.1 billion into science spending over the next 3 years, researchers whose budgets had been squeezed hard for more than a decade greeted the news with delight. Last week, the government released the details of how this new largess will be distributed, and life scientists were left cheering the loudest.

    Among Britain's six research councils, which provide most of the government's funding for basic research, the biggest increase—6.8% above inflation—will go to the Medical Research Council (MRC). “I'm enormously pleased,” MRC chief executive George Radda told a press conference in London when the allocations were announced last week. The MRC subsequently said it would spend part of its increase on three major new initiatives: in mouse genetics, cancer research, and the study of the human form of mad cow disease. Not everyone is celebrating, however. The council responsible for particle physics and astronomy is slated to get no significant increase in real terms.

    These increases are the fruits of a yearlong comprehensive review of government spending launched by the Labour government soon after it was elected in May last year. The first results of the review, announced in July, put science among the most favored areas of spending (Science, 17 July, p. 314). When he announced the allocation of the promised money last week, trade and industry secretary Peter Mandelson, the Cabinet minister responsible for research, said he wanted the increases to tackle the “postgenomic challenge”: helping British scientists exploit advances in genetic research in which they have been key players.

    This is reflected in the allocations: Alongside the MRC's 6.8% boost, the other two councils involved in life sciences—the Natural Environment Research Council (NERC) and the Biotechnology and Biological Sciences Research Council (BBSRC)—will get increases of more than 3% above inflation. BBSRC says it plans to back more projects that attempt to exploit novel gene products as therapeutics and other high-value chemicals, while NERC is planning a program of research on genomes and the environment, such as looking at how the genomes of different populations of plants may affect their response to climate change. The smaller Economic and Social Research Council, which has also won an increase, is expected to step up funding on the social and ethical implications of genetic research.

    Biology boost.

    Life sciences get the biggest rises in Britain's 3-year budget allocation.

    CREDIT: U.K. OFFICE OF SCIENCE AND TECHNOLOGY

    The MRC's budget increase—a total of $144 million over the next 3 years—will allow it to expand existing work in three priority areas. The council will devote an as-yet-undecided sum to boost research on mouse genetics, including sequencing more of the mouse genome and developing mutant mouse strains. “With genome sequence data and the ability to develop new mutant strains, the mouse is a powerful way to get models for human diseases,” says geneticist Nick Hastie of the MRC Centre for Human Genetics in Edinburgh, part of the mouse genetics effort.

    The MRC is also putting $13 million over the next 3 years into a new center aimed at converting advances in the molecular analysis of cancer into improved patient treatment. The new center, which will be established in collaboration with Cambridge University and a medical charity, the Cancer Research Campaign, will be headed by cancer biologist Ron Lasky of the Wellcome Centre for Cancer and Molecular Biology in Cambridge. It will be up and running by early 2001. Among the initial goals will be the development of tests for early diagnosis of common cancers, including cervix, lung, breast, and colon. Longer term goals include better diagnostic analysis of tissue samples to determine how far the disease has progressed and tracking down genes that predispose people to cancer.

    The third MRC initiative will focus on the study of so-called new variant Creutzfeldt-Jakob disease (CJD), the fatal brain disorder linked to mad cow disease that has so far killed 29 people. The MRC will set up a new center in London based around the unit headed by molecular biologist John Collinge at the Imperial College School of Medicine at St. Mary's Hospital. “We hope to develop a critical mass of around 60 people,” says Collinge. He will be joined next year by another leading specialist in the field, molecular biologist Charles Weissmann of the University of Zurich. The MRC plans to provide $2.4 million a year in addition to funds Collinge will receive as a principal fellow of the research charity the Wellcome Trust. A high priority for the new center will be the development of simple new tests for the hallmark “prion” protein that has been linked to the disease. A screening test is urgently needed for blood samples to ensure that the disease is not spread through transfusions and to detect the protein in tissues such as tonsils.

    Physical scientists have feared that the government's enthusiasm for biology would leave them out in the cold. But they can take some comfort from the new allocations. The Engineering and Physical Sciences Research Council won a respectable 3.4% increase in real terms. And even the hard-pressed Particle Physics and Astronomy Research Council (PPARC), with an increase of just 0.55%, won added security through a new contingency fund designed to absorb currency fluctuations that can increase subscriptions to bodies such as the CERN particle physics center near Geneva. “Our domestic budget has now been assured, and the problems created by currency fluctuations for our overseas commitments have also been addressed to help plan our future,” a spokesperson says. But Astronomer Royal Martin Rees of the University of Cambridge says that the budget outcome is very disappointing. “It's a pity PPARC did not make a better case for additional investment,” he says.

    As part of the budget allocation, the government also announced a raft of measures to help improve career prospects for university researchers. This year it increased the minimum annual grant for Ph.D. students by $1600 to $10,500—the first increase above inflation for 30 years. And now the number of fellowships awarded by the Royal Society will be increased from 265 to more than 300. The increased research council budgets will also mean that university-based researchers on council grants will be able to hire more graduate and postdoctoral staff, and the councils' own labs will also be able to create new positions.

    Even the lobby group Save British Science, a longtime critic of government funding policy set up during the previous Conservative government, could find little to complain about. Says lobby chair Richard Joyner, dean of research at Nottingham Trent University: “I'm very pleased to see that everybody has got something.”

  3. AGING RESEARCH

    Low-Calorie Diets May Slow Monkeys' Aging

    1. Jennifer Couzin

    Scientists are edging closer to proving in primates what's been demonstrated dozens of times in rodents since the 1930s: Sharply reducing caloric intake can slow the process of aging to a crawl.

    At a Society of Toxicology meeting 2 weeks ago in Reston, Virginia, three groups presented data showing that rhesus monkeys fed severely calorie-restricted diets show fewer signs of diseases associated with advancing age, including diabetes, heart disease, and cancer, than their comfortably full—and in some cases comparably lean—counterparts. Because most of the hungry monkeys are only now entering middle age, it's too early to tell whether the low-calorie diets will significantly extend their life-spans. But one of the studies provided a tantalizing hint: Mortality due to disease among the calorie-restricted monkeys was slightly lower than among the controls.

    Even if monkeys do live longer on low-calorie diets, it doesn't necessarily follow that humans would experience similar benefits—or that they would find such diets acceptable. But researchers hope that these animals might provide clues to why calorie restriction is beneficial—information that could point to strategies and medications for delaying aging in humans.

    Hungry but healthy.

    Monkeys eating sharply restricted diets (right) may live longer than well-fed controls.

    CREDIT: JOSEPH KEMNITZ, JON RAMSEY

    The three groups reporting their results at the meeting—which were led by Mark Lane at the National Institute on Aging (NIA), Richard Weindruch at the University of Wisconsin, Madison, and Barbara Hansen at the University of Maryland, Baltimore—kept the animals on tight rations but well above starvation levels. The Wisconsin and the NIA teams provided the test animals with 30% fewer calories than the controls (while enhancing their diets with a vitamin and mineral supplement), while Hansen tailored the monkeys' food intake to prevent them from putting on more pounds than they carried in young adulthood.

    All three groups found that in nearly every system tested, the calorie-restricted (CR) animals were better off than the controls. All recorded lower blood lipids and blood pressure, enhanced insulin sensitivity, and a lower incidence of diabetes in calorie-restricted monkeys. The Wisconsin group also found less spinal arthritis, while Lane's team saw fewer cancer cases and a slightly lower mortality rate due to diabetes and cardiovascular disease. One of the 120 animals on the diet died, compared to five among the 120 controls. “[The] major message from the monkeys is that 99.9% of those markers that we have examined in the monkeys behave exactly as they do in rodents,” says Lane.

    What's more, the severe calorie reduction seems to produce few adverse effects. Lane's group, which began caloric restriction in young animals, saw the only potential problem: delayed sexual and skeletal maturity. None of the primates have been bred, however, so no one knows whether their reproductive capabilities are affected. And although all three groups acknowledged that their animals were regularly hungry—wolfing down food more quickly than controls, or becoming excited if accidentally given excessive food—none found that the added stress affected behavior. Controls and CR monkeys were equally energetic, social, and nonaggressive, and a weeklong videotape of Lane's animals showed no measurable differences between the two groups.

    Why caloric restriction so dramatically improves the functioning of organ systems remains under debate. Certain changes, like the reduced incidence of diabetes, might simply be a benefit of leanness, as obesity predisposes to the disease in nonhuman primates as well as in most humans. Others are more puzzling, however.

    One possibility, Weindruch says, is that restricting food consumption reduces the production of tissue-damaging oxygen free radicals that are a byproduct of food metabolism. He has shown in mice that such oxidative damage leads to muscle atrophy, producing the frailty common in old age.

    But reducing oxidative damage is only one way calorie restriction might work. “The problem with [caloric restriction] is that it fits any of the theories of aging,” says Roy Walford, a professor of pathology at the University of California, Los Angeles, and a pioneer in the field. “[It] increases DNA repair, regulates glucose insulin, decreases free radical damage, preserves the immune system.”

    The primate studies haven't gone on long enough to determine whether caloric restriction will result in the kind of increases in life-span seen in near-starving rodents, which live up to 40% longer than controls. Rhesus monkeys can live up to about age 40 in the lab, whereas the test animals are still in their mid-20s. The primate data are “very tantalizing preliminary results,” says Lane. “But [I'm] not at the point where I'm willing to stand up and wave the flag and say it works.”

    If further work confirms that caloric restriction pays off in extended primate life-spans, though, and researchers can pin down the reasons why, aging experts hope to tap into something that, until now, has been restricted to the realm of fiction—controlling the process of aging.

  4. NEUROBIOLOGY

    New Leads to Brain Neuron Regeneration

    1. Marcia Barinaga

    Neurobiologists have long considered the neurons in the adult brain to be like a precious nest egg: a legacy that dwindles with time and illness and is difficult if not impossible to rebuild. Two sets of findings published this week raise hopes that this principle could one day be overturned. In one, research teams at Harvard and the National Institute of Neurological Disorders and Stroke (NINDS) independently isolated what appear to be the first human cells that can differentiate into all the cell types found in the brain—so-called neural stem cells. In the other, a team based in California and Sweden found a small area of the human brain that produces new neurons into old age.

    The discoveries aren't biologically surprising, because both neural stem cells and the birth of neurons in adult brains have been seen in other mammals. But identifying them in humans was a demonstration that could be critical to future therapies for diseases such as Parkinson's or Alzheimer's. The stem cell results, reported in the November issue of Nature Biotechnology, “provide proof of principle that you can do this with human cells, and that is a necessary step on the way to using these cells in the treatment of human diseases,” says developmental neurobiologist David Anderson of the California Institute of Technology in Pasadena. And the discovery of neuron growth, reported in Nature Medicine, raises an even more enticing—although distant—prospect: inducing patients' own brain cells to regenerate neurons lost to disease.

    The two teams that discovered the stem cells, led by Evan Snyder at Harvard Medical School in Boston and by Ronald McKay of the NINDS, both began by culturing cells from the brains of aborted human fetuses. The first clue that the cultures contained stem cells was the fact that some of the cells would differentiate into the three main types of brain cells: neurons and two types of support cells called glia. The true test of a stem cell, however, is to show that one cell can give rise to all the different cell types of a tissue. (For reports on stem cells from an earlier stage of development, which can differentiate into a wider range of tissues, see pp. 1014 and 1145.)

    To show that they had neural stem cells, Snyder's team cloned a single cell and then showed that cells from that cloned culture could differentiate into glia and a variety of neuronal types when they were put into developing mouse brains. McKay's group didn't develop clones of identical cells. But they did transplant sets of cells that appeared identical into the brains of fetal rats in utero and found that the cells joined the rats' own neurons in forming various kinds of brain tissue. While acknowledging that there is much more research to be done, Snyder says he hopes the field will be ready “to start applying this to human diseases within 5 years.”

    The prospect of coaxing the brain to heal itself is even more distant. But thanks to work by Fred Gage of the Salk Institute in San Diego and Peter Eriksson and his colleagues at the Sahlgrenska University Hospital in Göteborg, Sweden, it is no longer considered impossible.

    For years neurobiologists thought that no new human brain neurons are generated after birth. That view was bolstered in the 1980s by work in which Pasko Rakic's team at Yale University found no evidence that neurons are dividing in the brains of rhesus monkeys. However, researchers later turned up evidence of new brain neurons in adult rodents, tree shrews, and marmoset monkeys. But no one had come up with a way to perform the absolute test—looking in human brains.

    To answer the question once and for all, Gage, Eriksson, and their colleagues took advantage of a cancer study in Sweden. Patients had been given a marker chemical called bromodeoxyuridine (BrdU), which is incorporated into the DNA of dividing cells, to follow the growth of their tumors. After five of those patients died, the scientists examined samples of their brain tissue. They found BrdU-labeled neurons in the dentate gyrus, a small part of the brain area called the hippocampus, which is involved in forming memories. This was a telltale sign that those cells had been recently formed by the division of precursor cells.

    This means, says Gage, that “neurogenesis does not stop phylogenetically at [marmosets]” and that human brains do replace neurons. In fact, Rakic says that, with techniques similar to Gage's, his lab has filled in the missing link between humans and marmosets by finding evidence of new neurons in the dentate gyrus of rhesus monkeys. They will present the finding at the annual meeting of the Society for Neuroscience in Los Angeles next week.

    Rakic and Gage caution that the practical significance of the discovery is unclear. The dentate gyrus is just one small brain area, says Rakic, and “the principle [of no new neurons] still applies for most structures of the brain.” Gage adds, “We don't have any evidence that these cells are hooked up in any way” to functioning neural networks.

    But the finding could mean that researchers might someday learn how to encourage regeneration in other parts of the brain, such as areas hit by neurodegenerative disorders. That's a tall order, but it is a concept that people “weren't even thinking about” before this finding, says Gage.

  5. PALEONTOLOGY

    Earliest Animals Old Once More?

    1. Richard A. Kerr*
    1. With reporting from Pallava Bagla in India.

    Toronto—In the past month, the apparent age of the first known animals nearly doubled to a startling 1.1 billion years, then swung back to the conventional figure of 600 million years. And last week at the annual meeting here of the Geological Society of America, the pendulum swung one more time, back toward the extraordinarily early dates claimed a month ago. Paleontologists may have to reckon after all with signs of animals 500 million years earlier than the first known animal fossils.

    The first dramatic claim came in the 2 October issue of Science (pp. 19 and 80), when researchers said they had found tracks of multicellular animals in 1.1-billion-year-old Indian rocks. Then, paleontologist Rafat Jamal Azmi of the Wadia Institute of Himalayan Geology in Dehra Dun, India, claimed in the Journal of the Geological Society of India that he had found tiny fossils, known to be from about 540 million years ago, in rocks just above the purported trace fossils. If so, the tracks might actually be only about 600 million years old (Science, 23 October, p. 627). Paleontologist Anshu K. Sinha, director of the Birbal Sahni Institute of Paleobotany in Lucknow, noted that Azmi's finds might be confused with certain kinds of sedimentary structure and that his work had not been replicated. But Sinha and other paleontologists who read Azmi's paper and studied scanning electron microscope (SEM) images of the finds concurred that they were indeed small, shelly fossils (Science, 23 October, p. 601).

    In the eye of the beholder. Some say the regular pattern on bits of rock like these makes them look like fossils, but others say they are only artifacts.

    CREDIT: R. J. AZMI/WADIA INSTITUTE OF HIMALAYAN GEOLOGY

    In a question-and-answer session at the meeting, however, paleontologist Nicholas Butterfield of the University of Cambridge reported that after Azmi visited and gave him a look at actual samples, he believes they are not fossils at all but artifacts. “They're very convincing in black-and-white” SEM images, says Butterfield, “but they're absolutely not biogenic when seen in Technicolor” under a light microscope. Once he could view the objects from any angle and under varied lighting, Butterfield concluded that their ribbed structure was simply a reflection of fine layers in the rock itself. The texture of the rock plus the acid treatment Azmi used to extract any fossils apparently created the oddly shaped bits, Butterfield says.

    Others also have doubts. Two other Cambridge experts in Cambrian fossils, Simon Conway Morris and Soren Jensen, studied the samples with Butterfield when Azmi visited Cambridge 2 weeks ago, and they agree that the bits are not fossils. Even one-time supporters, such as paleontologist Martin Brasier of the University of Oxford, who found the photographs persuasive but hasn't seen the samples themselves, now agrees that, based on the Cambridge experts' views, “it looks doubtful that they are convincing.”

    Azmi, however, stands by his find, saying that Conway Morris studied unpublished fossils rather than the examples cited in his recent paper. He says that Butterfield's “generalized statement” is “very confusing,” because it does not address the issue “specimen by specimen.” Azmi concludes: “There cannot be any doubt that these are fossils, for they are not artifacts.”

    Even though this particular challenge to the claim of billion-year-old animal tracks may be fading, paleontologists at the meeting weren't quite ready to embrace such a startlingly ancient origin of animals. Some critics still aren't sure the tracks are those of living creatures. Confirming the age of the rocks may require new radiometric dates, which will take a few years to complete. The age of the first animals is—still—a question mark.

  6. CATALYSIS

    Chemical Accessories Give DNA New Talents

    1. Robert F. Service

    Cells have a strict division of labor: DNA conveys genetic information, while proteins run the chemistry of life. Teams of chemists and biologists are now working to bridge that division by creating hybrid molecules that tack the chemically active functional groups of proteins onto DNA's coiled backbone. The goal is to create molecules that are both chemically adept, like proteins, and easy to copy and vary, like DNA—properties that might enable researchers to “evolve” valuable new catalysts. But this elegant scheme faced a serious hurdle: The enzyme that copies DNA, called DNA polymerase, refused to play along, balking when it encountered a modified DNA building block.

    Now two groups have managed to outwit the finicky enzymes, opening the door to a new family of DNA-based catalysts and raising questions of whether such molecular hybrids could have played a role in the evolution of life. In last week's Angewandte Chemie, International Edition in English, molecular biologists Kandasamy Sakthivel and Carlos Barbas III of The Scripps Research Institute in La Jolla, California, reported that adding a rigid chemical arm to the side of a DNA building block, or nucleotide, allowed them to tack on a wide variety of functional groups to the molecule. DNA chains containing the altered building block could still be copied by DNA polymerase. And at the American Chemical Society meeting in August, another team led by Steven Benner at the University of Florida, Gainesville, reported going one step further. They too found that specialized DNA polymerases could copy synthetic nucleotides adorned with functional groups. But they also showed that the DNA hybrids could take a first step toward doing chemistry, by binding avidly to a molecular target.

    Although neither of the new experiments actually shows that hybrid DNA-protein molecules can catalyze chemical reactions, “they are getting very close,” says Michael Famulok, a biochemist at Ludwig Maximilians University in Munich, Germany. And that's exciting, he adds, because it's far easier to generate enormous families of DNA chains, each one slightly different from the others, than it is to create libraries of related proteins. Such libraries are hunting grounds for new catalysts, says Bruce Eaton, a molecular evolution specialist at NeXstar Pharmaceuticals in Boulder, Colorado. “There's a chance to evolve new chemistries no one has ever seen before.”

    Researchers have long been generating large families of RNA and DNA chains to see if they could isolate individual ones that performed interesting chemistry. But DNA and RNA by themselves are “rather poor catalysts,” says Barbas, because their nucleic acid backbones don't contain the diverse chemical groups needed to carry out a wide variety of reactions. Last year, Eaton and his colleagues improved RNA's catalytic abilities by, for example, modifying RNA bases to carry groups known as pyridines, which are well known for binding to catalytically active metals.

    Armed and ready.

    An amino acid linked to a DNA building block could enable the double helix to catalyze chemical reactions.

    SOURCE: CARLOS BARBAS III

    Barbas and Sakthivel wanted to see if they could do the same kind of thing with DNA, because it's more stable than RNA and even easier to replicate. But they had to get around the problem that DNA polymerases are far more finicky about copying modified bases than RNA polymerases are. The chemists thought that if they were careful to make each change away from the business end of each nucleotide—the part that faces its nucleotide counterpart on the complementary strand of the DNA—it might not affect the pairing and duplication.

    The scheme worked. After only three tries, they devised a rigid hydrocarbon arm that projected from the back end of a thymidine nucleotide without affecting its ability to be incorporated into DNA chains and duplicated by DNA polymerases. The arm proved to be quite versatile. Barbas and Sakthivel initially found that they could hook numerous functional groups to the end, including a complete histidine amino acid—a common constituent of proteins—and they have added other amino acids since then.

    Meanwhile, Benner's team—which included colleagues at Florida and at GenEra Inc., in Alachua, Florida—took a different route: They modified both the DNA and the polymerases. They had spent years designing novel nucleotides that could be incorporated into a DNA strand alongside the four standard building blocks, including one with a hydrocarbon linker and an amino group tacked onto the back of the molecule. They then engineered mutations into a small family of polymerases and tested them all until they found one that would tolerate the odd DNA.

    Benner's group used its combination of modified DNA and tolerant polymerase to go a step further and evolve a DNA that could bind to a specific target. Using a polymerase prone to making occasional random errors, they copied a DNA chain containing the modified nucleotide many times over to produce a library of chains, all slightly different. Benner's team then ran the chains through a column containing a molecular target for the DNA: immobilized adenosine triphosphate (ATP) molecules. Most chains passed the target by, but a few stuck. Those that did were later removed and used as the starting material for a new round of replication. After about 12 rounds of this evolution and selection, they found a DNA hybrid that stuck to ATP 100 times more strongly than a similarly evolved but natural DNA strand.
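
    The select-and-amplify cycle described above lends itself to a simple simulation. The sketch below is a toy illustration of that logic only, not Benner's actual protocol: the five-letter alphabet (with "Z" standing in for the modified nucleotide), the scoring function, and the pool sizes are all invented for illustration.

    ```python
    import random

    # Toy model of in vitro selection and error-prone amplification (illustrative only).
    BASES = "ACGTZ"          # "Z" stands in for the modified, functionalized nucleotide
    TARGET = "ZGAZTC"        # imaginary motif assumed to bind the immobilized target well

    def affinity(seq: str) -> int:
        """Score a sequence by how well any window matches the imaginary binding motif."""
        best = 0
        for i in range(len(seq) - len(TARGET) + 1):
            score = sum(a == b for a, b in zip(seq[i:i + len(TARGET)], TARGET))
            best = max(best, score)
        return best

    def replicate(seq: str, error_rate: float = 0.02) -> str:
        """Copy a sequence with occasional random errors, as an error-prone polymerase would."""
        return "".join(random.choice(BASES) if random.random() < error_rate else b for b in seq)

    pool = ["".join(random.choice(BASES) for _ in range(40)) for _ in range(500)]

    for round_no in range(12):                  # roughly 12 rounds of selection, as in the text
        pool.sort(key=affinity, reverse=True)
        survivors = pool[:50]                   # keep the best binders, standing in for chains that stick to the column
        pool = [replicate(random.choice(survivors)) for _ in range(500)]

    best = max(pool, key=affinity)
    print("best binder after selection:", best, "score", affinity(best))
    ```

    Run repeatedly, the pool converges on sequences containing the target motif, mirroring how alternating rounds of selection and error-prone copying enrich strong binders.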

    Both groups say they are now working hard to isolate DNA hybrids that catalyze actual reactions. If that works, Benner says, it will confirm that DNA, like RNA and proteins, can carry out the two key functions of life. The prospect that DNA hybrids can do this, says Benner, raises the question of whether such hybrid molecules might have played a role in early evolution, taking care of both genetics and chemistry, before proteins and DNA went their separate ways. For now, there's more evidence to suggest that RNA was the original single biopolymer, notes Gerald Joyce of Scripps. But he adds that the new work underscores the notion that the chemistry of life may have changed over time. Says Joyce: “The paints on the palette [for life's beginnings] don't have to be the same ones we see in biology today.”

  7. IMAGING

    Technique Probes Electrons' Secret Lives

    1. Meher Antia*
    1. Meher Antia is a science writer in Vancouver.

    Electrons on the surface of a material can no longer hide from probing instruments, which can track virtually their every move. But physicists can't spy so easily on electrons that lurk below the surface and have only been able to guess at how they behave. Now, however, two groups of researchers have found a way to peer beneath an insulating surface and image intricate patterns formed by the electrons trapped in a thin, two-dimensional semiconductor layer.

    At a meeting sponsored by the National High Magnetic Field Laboratory in Tallahassee, Florida, last week, the leader of one group, Raymond Ashoori of the Massachusetts Institute of Technology, showed the latest fruits of the technique, which maps subsurface charges by scanning the semiconductor with a sharp probe. The images, which show enigmatic rings and filaments of electrons, only deepen the puzzle of how electrons behave when they are trapped in a two-dimensional layer. “The experiments contain an incredible amount of information,” says Sankar Das Sarma of the University of Maryland, College Park, “but it is still not clear how to absorb that information to make sense of it.”

    Ashoori and his colleagues, along with a group led by Amir Yacoby of the Weizmann Institute of Science in Rehovot, Israel, have been studying the phenomenon that earned a trio of other physicists the Nobel Prize this year: the quantum Hall effect. Under exacting conditions—temperatures within a fraction of a degree of absolute zero and strong magnetic fields—the current flowing through a two-dimensional “electron gas” (2DEG) exhibits a series of plateaus where it no longer increases as the voltage is cranked up. The quantum Hall effect has intrigued physicists since its discovery in 1980, and this year's Nobel Prize is the second to be awarded for studies of it (Science, 23 October, p. 613). But physicists don't have a clear view of how the electrons behave as they stack up in quantum energy levels—the source of the plateaus. “There are a lot of data on how a two-dimensional sheet conducts electricity,” says Ashoori, “and a lot of guessing as to what's going on inside. But nobody has ever looked.”

    To see what's happening in the 2DEG, Ashoori adapted a technique called scanning probe microscopy, a standard way to map atoms on surfaces. A sharp tip probes the semiconductor surface, but instead of tracing its atom-scale undulations, it picks up tiny charge variations with the help of an ultrasensitive charge detector. To distinguish the subsurface charges from ones sitting on the surface, the team pumps charge in and out of the semiconductor at a frequency of about 100 kilohertz. The tip locks onto a signal of just that frequency, indicating charges moving in the 2DEG, and ignores the unmoving charge at the surface. Yacoby's probe detects static charges instead and relies on other techniques that don't depend on frequency to tease out the subsurface signal.
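
    The frequency-selective detection Ashoori's group relies on is, in essence, lock-in detection, which can be illustrated numerically. The sketch below is a generic demonstration under assumed numbers (sample rate, noise level, and charge amplitudes are made up), not a model of the group's actual electronics.

    ```python
    import numpy as np

    # Generic lock-in detection sketch: recover a small signal modulated at a known
    # frequency (100 kHz, as in the charge-pumping scheme described above) while
    # rejecting a static background.  All amplitudes here are assumptions.
    fs = 10e6                        # sample rate, Hz (assumed)
    f_mod = 100e3                    # modulation frequency, Hz
    t = np.arange(0, 10e-3, 1 / fs)  # 10 ms record = 1000 full modulation periods

    moving_charge = 1e-3 * np.sin(2 * np.pi * f_mod * t)   # charge pumped in and out of the 2DEG
    static_charge = 0.5                                      # unmoving charge at the surface
    noise = 0.02 * np.random.default_rng(0).normal(size=t.size)
    measured = moving_charge + static_charge + noise

    # Multiply by reference waveforms at f_mod and average: only the component that
    # oscillates at exactly that frequency survives the averaging.
    in_phase   = 2 * np.mean(measured * np.sin(2 * np.pi * f_mod * t))
    quadrature = 2 * np.mean(measured * np.cos(2 * np.pi * f_mod * t))
    amplitude  = np.hypot(in_phase, quadrature)

    print(f"recovered modulated amplitude: {amplitude:.2e}  (true value 1.00e-03)")
    ```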

    Both groups are using their techniques to study the current plateaus seen in the quantum Hall effect. Physicists speculate that “incompressible” regions in the 2DEG—places where the electrons are already squashed together as tightly as the laws of quantum mechanics allow—are responsible. These incompressible regions can't accommodate any additional electrons, and so they block increased current flow. In between the plateaus, current flows more freely because “compressible” regions open up in the 2DEG.

    What Ashoori's team has seen so far confirms that general picture but adds puzzling details. “Imagine dumping charge in [the material] and waiting to see how it spreads,” says Ashoori. “To our shock, it goes into these patterns that are enormously sensitive to field.” Ashoori interprets the patterns, which range from long filaments to small droplets to large islands, as variations in the compressibility of electrons. They appear only at the current plateaus, and they change when the field is changed by just 1%. Yacoby's group also sees an odd mosaic of compressibility variations.

    The images Ashoori showed in Florida only deepened the puzzle. “We see objects that look like perfect rings,” he says. “Now why would electrons form circles like that?”

    Some physicists caution that the technique itself—in particular, the presence of the tip and the charge pumping—might be creating these patterns. Still, most physicists are enthusiastic about the technique's potential. Allan MacDonald of Indiana University in Bloomington thinks it might also be useful for revealing other exotic electron configurations that can form in a 2DEG, such as a Wigner crystal, where the electrons don't slosh around like a liquid but remain in fixed positions to form a lattice. Even when they are buried in a semiconductor, electrons can't hope for much privacy anymore.

  8. ASTROPHYSICS

    Powerful Cosmic Rays Tied to Far-Off Galaxies

    1. Dennis Normile

    A pair of astronomers may have solved a long-standing puzzle about the source of ultrahigh-energy cosmic rays, particles that slam into the atmosphere with 100 million times the energies reached in the largest particle accelerators. They have traced a handful of these particles back to highly energetic active galactic nuclei, the turbulent centers of distant galaxies that may harbor massive black holes. The finding, reported in the 26 October issue of Physical Review Letters, could upset current notions about the nature of ultrahigh-energy cosmic rays.

    Astrophysicists have figured that the highest energy cosmic rays have to originate near our galaxy. That's because any charged particle, like a proton, that has traveled farther would have been slowed to lower energy levels by the microwave background—the low-energy radiation that pervades the universe. But no one has been able to find a nearby source for the ultrahigh-energy rays.

    Glennys Farrar, now at New York University, and Peter Biermann of the Max Planck Institute for Radioastronomy in Bonn suspected a more distant source: a highly energetic class of active galactic nuclei that have intense magnetic fields, which might be capable of accelerating particles to high energies. Because each incoming cosmic ray sets off a chain reaction in the atmosphere that ends in a detectable shower of electrons or positrons, Farrar and Biermann could figure out the approach angle of five cosmic rays. In each case, the path of the incoming ray could be traced back to a previously identified active galactic nucleus. The probability of the cosmic rays lining up with such galaxies by pure chance is only 0.5%, the researchers say.
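
    One way to picture such a chance-coincidence estimate is with a small Monte Carlo calculation. The sketch below is purely illustrative: the catalogue size and matching tolerance are invented, isotropic arrival directions and uniform exposure are assumed, and it is not the statistical test Farrar and Biermann actually performed, so it will not reproduce their 0.5% figure.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical inputs for a chance-alignment estimate.
    n_sources = 150          # assumed number of catalogued candidate galactic nuclei
    tol_deg = 3.0            # assumed angular matching tolerance, degrees
    n_events = 5             # ultrahigh-energy events traced back, as in the text
    n_trials = 50_000        # random arrival directions to test

    def random_directions(n):
        """Isotropically distributed unit vectors on the sphere."""
        v = rng.normal(size=(n, 3))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    sources = random_directions(n_sources)
    cos_tol = np.cos(np.radians(tol_deg))

    # Fraction of random arrival directions that land within tol_deg of at least
    # one catalogued source purely by chance.
    events = random_directions(n_trials)
    p_single = np.mean(((events @ sources.T) > cos_tol).any(axis=1))

    print(f"chance match probability per event: {p_single:.3f}")
    print(f"chance that all {n_events} events match: {p_single**n_events:.2e}")
    ```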

    “If the correlation is as good as they claim, then it's very, very suggestive that we may well have found the source of these extremely high-energy cosmic rays,” says Raymond Protheroe, an astrophysicist at the University of Adelaide in Australia. But he adds that this would upset the current assumption that cosmic rays are made up of protons or atomic nuclei, because they could never retain such high energies over such long distances. If Farrar and Biermann are right, “whatever's getting to us cannot possibly be a proton,” says Protheroe.

    Farrar and Biermann hypothesize that the ultrahigh-energy cosmic ray particles could be new neutral particles or neutrinos, which would not interact with the microwave background. But given that the analysis rests on just a handful of events, they say, much more work will be needed to close the case.

  9. RESEARCH MANAGEMENT

    New Law Could Open Up Lab Books

    1. Jocelyn Kaiser

    Tucked into last month's giant spending bill is an unwelcome message to academic researchers: Their data may be fair game for anyone who asks.

    A few words in the section funding the White House Office of Management and Budget (OMB) would extend the federal Freedom of Information Act (FOIA)—a 1966 law to make government more accountable to the public—to extramural grants. That opens the possibility that scientists at universities, hospitals, or nonprofit organizations might have to turn over the data on their computer disks, or even their lab notebooks, in response to a request to the agency that funded their work. “We're all very troubled,” says Wendy Baldwin, deputy director for extramural research at the National Institutes of Health.

    The language, inserted by Senator Richard Shelby (R-AL), says OMB must revise its rules for administering federally funded research grants “to require Federal awarding agencies to ensure that all data produced under an award will be made available to the public through the procedures established under the Freedom of Information Act.” Private parties requesting the data may be charged “a reasonable user fee.” At present, only funding agencies themselves can ask grantees for data. The new language implies that federally funded researchers must turn over their data to anyone who files a FOIA request. “The taxpayers have a right to much of this information,” says Shelby.

    The roots of the provision go back to last year's controversy over new Environmental Protection Agency air pollution rules for fine soot. Industry groups and some legislators demanded that university researchers hand over their data on the health effects of the pollution, leading to an unsuccessful legislative proposal requiring public data release (Science, 8 August 1997, p. 758). This year, a separate funding bill containing a request for OMB to study the issue was vetoed by President Clinton for unrelated reasons, leading Shelby to insert more direct language in the massive spending bill passed before Congress adjourned (Science, 23 October, p. 598).

    Some observers are outraged that this sweeping measure was passed with no hearings. “It is ironic that a provision described as a sunshine provision needed to be tucked into a 4000-page bill in the dead of night,” says Representative George Brown (D-CA), ranking Democrat on the House Science Committee. And some health researchers are worried that the directive will give industry a new tool to stall health regulations. “If past history is any indication, vested interests will misuse [this provision] to discredit valid research results they don't like and to harass the researchers doing the work,” says New York University environmental scientist George Thurston, whose studies helped form the basis for EPA's contested regulations.

    Others worry that raw data will be requested before it has been analyzed and peer reviewed. “It's important that we have processes in place for data sharing, but this basically opens the door to anyone's data without any filters,” Baldwin says. University researchers say that privacy and proprietary data might also be compromised.

    The question facing OMB now is how to implement the new requirement. Agency officials say they hope to be consulted in a process likely to take many months.

  10. ENVIRONMENT

    Acid Rain Control: Success on the Cheap

    1. Richard A. Kerr

    In the United States, a flexible, free-market approach has helped to curb acid rain at a bargain price. Could it work for greenhouse gases around the world?

    Back in the 1970s, sulfuric acid seemed to be consuming the environment. Spewed from power plant smokestacks, it rained or drifted down on lakes, streams, forests, buildings, and people in ever-increasing volumes, killing fish and trees, disfiguring stone buildings, and corroding the lungs of people.

    How low can you go?

    U.S. sulfur dioxide emissions from selected plants have already dropped below the levels required by law.

    CREDITS: SOURCE: A. E. SMITH ET AL., 1998, AND D. BURTRAW, 1998; PHOTO: UNIPHOTO

    But today, after 20 years of control, acid rain is a problem on the mend. In the United States, emissions of sulfur dioxide—the chief precursor of acid rain—are down by half. The nation is on track for another round of reductions beginning in 2000, and, with some significant exceptions, lakes and forests are on the road to recovery. Perhaps even more surprisingly, U.S. acid rain control has been a bargain: The latest cost estimates are about $1 billion per year—dramatically lower than earlier forecasts of $10 billion or more, and about half as much as even the lowest estimates.

    As negotiators gather this week in Buenos Aires to try to figure out how to cut greenhouse gas emissions (see sidebar), the story of U.S. acid rain control offers a case study in the successful regulation of a wide-ranging pollutant. Economists are still trying to understand just why control is proving so cheap, but they agree that at least partial credit must go to the unusually flexible U.S. regulations and their use of the free market. In the 1990 Clean Air Act Amendments, Congress told power plant operators how much to cut emissions but not how to do it, and established an emissions trading system in which power plants could buy and sell rights to pollute.

    It was “a radically different way to go about environmental regulation,” says economist A. Denny Ellerman of the Massachusetts Institute of Technology (MIT). “The lessons learned are pretty impressive.” The United States is now trying to spread those lessons worldwide. Indeed, in Europe, where acid rain reductions appear to be more expensive than in the United States, regulators are taking a close look at the U.S. model. A flexible system of emissions trading also serves as the crux of U.S. proposals for reining in greenhouse warming—although no one is sure whether such a system can be scaled up to work across many different countries. “We proved the concept,” says Joseph Kruger of the Environmental Protection Agency (EPA) in Washington, D.C. “If the acid rain program hadn't been such a success, we wouldn't be talking about trading greenhouse emissions.”

    A new flexibility

    The prospects for economical acid rain reductions by any means looked bleak in the 1980s, says Joseph Goffman of the Environmental Defense Fund in Washington, D.C. In the late '80s, when it was thought that sulfur dioxide emissions—then totaling 25 million tons a year—would have to be reduced by 10 million tons a year, he recalls, estimates of the cost were running from many hundreds of dollars to $1000 for every ton shaved off the total, or a cool $10 billion a year. Those high prices were based on complying with the standard type of “command and control” emissions regulations, in which regulators made all the decisions. In the 1977 Clean Air Act, for example, regulators decided on a control technology—a “scrubber” that strips the sulfur dioxide from the spent combustion gases before they go up the stack—and they also decided which plants needed scrubbers.

    Marked down.

    Pundits' predictions (pink) were way off, but even conservative estimates (blue) of acid rain costs have been dropping.

    CREDITS: SOURCE: R. SCHMALENSEE ET AL.; PHOTO: DAVID SCOTT SMITH

    Under a command-and-control scheme, “you've fixed the technology in place,” says Goffman. “You've eliminated innovation. We did this in the '70s and '80s because that was all we knew how to do. For a while it worked well,” until the easy, cheap reductions had been made. By the late 1980s, regulators had started to look for cheaper options.

    When Congress contemplated the next round of emissions cuts, the $10 billion price tag triggered sticker shock. Instead of instituting ever more draconian and expensive command-and-control regulations, Congress took a new tack in the 1990 Clean Air Act Amendments: It commanded reductions but let power plant operators figure out the cheapest way to control emissions. The reductions were to come in two steps. Starting in 1995, 110 mostly coal-burning plants out of thousands in the country—then emitting about 4 pounds of sulfur dioxide per million British thermal units (mBtu) of heat—would be cut back to only 2.5 pounds/mBtu. In Phase II, starting in 2000, more plants are to fall under the plan and emissions will be tightened to 1.2 pounds/mBtu. The total release expected in 2010 is 8.95 million tons per year, a reduction of 10 million tons per year from the amount projected to be released without controls.

    Congress made the rules even more flexible by authorizing a limited number of emission allowances, “right-to-pollute” coupons that could be bought, sold, or saved. Such trading with a cap on total releases means emitters are “strictly accountable for the end result,” says Kruger, “but they have flexibility in the way they get there.”

    Cost and effect

    But as the final Clean Air Act Amendments neared passage in 1990, just how much money the new rules would cost was a matter of sharp debate. At the high end, some lobbyists, columnists, and industry advertisements were touting vaguely documented figures of “$3 billion to $7 billion per year, with the price tag rising to $7 billion to $25 billion by the year 2000,” according to environmental policy analyst Don Munton of the University of British Columbia. The lower end of these estimates compares with the estimated cost of simply putting scrubbers on the 50 dirtiest plants. That was thought to cost $7.9 billion per year, according to a 1983 Office of Technology Assessment study, or $11.5 billion per year, according to an industry study (figures in 1995 dollars).

    More rigorous cost projections came in lower. These generally fell within the range of a 1990 study for the EPA made by ICF Inc. of Fairfax, Virginia, which found that the annual cost (in 1995 dollars) of meeting the 2010 goal could be as low as $1.9 billion or as high as $5.5 billion per year. But the lower figures were not widely believed at the time. When EPA testified to Congress just before passage that the annual cost in 2010 could be roughly $4 billion, notes Kruger, “we were roundly criticized for being overly optimistic.”

    It turns out that those figures weren't optimistic enough. Two groups of economists—Dallas Burtraw and colleagues at the Washington think tank Resources for the Future (RFF) and Anne Smith of Charles River Associates in Washington, D.C., Jeremy Platt of the Electric Power Research Institute (EPRI) in Palo Alto, California, and Ellerman—have recently compared those early analyses with actual costs. In 1996, after the first 2 years of the Phase I limits, emissions from participating power plants dropped to 5.4 million tons, 35% below the legal limit for those plants of 8.3 million tons. And it was done at a cost of about $0.8 billion per year, according to two independent estimates by Ellerman and by Curtis Carlson and colleagues at RFF.

    Phase I was expected to be cheaper than later reductions, but estimates of the long-term costs through 2010 have also been dropping (see graph, p. 1024). By 1995, ICF's estimate for the EPA had dropped to $2.5 billion per year. EPRI's 1997 estimate was down to $1.6 billion to $1.8 billion per year, and Carlson and colleagues' 1998 estimate is $1.0 billion—a far cry from many earlier estimates and below EPA's early projections.

    Why is acid rain reduction so economical, at least so far? Economists are still exploring the answer, but they agree that the biggest advantage was the overall flexibility of the program, which allowed power plants to exploit unexpected opportunities. The emissions trading system has been just one factor in this flexibility, these analysts conclude, but its impact is likely to grow in the years ahead as reductions become increasingly hard to achieve.

    The chief benefit of the trading system is that it puts free market forces to work, economists explain. “It's very much like [the way] a bank operates,” says Ellerman. Emitters have a checking account system, and the EPA limits the amount of “currency” in the system. Everyone is free to find the best buy in emissions reduction as long as they don't “overspend” their allowances. “You no longer have a bureaucratic nightmare” like that of command and control, he says.

    The allowance system broadens a power plant operator's options. An operator might install a scrubber—the cheapest available, as there are no regulations on types of scrubbers—or perhaps switch from a coal supply high in sulfur to a low-sulfur one, whatever option is cheaper per ton of emission reduction. Because allowances can be bought and sold, emissions can be cut wherever it's cheapest to do so—even at another company's plant in a different state. Each emitter just needs enough allowances to give to the EPA at the end of the year to cover the tons released. If a plant emits fewer tons than allowed, it can save leftover allowances for later.
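
    To make the bookkeeping concrete, here is a minimal Python sketch of the allowance accounting described above. It is an illustration only, with invented plant names and tonnages; the real program's rules (auctions, allowance vintages, penalties for shortfalls) are considerably richer.

    # Illustrative cap-and-trade bookkeeping: each plant must hold enough
    # allowances at year's end to cover the tons it emitted; spare
    # allowances can be sold to another plant or banked for later years.
    class Plant:
        def __init__(self, name: str, allowances: int):
            self.name = name
            self.allowances = allowances  # tons of SO2 the plant may emit
            self.emitted = 0

        def emit(self, tons: int) -> None:
            self.emitted += tons

        def sell_to(self, buyer: "Plant", tons: int) -> None:
            # Allowances only move between accounts; none are created,
            # so the overall cap on total emissions is unchanged.
            assert self.allowances - self.emitted >= tons, "no surplus to sell"
            self.allowances -= tons
            buyer.allowances += tons

        def settle(self) -> int:
            # Surrender one allowance per ton emitted; bank whatever is left.
            banked = self.allowances - self.emitted
            assert banked >= 0, f"{self.name} exceeded its allowances"
            return banked

    # A plant that cut emissions cheaply (say, by switching to low-sulfur coal)
    # sells its surplus to one where reductions would cost more.
    switcher = Plant("fuel switcher", allowances=100_000)
    older = Plant("older coal plant", allowances=100_000)
    switcher.emit(60_000)
    older.emit(130_000)
    switcher.sell_to(older, 30_000)
    print(switcher.settle(), older.settle())  # 10000 banked, 0 banked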

    Trading has saved money, reducing costs by perhaps 30%, according to Burtraw and others, but it's by no means an ideal system. “The trading program has worked well, but I wouldn't say it has worked perfectly,” says Burtraw. Although increasing, trading has been light and largely limited to swaps between plants within the same company, perhaps because state regulatory commissions new to the system didn't steer utilities to the lowest cost option allowed by outside trading.

    For the most part, economists suspect that the trading system hasn't come up to speed because operators have had a choice of other unexpectedly inexpensive options. For one thing, “scrubbers turned out to be a lot cheaper than people thought,” says Ellerman. New instrumentation and controls reduced staffing requirements, and units fitted with scrubbers are in use more often than expected, report economist Richard Schmalensee and his MIT colleagues. And although relatively few trades actually occur, the trading system reduces overall scrubber costs by doing away with the need for backup scrubbers: If a scrubber goes down, plant operators can buy allowances to cover the added emissions. Overall, Schmalensee found, the cost of scrubbing in Phase I has been 40% lower than estimated in 1990.

    Sweeter rain.

    Thanks to controls, precipitation in much of the East is less acid, as seen in the percent change from 1983 to 1994. SOURCE: JAMES A. LYNCH/PENN STATE UNIVERSITY

    More unexpected savings came from fuel switching. Much of the Appalachian and Midwestern coal that fed the plants of the Ohio Valley—the biggest source of sulfur dioxide in the country—had a sulfur content of several percent. By switching to coal containing 1% sulfur or mixing low- and high-sulfur coal, plant operators could avoid scrubbers. In 1990, most observers believed that fuel switching would be limited. They expected that burning fuel with 1% sulfur would damage hardware—a prediction not borne out by experience—and that the price of low-sulfur coal would rise once the Clean Air Act upped demand.

    That hasn't happened yet, thanks to developments that, at least initially, were unrelated to acid rain control. Low-cost, low-sulfur fuel had been available in the West for some time, notably in the Powder River Basin of Wyoming; the expense in the 1970s was in getting the coal to the East, where the big markets were. Then the Staggers Act of 1980 largely deregulated railroads. Coal transportation costs have fallen 35% since the 1980s, notes Burtraw. By 1990, the amount of low-sulfur coal burned had doubled, but the implications for acid rain control were underappreciated by most policy-makers and power plant operators, says Munton. In fact, Smith and her colleagues say, because plant operators shied away from the unknowns of the fuel-switching option in favor of more familiar scrubbing, Phase I reductions cost significantly more than they had to.

    Although these external factors rather than trading have apparently dominated savings in Phase I of acid rain control, most observers credit the innovative flexibility of the Clean Air Act Amendments with letting this mix of solutions develop. Once Congress gave plant operators complete freedom to cut emissions, “all the compliance vendors—low-sulfur fuel suppliers, scrubber manufacturers, and natural gas producers, for example—had to compete very hard to win,” says Goffman. “The more choice you give to more people, the better the outcome.” Burtraw agrees that “a big thing about the trading program is the flexibility that allows firms to take advantage of changes in prices and technology.”

    How that flexibility will work out in Phase II remains to be seen, as there are new uncertainties in the offing. EPA is considering restrictions on fine atmospheric pollutant particles, some of which form from sulfur dioxide. Reduction of greenhouse gas emissions could also shrink sulfur emissions, if the United States adopts energy-conservation or fuel-switching measures. But the prospect of such unknowns has the power industry worried, says Platt.

    Meanwhile, Europeans have also been successful in reducing their sulfur dioxide emissions, halving them between 1980 and 1993, says EPA international liaison Rhona Birnbaum. But they have not fully embraced trading. Instead European countries have adopted diverse approaches ranging from limited trading to pure command-and-control regulations. No one has calculated the costs of this mixed approach to date, but estimates for Europe's ambitious 2010 goals—to cut sulfur emissions damaging sensitive ecosystems by 60%—are quite high: about $1100 per ton of sulfur dioxide, according to Mary Saether of the European Union in Brussels. As a result, Europeans are showing increasing interest in American-style allowance trading. “People come to the United States and want to know how this works and how it is generalizable,” says Burtraw.

    He notes that the United States succeeded in making the concepts of trading and flexibility hallmarks of the Kyoto agreement to reduce greenhouse gas emissions. And some of the solutions might be similar to those used in the acid rain case: Just as producers switched to low-sulfur coal, they might switch to natural gas, which produces less warming per unit of energy. Technology and efficiency improvements, particularly in developing nations, might be a relatively cost-effective way to reduce greenhouse gas emissions.

    But the parallels are not perfect, Ellerman cautions. For starters, it's not clear that a trading system will work with a half-dozen greenhouse gases, where trades among different industries and across the world would be required. And a key factor in the greenhouse case is the stringency of the emission cap—the final figure of allowable emissions. If it's too low, flexibility is reduced along with the price competition it encourages. As Ellerman and his colleagues have written, emissions trading “is not a panacea that inevitably makes costs of emissions control simply disappear into thin air.” But for reining in pollution without choking industry, it looks like a good place to start.

  11. ENVIRONMENT

    Pollution Permits for Greenhouse Gases?

    1. Jocelyn Kaiser

    This week, as delegates from some 180 countries gather in Buenos Aires to figure out how to reduce greenhouse gases, they will spend much of their time pondering a strategy developed to keep down the costs of acid rain controls in the United States: trading emissions coupons in a free market (see main text). Although the notion of selling permits to pollute may seem odd, its success in reducing acid rain led the Clinton Administration to press for these so-called “flexibility mechanisms.” The Administration estimates that if the market operated perfectly, such trading could save around 90% of the cost of cutting greenhouse gas emissions, bringing the price down to between $14 and $23 per ton of carbon. Without trading, cutting emissions to comply with last year's Kyoto Protocol could cost the United States $54 billion to $60 billion a year, the White House says.
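
    Taken together, those figures imply—as a rough back-of-envelope reading, not an Administration estimate—that a 90% saving on the no-trading cost would leave annual U.S. compliance costs of roughly

    $$(1-0.90)\times \$54\text{--}\$60\ \text{billion} \approx \$5.4\text{--}\$6\ \text{billion per year}.$$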

    But despite high American hopes, experts warn that setting up an international carbon trading program will be a delicate and difficult task. “It's not a trivial extrapolation from sulfur dioxide trading. This is a tremendously difficult challenge,” says Harvard University economist Robert Stavins. “There's a possibility of doing it right, but if it's done wrong, it won't save nearly what's been predicted.”

    The basic idea is that each country would have a sort of “checking account” of greenhouse gas emissions allowances, set as a percentage of how much it emitted in 1990. The protocol says countries can sell allowances if they have more than they need, or they can earn credits by helping reduce emissions in other countries. The United States, for example, might simply buy allowances from Russia, or it could take on a project such as upgrading coal-fired power plants in Russia in exchange for some of the Russian emissions allowances.

    Keeping the accounts straight may be tricky, however. Monitoring U.S. sulfur emissions required fitting only about 110 power plants with sulfur dioxide monitors, Stavins notes. But with greenhouse gas emissions, there are “millions” of sources in more than 100 independent countries; there are at least a half-dozen important greenhouse gases; and making sure a project really results in lower emissions may be a tremendous challenge, he says.

    A more political issue likely to be on the table in Buenos Aires is whether countries must make domestic cuts before they can swap permits internationally. That argument is “partly moral, partly practical,” says John Lanchbery, a policy officer at the Royal Society for the Protection of Birds in the United Kingdom, as some observers believe countries won't cut at home if they can just buy their way out abroad. Some countries also argue that there should be limits on trading by Russia and Ukraine, which will get a big break because permits are set to 1990 levels, before those countries' economies—and fossil fuel use—plummeted. But “if you wanted to get the most out of trading, you would have no cap at all,” Lanchbery says. A faction known as the Umbrella Group, which includes the United States, Japan, Russia, and other nations, opposes caps, while a bloc led by the European Union favors them.

    Then there's the Clean Development Mechanism (CDM), a complicated and controversial plan to help curb emissions in developing countries. This is an effort to plug what many see as a big gap in the Kyoto Protocol: At present it doesn't set any emissions caps for developing countries. Under the CDM scheme, developed countries could earn credits by setting up emissions-reduction projects in developing countries—converting an Indonesian coal-fired power plant to natural gas, for example. But it will be a challenge to prove that countries aren't earning credits for “projects” that would have happened anyway. “It becomes a much squishier story,” says Michael Toman of Resources for the Future, a think tank in Washington, D.C.

    Also to be worked out, says Annie Petsonk of the Environmental Defense Fund's Washington, D.C., office, is how to punish countries that don't meet their emissions targets. “We don't have the equivalent of the Seventh Fleet to hammer them into compliance,” she says. The proponents of flexibility mechanisms are leaning toward a system in which permits would lose some value if the selling country exceeds its targets.

    Beyond the permit question, other major issues at Buenos Aires are expected to be whether developing countries should commit to voluntary emissions reductions, and how to account for carbon dioxide “sinks,” such as replanted forests (Science, 24 July, p. 504). But no decisions on this issue will likely be made until the Intergovernmental Panel on Climate Change, the scientific group whose findings led to the treaty, issues a report on sinks in May 2000, says Alden Meyer, director of government relations at the Union of Concerned Scientists. And as with flexibility mechanisms, the main outcome of Buenos Aires will likely be to set up working groups to hammer out issues over the next few years, says Meyer, who concludes: “We don't expect the drama of Kyoto, but there should be progress forward.”

  12. CANCER RESEARCH

    A Surprising Function for the PTEN Tumor Suppressor

    1. Karen Hopkin*
    1. Karen Hopkin is a free-lance writer in Bethesda, Maryland.

    The PTEN protein apparently exerts its effects by removing a phosphate from a lipid in one of the cell's key growth control pathways

    Last year, cancer researchers welcomed the discovery of the PTEN gene with great enthusiasm. Not only was it a new tumor suppressor, one of the growing number of genes whose loss or inactivation contributes to cancer development, but it appeared to be quite an important one: PTEN mutations have been linked to a variety of common human cancers, including breast, prostate, and brain cancer (Science, 28 March 1997, p. 1876). And unlike some tumor suppressor genes whose functions were complete mysteries when they were first discovered—the two breast cancer genes are examples—PTEN's structure provided an intriguing clue to how the protein might suppress tumor cell growth.

    Putting the brakes on.

    PTEN may inhibit cell growth by removing a phosphate from PIP3, thereby blocking its growth-stimulatory and apoptosis-blocking effects.

    ILLUSTRATION: K. SUTLIFF

    The early reports suggested that PTEN might be a tyrosine phosphatase, an enzyme that strips off phosphate groups attached to tyrosine residues in other proteins. The idea made sense because several oncogenes, which can lead to cancer when inappropriately activated, work by attaching those phosphate groups in the first place, thereby revving up the signaling pathways that tell cells to divide. A protein phosphatase might then be expected to reverse those growth-stimulatory effects. Indeed, cancer researchers had long expected that one or more of the enzymes would prove to be tumor suppressors, but before PTEN's discovery, they had not found any that seemed to fit the bill. Now, a flurry of new papers is showing that researchers were only half right about how PTEN works.

    The enzyme is a phosphatase—but its target is apparently not a protein. Instead, it's a fatty molecule, or lipid, that's tucked into the cell membrane—a completely new kind of target, as far as tumor suppressors are concerned. “It's kind of ironic,” notes Ben Neel of Beth Israel Deaconess Medical Center and Harvard Medical School in Boston. “Many of us went into the protein tyrosine phosphatase field looking for tumor suppressors. We finally find a tumor suppressor that looks good—and it turns out to be a lipid phosphatase.”

    The target lipid, called phosphatidylinositol-3,4,5-trisphosphate—PIP3 for short—is a key component of one of the cell's major growth control pathways, acting both to stimulate cell growth and to block apoptosis, a form of cell suicide that can keep damaged cells from proliferating. By stripping away one of PIP3's three phosphates, it appears, PTEN reins in the growth pathways and allows cell suicide to proceed, keeping cell populations in check.

    Conversely, loss of PTEN during tumorigenesis presumably keeps the PIP3 pathway inappropriately activated, allowing the mutated cells to grow unchecked when they should die. “I think the results are fascinating,” says cancer gene expert Bert Vogelstein of Johns Hopkins University School of Medicine in Baltimore. “The new data on lipids dramatically change our perspective and should open up new vistas in the study of oncogenesis.”

    What's more, knowing that PTEN suppresses proliferation by interfering with the PIP3 pathway may aid the development of treatments for cancers in which PTEN is mutated. Such therapies might also control cancers in which the PIP3 pathway is overactive for other reasons. It might be possible, for example, to design drugs that work by blocking critical steps in the pathway.

    The first inkling that PTEN might be a lipid phosphatase came in work reported last spring by Jack Dixon, Tomohiko Maehama, and their colleagues at the University of Michigan, Ann Arbor. Because the structure of PTEN resembles that of known tyrosine phosphatases, researchers looking for its targets first concentrated on phosphorylated proteins. But Nick Tonks of Cold Spring Harbor Laboratory in New York, a pioneer in the phosphatase field, says that they “had trouble finding any [protein] substrate that made biological sense.”

    Instead, Tonks and his postdoc Mike Myers found that PTEN preferentially strips phosphate groups from synthetic peptides that carry an unusual number of negatively charged, highly phosphorylated amino acid residues. Such sequences don't occur naturally in any proteins known to be phosphorylated by tyrosine kinases. But the finding prompted both Tonks and Dixon to look at other negatively charged molecules found inside the cell, including phospholipids.

    The search paid off: In the 29 May issue of the Journal of Biological Chemistry, Dixon's team reported that, in test tube studies, purified PTEN can remove a specific phosphate group from PIP3. Further, when the researchers genetically engineered human cells to produce higher than normal amounts of PTEN, they found that intracellular PIP3 concentrations were lowered. This suggested that the lipid phosphatase activity might play a role in the body.

    The discovery immediately pointed to a way in which the enzyme might help control cell growth. PIP3 is well known as an internal messenger for certain cell-growth stimulators, such as insulin and epidermal growth factor. Binding of these molecules to their receptors on the cell membrane activates an enzyme that generates PIP3 by adding a third phosphate to the messenger's predecessor, PIP2. PIP3 in turn activates other kinases in the signaling pathway, including one called Akt, or protein kinase B (PKB). Together, these enzymes encourage cells to enter and progress through the cell division cycle and also keep them from careening into apoptosis. By removing the phosphate from PIP3, PTEN could block this pathway, turning off the growth signal.
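
    The logic of that pathway is simple enough to caricature in a few lines of code. The sketch below is only a toy truth-table of the signaling described above, not a biochemical model; the names follow the article (PIP2, PIP3, PKB, PTEN), and all quantitative detail is omitted.

    # Toy model of the growth pathway: a growth signal leads a kinase to
    # convert PIP2 to PIP3; PIP3 activates PKB (Akt), which drives
    # proliferation and blocks apoptosis; PTEN removes PIP3's phosphate,
    # shutting the signal off.
    def cell_state(growth_signal: bool, pten_active: bool) -> dict:
        pip3 = growth_signal and not pten_active  # PTEN dephosphorylates PIP3
        pkb_active = pip3                         # PIP3 activates PKB/Akt
        return {"proliferation": pkb_active, "apoptosis_allowed": not pkb_active}

    print(cell_state(growth_signal=True, pten_active=True))
    # {'proliferation': False, 'apoptosis_allowed': True}   normal cell
    print(cell_state(growth_signal=True, pten_active=False))
    # {'proliferation': True, 'apoptosis_allowed': False}   PTEN lost, as in tumors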

    The Dixon team did not show directly that PTEN's lipid phosphatase activity is what makes the enzyme a tumor suppressor, however. More recent work has filled that gap. One clue came from the mutant PTEN gene present in a few families with Cowden disease, a rare hereditary disorder whose victims are unusually susceptible to tumors.

    Tonks and his colleagues introduced either the normal PTEN gene or the Cowden mutant gene into cultured cells derived from a glioma, a type of malignant brain tumor whose uncontrolled growth may be at least partly due to PTEN gene inactivation. In a paper in press in the Proceedings of the National Academy of Sciences, the researchers report that the normal gene inhibits the growth of the cells. But the gene with the Cowden mutation has lost its ability to prevent the cells from proliferating.

    What's special about this mutant PTEN, says Tonks, is that it lacks lipid phosphatase activity, but its protein phosphatase activity remains intact. In a separate paper to appear in the 15 November issue of Cancer Research, Webster Cavenee of the Ludwig Institute for Cancer Research at the University of California (UC), San Diego, and his postdoc Frank Furnari present similar results. They, too, have looked at a handful of PTEN mutants, and in a test tube assay have found that every mutation that renders the protein useless as a tumor suppressor eliminates its lipid phosphatase activity. These observations suggest that the lipid phosphatase activity is essential for PTEN's ability to suppress cell growth—a conclusion buttressed by studies of mice in which the PTEN gene has been inactivated.

    In league with cell death.

    Cells with one PTEN gene (top) show typical signs of apoptosis in response to tumor necrosis factor α (TNF-α) and sorbitol, but these agents have no effect on cells lacking a functional PTEN gene (bottom). SOURCE: STAMBOLIC ET AL., CELL 92, 29 (1998)

    Two groups have produced such knockout mice: One, led by Pier Paolo Pandolfi of Memorial Sloan-Kettering Cancer Center in New York City, published its results in the August issue of Nature Genetics and the other, led by Tak Mak of the University of Toronto, describes its mouse in the 22 October issue of Current Biology.

    Both teams find that mice that can't make any functional PTEN die before birth and that animals that have only one good copy of the gene are more likely than control mice to develop cancers. Mak and his colleagues further report that the tumors, in this case thymic lymphomas, show precisely the abnormalities in the PIP3 signaling pathway that would be expected if PTEN works as postulated. They found, for example, that tumors from these mice show elevated concentrations of phosphorylated, that is, activated, PKB compared to normal thymus tissue. “The difference was night and day,” says Mak. “Phosphorylated PKB is just sky high in thymus tumors.”

    The same high concentrations of activated PKB—as well as increased amounts of PIP3 itself—show up in embryonic fibroblast cells from the mutant mice, Mak's group reported in the 2 October issue of Cell. The researchers also showed that these cells are resistant to cell death induced by such standard apoptosis triggers as ultraviolet radiation, tumor necrosis factor α, or heat shock—a change that might make them more prone to becoming cancerous. Conversely, engineering the mutant fibroblasts to express the normal PTEN gene restores their sensitivity to the apoptotic signals.

    Mak's mouse embryonic fibroblasts provide a “nice, clean system” for studying the function of PTEN, notes David Stokoe of UC San Francisco. But he adds, “at the end of the day, the reason people are studying PTEN is because of its role in cancer,” which is why a handful of researchers, including Stokoe, are studying PTEN in human tumor cells. In the 22 October issue of Current Biology, Stokoe and his colleagues report that cultured glioma cells have elevated concentrations of PIP3 and activated PKB, just like Mak's fibroblasts. And in September at the CaP CURE conference in Lake Tahoe, Nevada, Charles Sawyers of UC Los Angeles School of Medicine described similar results with prostate cancer cells. “It's a perfect correlation,” he says. “In tumor cells that lose PTEN, the PKB pathway is superactive.”

    Further, researchers have found that adding a normal PTEN gene to prostate, glioma, or breast cancer cells reverses these biochemical changes. The teams doing this work include those of Sawyers, Stokoe, and Tonks, as well as those of PTEN's co-discoverers, Peter Steck of the University of Texas M. D. Anderson Cancer Center in Houston and Ramon Parsons of New York City's Columbia University.

    It isn't clear, however, whether PTEN normally suppresses tumors by keeping cells responsive to apoptosis signals or by helping to inhibit cell division. Mak's work with fibroblast cells from the PTEN knockout mice suggests that the former may be the case, and a group led by Parsons has similar evidence from cancer cells. In an upcoming paper in Cancer Research, Parsons and his colleagues show that adding the normal gene for PTEN to several breast cancer cell lines causes the cells to undergo apoptosis. Other researchers found that this also happens in prostate cancer cells.

    What's more, in a paper scheduled to appear in the 1 December issue of Cancer Research, Steck and his colleagues report that glioma cells made to express wild-type PTEN are more susceptible to anoikis—a form of apoptosis initiated when an epithelial cell becomes detached from its extracellular matrix. Tumor cells lacking PTEN may thus be better suited to breaking away from the primary cancer and spreading to distant sites in the body.

    But in a seemingly contradictory finding, Cavenee and Furnari found that although adding the normal PTEN gene to glioma cells suppressed their growth, it did not do so by boosting their rate of death by apoptosis. Instead, the UCSD researchers found that the gene apparently suppresses the growth of glioma cells by inhibiting their progression through the cell cycle, especially when the cells are deprived of serum nutrients. That makes sense, reasons Cavenee, because PTEN mutations often arise in large, late-stage tumors, when cells may be competing to survive in a nutritionally deprived environment. The loss of PTEN would then help them continue to grow anyway.

    Although PTEN's link to PKB signaling could explain a lot about its biological behavior, it's obvious that plenty of questions remain. One major issue concerns whether PTEN's effects are entirely due to its lipid phosphatase activity. Researchers so far haven't had much luck in finding protein targets, but they haven't eliminated the possibility that such proteins might be lurking somewhere in the cell.

    “We haven't reached the end of the story,” predicts Pandolfi, who says he has received dozens of requests for his knockout mice from researchers who want to determine whether their “beloved kinases” are PTEN targets. And that's just fine with Cavenee. “I think we'll see lots of disparate results before we understand what's going on,” he says. “To me that makes studying PTEN really interesting.”

  13. NEURODEGENERATIVE DISEASES

    Alzheimer's Treatments That Work Now

    1. Marcia Barinaga

    Behavioral interventions developed by social scientists can ease the pain of Alzheimer's disease for both patients and caregivers

    Alzheimer's disease is a ruthless decaying of the mind, devastating to those afflicted and to family members who witness their decline. Within the past few years, researchers have made some progress on treatments that might delay the relentless neurodegeneration, but prevention or cure is still out of reach. Millions of people suffer from the disease, and half a million of those in the final stages languish in U.S. nursing homes, incontinent, their bodies frozen by a severe stiffening called contractures, unable to speak or even recognize family members.

    While neuroscientists and geneticists search for a way to turn back the clock on the ravages of Alzheimer's, another avenue of research—behavioral research conducted by psychologists, social workers, and nurses—is already providing therapies to relieve some of the suffering of the patients and their caregivers. Such behavioral therapies are far from a cure, and they may not even arrest the underlying disease process. Nevertheless, they represent “an area that cannot be ignored, because we can have such a quick, practical impact on so many people,” says Zaven Khachaturian, a former associate director of the Neuroscience and Neuropsychology of Aging Program at the National Institute on Aging who is currently with Khachaturian, Radebaugh, and Associates, an international consulting group on Alzheimer's disease in Potomac, Maryland. Caregivers as well as patients stand to benefit, he notes (see sidebar).

    Over the years, Alzheimer's experts have learned that every patient goes through a predictable decline, from forgetfulness at the early stages to an inability to speak and walk as the disease runs its course. Research suggests that patients may lose some abilities faster than necessary because their caregivers underestimate what they can still do for themselves. This is leading to a “use it or lose it” approach to Alzheimer's, in which researchers gauge what patients can still be expected to do and then help them retain those skills. Studies have shown, for example, that behavioral therapy can slow or temporarily halt patients' loss of urinary continence and of their abilities to dress themselves and communicate their needs.

    Research done in the past decade also shows that behavioral strategies can reduce many disruptive behaviors common in Alzheimer's patients, such as screaming, wandering, or hitting. In the past, institutions have tried to control such problems by giving the patients antipsychotic drugs or physically restraining them—measures that can cloud the patients' minds even further or increase their agitation. The behavioral approaches instead seek to find the causes of the troubling behaviors and avoid triggering them. “What all this comes to is a new science of Alzheimer's management,” says one of the pioneers of the research, New York University (NYU) psychiatrist Barry Reisberg. The next major challenge is to disseminate what researchers are learning to families and community nursing homes outside the orbit of major research centers.

    Return to childhood

    Many of the recent advances in behavioral therapy arise from viewing Alzheimer's disease as a regression toward infancy. That idea is not new: Aristophanes and Shakespeare both compared old age to a second childhood. But Reisberg and his colleagues have recently established that the stages of Alzheimer's disease accurately mimic such a regression: Patients lose the ability to hold a job, handle finances, pick out clothes, dress and bathe, control their bladder and bowels, and speak, all in faithful reversal of the order in which those skills were acquired in childhood.

    As they make this backward march through development, Alzheimer's patients can be assigned “developmental ages.” Researchers have found that by providing training appropriate to those ages, they can help the patients retain longer some of the skills they would otherwise lose.

    For example, a simple method originally developed to toilet-train retarded children helps Alzheimer's patients maintain continence longer. In the late 1980s, Jack Schnelle of the University of California, Los Angeles, showed that the method called “prompted voiding,” in which aides visit patients every 2 to 3 hours to offer to take them to the restroom, helped some incontinent patients retain bladder control. The technique is different from merely taking the patient to the restroom on a schedule, says psychologist Louis Burgio of the University of Alabama, Tuscaloosa, who has studied prompted voiding. By asking whether the patient needs to go, he explains, “it tries to use what is left of the patient's self-knowledge, so you don't make them overly dependent on staff.”

    Cornelia Beck, a nursing researcher at the University of Arkansas for Medical Sciences in Little Rock, has shown that a similar approach works for another basic activity—dressing. She suspected that patients were losing skills such as dressing and feeding themselves because they were not encouraged to use them. So she set out to see if they could be retrained. Rather than dressing the patients, aides in her study would suggest that an arm goes into a sleeve, or touch the patient's arm or mimic putting their own arm into the sleeve, to encourage the patients to do it on their own.

    After 6 weeks, 50% of the patients improved their ability to dress themselves by 1 to 3 points on an 8-point scale ranging from helplessness to independence; 25% improved by 4 to 6 points. Patients who had been dependent on aides to dress them could now dress themselves, with guidance.

    Even communication can be improved with behavioral strategies. For example, some typical behaviors of Alzheimer's patients, such as repeated questions or nonsensical speech, appear to be failed efforts to communicate. Michelle Bourgeois, a speech pathologist at Florida State University in Tallahassee, developed a strategy to improve communication with Alzheimer's patients, using a “memory book.” This contained pictures of family members and nursing home aides and a schedule of daily activities, illustrated with a clock face showing the time and pictures of the activities.

    The aides spent time with the patients looking at the book and, when the patients asked repeated questions, gently referred them to the right page of the book for the answers. Use of the memory aid “results in less nonsensical vocalization and more appropriate types of conversations” between patients and nursing home staff, says Burgio, who collaborates with Bourgeois.

    An understanding approach

    In addition to looking at how to help patients retain skills, researchers are developing new ways to control problem behaviors. For example, NYU neurologist Emile Franssen and his wife and co-worker, nursing researcher Liduïn Souren, have found that some problem behaviors are the physical consequences of the disease itself.

    By studying 2400 Alzheimer's patients at various stages of the disease, Franssen identified infantile reflexes that appear in Alzheimer's patients as they decline and a muscle stiffness that he calls paratonia, which can eventually develop into crippling contractures. Both changes can cause problem behaviors. “If you move the limbs of a patient [briskly], paratonia increases,” says Franssen. “The caregiver may interpret that reaction as a willful resistance.”

    At least one of the reflexes—a strong grasping reflex—can also cause problems, for example when a caregiver tries to guide a patient out of a chair by the elbow and finds that the patient grabs the arms of the chair in an apparent refusal to get up. Patients may also reflexively grab a caregiver's hair or clothing. Franssen says that caregivers at nursing homes “often misinterpret [this behavior] as aggressive.” But it's not. “It is an inability to cooperate rather than an unwillingness,” says Franssen. Rather than struggling and upsetting the patient, the caregiver can release the grip by merely stroking the back of the patient's hand.

    Other troubling behaviors arise because Alzheimer's patients are agitated. They may scream, plead, pace, disrobe, rummage through people's possessions, hit, kick, or bite. “When I started [14 years ago], people treated agitation as either a psychotic sort of behavior or a nuisance that comes with dementia,” says psychologist Jiska Cohen-Mansfield of the Research Institute of the Hebrew Home of Greater Washington in Rockville, Maryland. “Their response was either psychotropic drugs, restraining, or ignoring. It made life pretty miserable for everybody.”

    Cohen-Mansfield suspected that the behaviors were driven by unmet needs. She had assistants watch patients around the clock, noting what triggered the behaviors. Patients tended to scream or moan, for example, when it was dark and they were alone. Thinking that this might reflect fear or loneliness, Cohen-Mansfield tried three interventions: Assistants would either stop by and visit with the patients at the problem time, play the patient a videotape of a family member talking to them, or play music they had once enjoyed. “It really made a difference,” she says. The patients responded to all three approaches; as a group, their screaming or moaning dropped by roughly half in response to one-on-one interactions or the videotape, and by one-third in response to music.

    In Cohen-Mansfield's study, the one-on-one interaction produced the biggest results, and in general researchers are finding that social interaction helps slow behavioral decline. Reisberg cites two striking cases: women with late-stage disease whose wealthy husbands have lavished professional attention and care on them. Normally people at their stage are bed-bound and withdrawn, but these women attend social events and appear to enjoy life. “They are doubly incontinent and say not a word, but they are happy,” he says.

    Back to school

    For families who can't afford costly private care, many parts of the country have day care centers for Alzheimer's patients, where they engage in developmental-age-appropriate activities, such as games or relearning daily living skills such as brushing their teeth. “The patients respond to the activities and the socialization,” says Reisberg. Medication can be lowered, the patients become less agitated, “and when they come back at the end of the day, they have a lot to say to their family. It's a lot like school.”

    With all the new behavioral interventions, there is one caveat researchers have learned: The success of a program depends absolutely on caregivers' diligence in carrying it out. Studies by Alabama's Burgio and UCLA's Schnelle have shown that nursing home staff members tend to drop new techniques unless they are continually urged to use them. For example, a follow-up of Bourgeois's memory book study found that once the researchers left, says Burgio, “the memory book use went down.”

    Burgio developed a program to combat this problem, adapted from motivational programs used in industry, which combines monitoring of the nursing home staff with incentives for good performance. In a carefully controlled trial, staff members who received his program consistently used the interventions they had learned for months, while those not in the program tended to drop the interventions when the training period ended. Burgio says he tells nursing homes that are interested in behavioral therapies, “if you aren't going to use a staff motivational program, don't even bother with the behavioral intervention, [because] it won't be used.”

    Indeed, the new behavioral methods face many hurdles. “It is a really long road,” says Teresa Radebaugh of Khachaturian, Radebaugh, and Associates, “to take something that is well tested, well described, carefully peer reviewed, and published, but done in a sophisticated setting … and get it out to a nursing home in a small town.” Burgio agrees: “A lot of people still believe [nursing homes] should be following a custodial model, not a treatment model. It will take another 10 years before people are really accepting of the treatment model.” But these researchers are committed to spreading their word and making life a bit easier for Alzheimer's sufferers and their caregivers.

  14. NEURODEGENERATIVE DISEASES

    Caregivers Need Healing, Too

    1. Marcia Barinaga

    Many family members of Alzheimer's patients want to keep their loved ones out of nursing homes. But the burden on these caretakers is overwhelming: They have a higher than average risk of serious illness, and psychologist Linda Teri of the University of Washington, Seattle, found that 75% of family caregivers show signs of major depression. So some researchers are looking for ways to relieve the caregivers' burden.

    One easy way for family caregivers to get relief is to take the patient to a day care center a few times a week, says psychologist Steven Zarit of Pennsylvania State University in University Park. He and his colleagues reported in the September issue of the Journal of Gerontology: Social Sciences that caregivers who did so showed significantly less stress, anger, and depression than a control group. But, says Zarit, “studies of how many [family caregivers] use formal services of any kind show that the numbers are very low.” Social worker Lisa Gwyther of Duke University explains that Alzheimer's caregivers often put off using services even when they know about them. “They delay use longer, far beyond what a professional would recommend.”

    But when families are strongly encouraged to use services, the results can be dramatic. New York University epidemiologist Mary Mittelman enrolled 206 spouses who were caregivers for Alzheimer's patients into a study and gave half of them a package of support services, including training to manage behavioral problems, support groups, and counseling for the entire family. Those in the control group got help when they asked for it but were not in a structured program. Mittelman's results, published in The Journal of the American Medical Association in December 1996, showed that caregivers receiving the support package delayed placing their spouses in nursing homes by an average of a year, compared to the control group.

    Just learning to manage the patient's problem symptoms can give caregivers a big boost. That's where the behavioral strategies psychologists are now developing (see main text) can come in. Teri, for example, had caregivers recall activities that the patient used to enjoy and then adapt a few of them to the patient's present cognitive level. For example, a crossword puzzle lover may still be able to do children's crosswords. After 9 weeks of the program, the patients and caregivers both scored significantly better on depression scales. “Not only did the patients get better,” notes Teri, “but the caregivers got better too.”

  15. ASTRONOMY

    Meteor Shower Sets Off Scientific Storm

    1. Andrew Lawler*
    1. Andrew Lawler is a staff writer for Science currently on a Knight Science Journalism Fellowship at the Massachusetts Institute of Technology.

    The return of comet Tempel-Tuttle has triggered a debate over whether its debris poses a threat to the world's satellites

    Boston—During a cold and clear night in 1833, hundreds of people here rushed outdoors to watch falling stars so thick they looked like a light snowfall. This past February the comet responsible for that spectacular meteor storm—comet 55P/Tempel-Tuttle—swung around the sun after a 33-year absence from the inner solar system. And on 14 November, Earth will start plowing through a fresh stream of particles that boiled off the comet's surface and were left behind. But this time the show is of interest to more than just idle skywatchers: Its intensity is a matter of some debate among scientists and great concern to those who operate satellites that may encounter the comet's trail.

    The annual displays, called the Leonid meteor showers, were recorded over 1000 years ago by Chinese astronomers and are often most dazzling just after the return of Tempel-Tuttle. But unlike 33 years ago, today the space above the protective atmosphere is filled with satellites full of sensitive electronics that provide global communications, warn of missile launches, and gather research data.

    Just how seriously to take the danger from Tempel-Tuttle's dust is a matter of dispute between NASA, on the one hand, and the U.S. Air Force and Canadian astronomers on the other. “Could everyone lose their pagers and their television connections? All that is possible,” says Lt. Col. Don Jewel, deputy chief scientist for the U.S. Air Force Space Command. But some NASA researchers say the concerns are overblown. “People hear a storm is coming and there is all this excitement—far more than the threat deserves,” says Donald Yeomans, an astronomer at NASA's Jet Propulsion Laboratory in Pasadena, California.

    Part of the disagreement about risk reflects different interests. Whereas NASA researchers are eager to study the little-known composition of the comet, Air Force and Canadian officials are more concerned about its effect on the global satellite network. So each group is going its separate way in monitoring the 1998 Leonids.

    The comet has a path roughly intersecting Earth's orbit. Based on that trajectory, the best viewing site for the peak night of showers on 17 November will be in Asia. Teams of Air Force and Canadian astronomers will set up camp in the Gobi desert of Mongolia and the Australian outback to gather real-time data and alert satellite operators if the shower turns into a storm. Meanwhile, NASA will fly airborne observatories to observe sunlight reflected off meteoroids, the persistent meteor trains, and the neutral atom and particle debris.

    Most satellite owners, including NASA, aren't taking any chances, recalling the stray meteoroid that is believed to have knocked out the European Space Agency's (ESA's) research satellite Olympus in 1993. NASA will turn the Hubble Space Telescope's optics away from the stream of particles and ground its shuttle, while Mir cosmonauts likely will retreat to their escape capsule as the Leonids pass. Meanwhile, the Air Force has readied a 400-page classified plan that outlines steps to avoid meteoroids, which are the physical objects seen as meteors—shooting stars—from Earth. (Those that hit the ground become meteorites.) European and Japanese officials likewise are taking precautions. Some commercial satellite operators also plan to reorient their spacecraft away from the speeding particles.

    History and complex mathematical models are the only guides to predicting the density of the Leonids, which typically produce 15 to 20 visible meteors an hour for a few days. Most of those particles are tiny, and, because the comet debris and Earth meet nearly head on, these meteoroids hit the upper atmosphere about four times faster than most stray meteoroids. During the last Leonid storm, in 1966, the skies above the western United States were pelted for a few hours with an estimated 150,000 meteors per hour, which observers likened to the density of a steady rain.

    But predicting meteor rates is like guessing how many raindrops will fall during a downpour. The comet's exact trajectory and debris stream are hard to pinpoint, and the gravity of other planets such as Jupiter can pull the stream away from an intercept with Earth. A slight perturbation can mean the difference between a few dozen and thousands of meteors an hour.

    Estimates for this month's event, expected to peak for about 2 hours on the evening of 17 November, vary widely. Whereas NASA astronomers like Yeomans expect between 200 and 5000 meteors an hour, Peter Brown, an astronomer at the University of Western Ontario in London, Ontario, and a few others put the high end closer to 10,000. Still, that number is an order of magnitude lower than Brown and some colleagues were predicting a few years ago before they obtained new data on the comet's trajectory. And Brown says his models suggest that the 1999 shower could surpass this year's levels if Earth passes more directly through the stream the next time around.

    Both Yeomans and Brown admit such estimates are in part guesswork. “I don't think anyone has an inside track,” says Brown. “It is a bit of a crap shoot,” adds Yeomans. In 1899, for example, astronomers were ridiculed when the event turned out to be a dud after they had predicted a fantastic storm to rival that of 1866. Few people were watching when a storm did unexpectedly appear the following year. And historical data are sketchy—the lack of a 1933 record of a Leonid storm, for example, may have been due to cloudy skies above much of America and Europe.

    If making accurate predictions of the rate of meteors is hard, forecasting their effect on satellites whirling through orbit is harder still. But it is no mere academic matter: Billions of dollars in hardware and services, not to mention national security, are at stake. In 1966, there were only a handful of relatively primitive satellites; today there are about 650. All satellites, whatever their orbit, are at risk, as they orbit above the atmosphere, which burns up most comet debris.

    The high speed of the Leonids—about 72 kilometers per second—means that each particle can pack quite a wallop. But an actual hit could do less damage than the sudden magnetic field that the particles can generate, which could play havoc with the electronics aboard most spacecraft. ESA engineers believe that's what sank the Olympus spacecraft. Some U.S. military satellites are built to withstand debris or the effects of a nuclear explosion, but most research and commercial spacecraft tend to have largely unprotected systems.
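
    Kinetic energy grows with the square of velocity, so a Leonid moving about four times faster than a typical stray meteoroid delivers roughly sixteen times the energy per unit mass. For an assumed, purely illustrative 1-milligram grain traveling at 72 kilometers per second:

    $$E=\tfrac{1}{2}mv^{2}=\tfrac{1}{2}\,(10^{-6}\ \mathrm{kg})\,(7.2\times10^{4}\ \mathrm{m/s})^{2}\approx 2.6\ \mathrm{kJ}.$$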

    A joint press release by NASA and the U.S. Space Command on 5 October characterized the threat as “elevated but not serious,” although some government officials say the wording reflects an effort by NASA and the White House to minimize the threat. “They don't want to make a fuss,” says one government official involved in the discussions. NASA astronomers point out that the threat to human-made objects is still small—the greatest threat, a 1% to 5% probability of a hit, they say, is to the Advanced Composition Explorer perched at L1, a gravitational balance point between Earth and the sun—and that two NASA satellites in orbit during the 1966 storm survived without a scratch.

    Despite the press release's wording, the Air Force is putting up nearly half the $800,000 needed for the Mongolian and Australian expeditions, with Canada, Europe, and Japan providing the rest. The teams will use sophisticated radar to detect the mass of Leonids in all weather and optical telescopes to provide more detailed images. The data will be transmitted to the University of Western Ontario for analysis and then, within minutes, to a U.S. Air Force Web site (not accessible to the public to prevent overloading the system). If the shower strengthens, operators can move quickly to batten down their satellite hatches. “We want to be ready if [a storm] does happen,” says the Air Force's Jewel.

    NASA officials say that, although they support the data-collection effort, they are skeptical of the attempt at providing real-time warnings. “Going to Australia and Mongolia is a great way to enjoy the display, and getting real-time data is a noble goal,” says Nicholas Johnson, head of the orbital debris office at NASA's Johnson Space Center in Houston. “But the equipment has not been adequately tested.” Brown admits that the technology is new but says the system was tested successfully last month in Australia. Instead, NASA plans to fly airborne observatories over the eastern Pacific to analyze the composition of the meteoroids and their effect on the atmosphere. “It's important to keep the science of the Leonids in mind, too,” says the mission's principal investigator, Petrus Jenniskens of NASA's Ames Research Center in Mountain View, California.

    Most communications satellite operators have kept a low profile during the debate. “This is an environmental hazard we are prepared for,” says Ahmet Ozkul, a satellite operations specialist at Intelsat, a Washington-based international communications consortium. Although Intelsat will not buy real-time data, Ozkul says, it plans to rotate the sensitive solar panels of its spacecraft away from the showers without affecting operations.

    And where will Yeomans be on the night of 17 November? “I'm going up to the San Gabriel mountains,” he admits. With Jupiter likely to pull the comet away from Earth's orbit in its next two encounters, he says, “this is a once-in-a-lifetime opportunity.”
