News this Week

Science  03 Apr 1998:
Vol. 280, Issue 5360, pp. 32

    Death by Dozens of Cuts

    Marcia Barinaga


    Every cell contains the makings of its own demise: a set of proteins that can kill it from within. Once unleashed, these proteins direct other molecules to shatter the cell's nucleus and chop up its chromosomes, digest the internal skeleton that gives the cell its shape, and break up the cell itself into small fragments, ready for disposal.

    This self-immolation, called programmed cell death or apoptosis, is essential for the good of the organism. Cells must die, for instance, as tissues are sculpted during embryonic development or when they have been infected with harmful viruses. But either too much or not enough apoptosis can be catastrophic. Many cancers, for instance, are hard to kill because they fail to respond to apoptosis signals, while in neurodegenerative conditions such as Parkinson's disease, or following the oxygen starvation caused by stroke, excess apoptosis may kill off brain neurons. Now researchers are learning about the inner core of the death machinery—a family of protein-cleaving enzymes known as caspases—and are getting hints about how science might someday use this knowledge to intervene when apoptosis goes wrong (see sidebar).

    They have found, for example, that caspases act at two levels to mete out death. Initiator caspases pronounce the death sentence. They are activated in response to signals indicating that the cell has been stressed or damaged or has received an order to die. They clip and activate another family of caspases, the executioners, which go on to make selected cuts in key proteins that then dismantle the cell. Researchers have also identified proteins that keep the caspases in check except when they are needed—a necessity for enzymes that pack the destructive wallop that these do. “There has been exponential progress,” says apoptosis researcher John Reed, of the Burnham Institute in La Jolla, California. “The picture has begun to fill in.”

    The first hint that proteases, as protein-cleaving enzymes are called, play a central role in apoptosis came about 6 years ago from Robert Horvitz's lab at the Massachusetts Institute of Technology (MIT). Horvitz's team had found a mutation in the roundworm Caenorhabditis elegans that prevents normal cell death from occurring during embryonic development. The mutant gene, ced-3, turned out to encode a protease that is closely related to a mammalian enzyme called ICE, which until then was known only for its role in promoting inflammation.

    Causal chain. Cytochrome c released from mitochondria binds to Apaf-1, allowing it to bind and activate caspase-9. This triggers apoptosis, marked by stereotyped features such as the membrane blisters on these cells.


    Soon after, in 1994, Junying Yuan of Harvard showed that artificially activated ICE causes cell death in cultured mammalian cells. ICE itself is apparently not normally involved in apoptosis because it does not respond to natural cell-death signals, but Horvitz's and Yuan's findings touched off a search that turned up a whole family of ICE-related proteases, at least seven of which normally participate in apoptosis.

    These enzymes were named caspases (for cysteine-containing aspartate-specific proteases) because they all contain the amino acid cysteine in their active sites and clip their protein targets next to the amino acid aspartate. Among those targets are the caspases themselves, which start out as inactive proteins called zymogens that must be clipped to become fully active.
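    The cutting rule itself is simple enough to mimic in a few lines. The sketch below is purely illustrative (real caspases also recognize a short motif around the cut site, such as DEVD for caspase-3, not lone aspartates): it takes a one-letter peptide sequence and cuts after every aspartate (D).

```python
def cleave_after_asp(peptide: str) -> list[str]:
    """Toy model of caspase specificity: cut the chain after every
    aspartate (D). Real caspases also require a short recognition
    motif around the cut site, which this sketch ignores."""
    fragments, current = [], ""
    for residue in peptide:
        current += residue
        if residue == "D":  # aspartate: the cut site
            fragments.append(current)
            current = ""
    if current:
        fragments.append(current)
    return fragments

# A made-up substrate containing the classic DEVD motif:
print(cleave_after_asp("GGDEVDSK"))  # ['GGD', 'EVD', 'SK']
```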

    Recognition that the zymogens contain the exact same sequence of amino acids that the caspases are designed to snip suggested, says Reed, that “if you could get a caspase activated somehow, a cascade of proteolysis would ensue,” with active caspases cutting and activating more caspases. Investigators have accumulated evidence showing that the enzymes are indeed turned on in this way, in a sequence of cutting that begins with initiator caspases and ends with proteins that kill the cell.

    That left the question of how the first caspase gets turned on. It now appears that some initiator caspase zymogens can cut and activate each other, even though they have at most only one-fiftieth of the protein-splitting ability of the active enzymes. Reports within the past several months from Vishva Dixit's team at Genentech Inc. in South San Francisco, working with Guy Salvesen at the Burnham Institute, David Baltimore's group at MIT, and Donald Nicholson's at Merck Frosst Canada Inc. in Pointe Claire-Dorval, Quebec, have all shown that molecules of some caspase zymogens brought into close contact in the test tube become activated.
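    Why such feeble precursors can nonetheless ignite a full-blown cascade can be seen in a minimal kinetic sketch. All numbers below are invented for illustration (this is not a quantitative model of apoptosis): a few clustered zymogens with roughly 2% activity seed the reaction, each newly activated caspase activates more, and total activity grows explosively until the downstream pool is exhausted.

```python
# Minimal caspase-cascade sketch; all rate constants are invented.
ZYMOGEN_ACTIVITY = 0.02  # zymogens carry ~1/50 the activity of mature enzyme
K_CUT = 0.5              # per-step activation rate per unit of active enzyme

def cascade(steps: int, clustered_zymogens: int, pool: int) -> float:
    """Fraction of a downstream caspase pool activated after `steps`,
    starting from a few receptor-clustered zymogens."""
    active = clustered_zymogens * ZYMOGEN_ACTIVITY  # weak initial activity
    for _ in range(steps):
        # each unit of activity converts part of the remaining pool
        newly = K_CUT * active * (1 - active / pool)
        active = min(pool, active + newly)
    return active / pool

# Three clustered zymogens suffice; with none, nothing ever happens.
print(cascade(steps=30, clustered_zymogens=3, pool=1000))
print(cascade(steps=30, clustered_zymogens=0, pool=1000))  # 0.0
```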

    The finding could explain how Fas, a cell surface receptor that causes cells to commit suicide when it receives the right signals from the immune system, does its work. Three years ago, Dixit's team, then at the University of Michigan Medical School in Ann Arbor, and David Wallach's team at the Weizmann Institute of Science in Rehovot, Israel, both found that Fas can draw caspase-8 zymogen molecules together. It now seems that this proximity allows the caspase precursors to activate one another.

    But apoptosis isn't always triggered by receptors receiving specific signals from outside the cell. During development, for example, cells commit suicide if they are deprived of growth factors. Cells mortally damaged by radiation, oxidizing agents, or other harmful chemicals often undergo apoptosis rather than dying a less controlled death that is potentially more damaging to the organism. And some cells kill themselves when they just sense that things aren't going right, such as when control of the cell-division cycle has gone haywire. Such a diversity of death triggers means drawing caspase zymogens to a receptor can't be the only way to activate them. Some additional ways are now coming to light.

    The mitochondrial connection

    In 1996, Xiaodong Wang and his colleagues at the University of Texas Southwestern Medical Center at Dallas made a surprising discovery about the identity of one caspase activator. Working with a “cell-free” apoptosis system—an extract prepared from a cultured human cell line—they found that a critical protein component of mitochondria, the organelles that produce most of the cell's energy, can activate one of the executioner caspases, caspase-3. The idea that this mitochondrial protein, cytochrome c, would moonlight as an apoptosis trigger was “totally unexpected,” says Wang, and too bizarre to be believable at first.

    But by late last year, Wang's team had a fix on just how cytochrome c triggers cell death. It binds to a protein they discovered called apoptotic protease activating factor-1 (Apaf-1). That binding allows Apaf-1 to link up with and somehow activate caspase-9, an initiator caspase which then activates caspase-3. Wang says he doesn't yet know how Apaf-1 turns on caspase-9, but his “favorite hypothesis” is that it acts as Fas appears to: by bringing together multiple caspase zymogens, which then activate each other.

    Cytochrome c makes sense as an apoptosis trigger, says cell death researcher Hermann Steller of MIT, given that various forms of cell damage can harm mitochondria, even causing their outer membranes to rupture. The release of cytochrome c under such conditions could serve as “a stress sensor,” says Steller, telling the cell it has received a fatal insult, and so “it is time to die.”

    Other death triggers may also discharge cytochrome c. Cell biologist Donald Newmeyer of the La Jolla Institute for Allergy and Immunology, Sally Kornbluth of Duke University Medical Center in Durham, North Carolina, and their colleagues have found that after caspase-8 is activated by the death receptor, Fas, it causes mitochondria to dump their cytochrome c. “Even though caspase-8 can activate other caspases directly,” says Newmeyer, “this cytochrome-c pathway is more efficient. It works at much lower levels of caspase-8.” Consequently, it can amplify a death signal that on its own is too weak to cause death.

    The executioners' targets

    Once the executioner caspases are activated, they start cutting other proteins. So far, researchers have identified more than two dozen of these caspase targets, but in most cases their link to cell death was unclear or at best indirect. Recently, however, researchers have found caspase targets whose cleavage clearly triggers specific, well-known steps in apoptosis.

    For example, one hallmark of apoptosis is the disintegration of the cell's nucleus: First the chromosomes are chopped up, and then the nucleus itself breaks into small pieces. Last April, Wang and his co-workers reported that they had found a protein from cultured human cells that, once cut by caspase-3, triggers chromosome breakup like that seen in apoptosis. The researchers also found that the uncut protein is ordinarily paired with another protein. But they didn't know the role of the other protein, or how the caspase action leads to the breakup of the DNA.

    Then, early this year, Shigekazu Nagata and his colleagues at the Osaka University Medical School in Japan provided an answer. They found the same protein pair in mouse cells and showed that the caspase target's partner is a DNA-cleaving enzyme known as an endonuclease. When the caspase target is clipped, it frees the endonuclease to enter the nucleus and start chopping DNA. “That is a pretty amazing story,” says the Burnham Institute's Reed. “This is the first time you can link an endonuclease directly to the caspases.”

    Another key element in apoptosis—changes in the cell's outer membrane—has been linked recently to two caspase targets. Cells dying by apoptosis go through predictable membrane changes: They lose their normal shape, becoming more rounded and forming blisterlike bumps on their surfaces. Last year, Lewis Williams of Chiron Corp. in Emeryville, California, David Kwiatkowski of Harvard Medical School in Boston, and their colleagues found that caspase-3 cuts gelsolin, a protein that normally binds to the actin filaments that help give a cell its shape. One of the gelsolin pieces then severs actin filaments, says Williams, an effect the team confirmed by injecting caspase-cut gelsolin fragments into cells. Cells treated this way round up and “look like they are undergoing apoptosis,” Williams says.

    After losing their shape, cells dying by apoptosis break into membrane-encased “apoptotic bodies” that are gobbled up by roving scavenger cells. Last spring, Gary Bokoch and Thomas Rudel of The Scripps Research Institute in La Jolla identified one caspase target that is necessary for formation of the bodies: an enzyme called p21-activated kinase-2 (PAK2) that regulates the activity of other proteins by adding phosphate groups to them.

    Bokoch and Rudel found that caspase-3 cuts and activates the enzyme. Then, by engineering cells to make a form of PAK2 that can't be activated, they confirmed the crucial role of the enzyme in the dismantling of the cell. “Even though [the engineered cells] still underwent DNA fragmentation” in response to appropriate apoptotic stimuli, Bokoch says, “they didn't fragment into apoptotic bodies.” PAK2 helps regulate the cell's internal skeleton, but just how it causes dying cells to break up is not known.

    Controlling the death machine

    Given the power of caspases to direct the dismantling of cells, cells need to keep these enzymes in check at times when it's not appropriate to die. Two proteins that seem to aid this cause in multiple ways are Bcl-2 and Bcl-x, which, among other things, block the release of cytochrome c from mitochondria. How they do this isn't entirely clear, but they may act by countering ionic imbalances that can make mitochondria burst. The Bcl proteins seem to have other ways to foil cell death—for example, they may directly bind Apaf-1 to prevent caspase activation—and their amounts in a cell help determine how vulnerable it is to apoptosis. Indeed, excess Bcl-2 can help convert normal cells into cancer cells that refuse to die upon receiving death commands, and strategies for blocking Bcl-2 are being explored as possible cancer therapies.

    Another set of proteins called inhibitors of apoptosis (IAPs) seems to apply the brakes to apoptosis by inhibiting caspases directly. IAPs were discovered by Lois Miller's team at the University of Georgia, Athens, in 1993. The researchers found that viruses deploy these proteins to keep host cells alive while the viruses replicate and spread. In 1994, Alex MacKenzie's group at the University of Ottawa found the first cellular IAP, a protein that inhibits apoptosis in nerve cells. Since then, researchers have found five more IAPs in mammalian cells alone.

    Several teams, including MacKenzie's, Reed's, and Nicholson's at Merck Frosst Canada Inc., have evidence that IAPs bind to caspases and block their activity. But this may be too simple a picture; like the Bcl proteins, IAPs may work in more than one way. Miller's team found that IAPs apparently can arrest apoptosis before the caspases are involved. “There is no point in jumping to conclusions yet,” says Miller. “It is a complex story.”

    Indeed, the same might be said about virtually all aspects of caspase function and control. Researchers wonder, for example, what other proteins besides cytochrome c may trigger caspase activation. They puzzle over how the cell maintains its checks on the enzymes—then turns off these safeguards when it is time for the cell to die. And of course they are looking for more levers like Bcl-2 that will enable them to manipulate apoptosis for disease treatments. So although the learning curve has been steep over the past year, it looks like researchers in this field won't run out of big questions anytime soon.



    Caspase Work Points to Possible New Therapies

    Marcia Barinaga

    Death is a part of life, and in the life of any healthy organism, some cells are marked to die by programmed cell death, known as apoptosis. But sometimes cells refuse the order to die, resulting in cancer. Other cells die when they shouldn't, causing neurodegenerative problems such as Parkinson's disease or contributing to the brain damage of stroke. Now, researchers hope that what they are learning about inhibitors of the protein-splitting enzymes called caspases, at the core of the cell's death machinery, will lead to better therapies for some of these conditions.

    In the case of cancer, their goal is reactivating the responses to stress and other signals that normally trigger cell death. Last year, cancer biologist Dario Altieri and his colleagues at Yale Medical School discovered one reason why cancer cells are deaf to these signals. They found that tumor cells, but not normal adult cells, contain high levels of a protein they call survivin, which blocks apoptosis. “We haven't found one tumor that is negative for survivin,” says Altieri.

    The researchers do not yet know how survivin works, but it may inhibit the caspases, as its structure reveals that it is related to the IAPs, a group of proteins known to block caspase action. But whatever survivin does, Altieri says, “the implications are enormous in terms of susceptibility to therapy.” Radiation and chemotherapy are both designed to stress the cell in ways that would normally induce apoptosis. If researchers can find a way to inactivate survivin, cancer cells might be made more susceptible to such treatments.

    Saving cells from apoptosis, in contrast, could limit the damage in neurodegenerative diseases and stroke. Until recently, some researchers worried that blocking apoptosis could make matters worse if the injury that triggered the apoptosis in the first place—for example, the lack of oxygen caused by a stroke—had irreversibly damaged the cell. Cells undergoing apoptosis don't just fall apart but succumb in a neat and tidy way, breaking apart into self-contained packages to be scarfed up by patrolling scavenger cells. If apoptosis were blocked, damaged cells might be left with no alternative but to die in a way that spills the cell's contents, triggering harmful inflammation and spreading the damage even further.

    “That is something the field has been wondering about over the past year or so,” says Donald Nicholson, of Merck Frosst Canada Inc. in Pointe Claire-Dorval, Quebec. But some recent experiments, he says, provide hope that when apoptosis is blocked, cells can sometimes recover to lead healthy lives.

    A report last September by George Robertson at the University of Ottawa and his colleagues supported this idea by showing that mice whose brains were engineered to make extra IAP proteins lost fewer neurons when they had strokes. This suggests that the IAPs protected some neurons from apoptosis, and that those neurons were able to overcome the shock of the stroke. Robertson says his group is now testing the surviving neurons to make sure they can still function normally.

    Hermann Steller's team at the Massachusetts Institute of Technology has shown something similar: They started with fruit flies carrying a fly version of retinitis pigmentosa (RP), a leading cause of blindness in humans. In both humans and flies, this form of RP is caused by a mutation in the light-sensitive protein rhodopsin. Over time, the alteration stresses the retina's photoreceptor cells, triggering apoptosis. But when Steller's team engineered some of these flies to make a potent caspase inhibitor called p35, the retinal neurons survived and continued to function normally.

    Steller says his and Robertson's experiments show that “in some degenerative conditions, the cell is a little too wimpy, a little too sensitive. It perceives a problem and dies too readily.” In those cases, says Steller, “if you block apoptosis, you get a permanent, long-term rescue.”


    Physicists Find the Last of the Mesons

    David Kestenbaum

    Physicists say they have finally unearthed the last of a set of subatomic particles called mesons. The discovery of the Bc meson, announced on 5 March at the Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, places the capstone on a pile of discoveries going back 50 years and inks the final entry in the “periodic table” for these particles. Physicists expect that the new meson's mass and lifetime will also sharpen their understanding of the force binding atomic nuclei.

    Buried inside the protons and neutrons that form the nucleus are building blocks called quarks. A proton, for instance, harbors three quarks—two “up” and one “down”—bound by the so-called strong force. The same force can also hitch a quark to an antiquark, making a short-lived meson.

    Over the years, physicists studying the debris from particle accelerators have found 14 of the 15 possible mesons formed by different combinations of five kinds of quarks. (A sixth quark, the celebrated “top” discovered 4 years ago at Fermilab, is so massive and unstable that it decays before it can pair up and form a meson.) But no experiment had produced enough Bc mesons (which contain a “bottom” quark and a “charm” quark) for them to be observed. Because these quarks are relatively heavy, they are difficult to create alone, much less in a mixed pair.
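    The tally of 15 is just combinatorics: with five usable flavors, the number of unordered flavor pairings (counting a meson and its antiparticle as one entry, and including same-flavor pairs such as charm with anticharm) is C(5,2) + 5 = 15, a count quick to verify:

```python
from itertools import combinations_with_replacement

# Five quark flavors light enough to form mesons (top decays too fast).
flavors = ["up", "down", "strange", "charm", "bottom"]
pairings = list(combinations_with_replacement(flavors, 2))
print(len(pairings))  # 15
print(("charm", "bottom") in pairings)  # True: the Bc's flavor content
```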

    This year, because of a recent detector and accelerator upgrade, the roughly 450 physicists at the Collider Detector at Fermilab (CDF) experiment finally had enough data to corner this rare beast. Helping wade through data from trillions of collisions between high-energy protons, two students—Prem Singh of the University of Pittsburgh and Jun-Ichi Suzuki of the University of Tsukuba in Japan—found what looked like the tracks of about 20 Bc's: a lighter meson and another particle from the Bc's decay. “They were pretty excited at the beginning,” says Fermilab physicist Jonathan Lewis, but after 2 years of refining their work, “they were pretty beleaguered.”

    The data allowed the researchers to measure some of the newborn's vital statistics, including its lifetime. At half a trillionth of a second, it may be shorter than some theories predicted. One theory, for example, suggested that the charm quark would be so tightly bound that the meson would be slow to decay. A precise measurement of the meson's lifetime and mass, Lewis says, will help physicists settle that question.

    However, the small number of observations has also left some room for doubt about the reality of the new meson. “I'm skeptical,” says Sheldon Stone of Syracuse University in New York, who recalls “discovering” what turned out to be a mirage in a small data sample. But CDF puts the odds against the finding's being a random fluctuation at a million to one. “That's the same chance as being hit by lightning,” Lewis says, “which is my personal definition of an acceptable risk.”
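    For readers who think in standard deviations, that million-to-one figure can be translated into an equivalent Gaussian significance. Assuming the quoted probability is one-sided (an assumption of this sketch, not something CDF specified here), one in a million corresponds to roughly a 4.75-sigma effect:

```python
from statistics import NormalDist

p = 1e-6  # CDF's quoted odds that the signal is a random fluctuation
sigma = NormalDist().inv_cdf(1 - p)  # one-sided Gaussian equivalent
print(f"{sigma:.2f} sigma")  # 4.75 sigma
```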


    Inbreeding's Kiss of Death

    Jocelyn Kaiser

    The hemophilia that plagued Europe's royal families in the 1800s is a clear example of how mating with first cousins and other close kin can cripple a gene pool by allowing recessive genes to emerge from hiding. Less accepted, however, is the notion that inbreeding can drive a small, isolated population of animals or plants to extinction. Many biologists have thought that natural events—widespread flooding from an El Niño, for example, or disease outbreaks—would swamp any genetic effects.

    But a report in this week's issue of Nature suggests that inbreeding may be a more potent force than previously reckoned. After studying fragmented populations of a single butterfly species, a team led by population biologists Ilik Saccheri and Ilkka Hanski of the University of Helsinki has found a strong correlation between a population's genetic diversity and whether it went extinct. This link held up after ecological factors that also influence extinction, such as weather and population size, were taken into account. “Our study demonstrates that inbreeding can contribute to extinction in a natural population,” says Saccheri. The finding, he adds, bolsters the idea that genetic diversity must be considered when drawing up plans to protect endangered species.

    The issue of whether a meager gene pool can lead to extinction in already fragmented populations has provoked “a hell of a lot of controversy,” says Richard Frankham of Macquarie University in Australia. Although some biologists have argued for the power of inbreeding, a persuasive argument of late, he says, has been that climatic events and random fluctuations in population size are far more important in the wild.

    But that's not the conclusion suggested by new data from Finland's Åland islands, home to a large Glanville fritillary butterfly “metapopulation”—many small, fragmented populations transiently connected when individuals fly between them. To see whether genetic diversity plays a role in extinction, the Finnish team in 1996 collected adult females from 42 populations and analyzed seven of their enzymes and one genomic DNA section for variants, or polymorphisms. After watching seven populations wink out in the last year, the team found that those populations had at least 28% less genetic variation than the survivors. Using a statistical model they had developed to predict extinction risk, which incorporates factors such as a population's size and isolation and habitat size, the researchers found that inbreeding accounted for as much as 26% of the differences from population to population in extinction rates.
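    In spirit, the analysis amounts to regressing extinction against genetic variation while controlling for ecological covariates. The sketch below uses entirely synthetic data and invented coefficients rather than the Finnish team's actual model: it fits a logistic regression by gradient descent, and, as in the real study, lower heterozygosity comes out associated with a higher chance of extinction.

```python
import math
import random

random.seed(0)

# Synthetic metapopulation: each patch gets a heterozygosity score and a
# log population size; the "true" extinction risk falls with both.
def make_patch():
    h = random.uniform(0.0, 1.0)     # genetic variation (invented scale)
    logn = random.uniform(1.0, 4.0)  # log population size
    risk = 1 / (1 + math.exp(-(2.0 - 3.0 * h - 0.8 * logn)))
    return (h, logn, 1 if random.random() < risk else 0)

patches = [make_patch() for _ in range(300)]

# Logistic regression (intercept, heterozygosity, log size), gradient descent.
w = [0.0, 0.0, 0.0]
for _ in range(1500):
    grad = [0.0, 0.0, 0.0]
    for h, logn, died in patches:
        x = (1.0, h, logn)
        pred = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        for i in range(3):
            grad[i] += (pred - died) * x[i]
    w = [wi - 0.05 * gi / len(patches) for wi, gi in zip(w, grad)]

print(f"heterozygosity coefficient: {w[1]:.2f}")  # negative: diversity protects
```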

    The study is “as close as you'll get to direct evidence” that inbreeding figures in extinction, Frankham says. The findings, he and others say, suggest that wildlife managers should focus scarce resources on those threatened populations with larger gene pools within a species. Says Frankham, “This is going to be absolutely critical as we deal with fragmented populations.”


    Transferred Gene Helps Plants Weather Cold Snaps

    Elizabeth Pennisi

    Last month, a cold front swept across the United States, leaving in its wake damaged peach trees, strawberries, blueberries, and other crops. The losses, combined with crop damage from local flooding, added up to $200 million in Georgia alone. But if work by plant molecular geneticist Michael Thomashow and his colleagues pans out, farmers may one day be able to rest easy when a sudden freeze sets in.

    On page 104, Thomashow's team, located at Michigan State University in East Lansing, reports that it has created a new, cold-hardy strain of the small plant Arabidopsis. The researchers did this by genetically engineering Arabidopsis—the plant scientist's version of the lab rat—to overproduce a protein that activates at least four other genes that help the plant withstand the damaging effects of freezing temperatures. Usually, those genes come on gradually when the plant is exposed to slowly declining temperatures. But in the new strain, they are active all the time, enabling it to survive sudden temperature drops to as low as −8 degrees Celsius—4° colder than the normal killing temperature for Arabidopsis. “The dream … has been to take a gene and put it into plants to make them hardy,” comments Tony Chen, a plant physiologist at Oregon State University in Corvallis. “This is the first time that has been successful.”

    Plant physiologists caution that a relatively simple genetic manipulation like the one used on Arabidopsis may not be enough to generate crop plants that can endure a sudden freeze, because not all plants may have the same repertoire of cold-tolerant genes. Still, they say that making crops such as corn or soybeans even a little hardier could make a big difference in helping them weather the sudden cold snaps, early or late in the growing season, that often cause the biggest crop losses. “Even though the [Thomashow team's] work was done in Arabidopsis,” says Charles Guy, a plant physiologist at the University of Florida in Gainesville, “it has profound implications for agriculture.”

    Researchers started coming across cold-tolerance genes almost 30 years ago. At first, however, they didn't know how the genes worked or whether they could protect plants against freezing as well as against cold. Indeed, 2 years ago, when Thomashow's group engineered Arabidopsis with an active form of one of that plant's cold-regulated (COR) genes, the researchers found no such protection for the whole plant—although isolated chloroplasts did fare a little better. One possible explanation is that effective protection requires many of the genes acting together; both Arabidopsis and wheat contain at least 25 cold-tolerance genes, for example. But if so, that would present an obstacle to genetic engineering of cold-tolerant plants, because achieving stable transfer of that many genes is not feasible.

    Plants coordinate the expression of their cold-regulated genes, however, turning them all on together as temperatures drop. And in 1994, three research teams, including Thomashow's, made a discovery that suggested it might be possible to take advantage of that synchrony. Genes are activated by proteins called transcription factors, which bind to regulatory sequences, and the teams found that the known cold-regulated genes carry the same regulatory sequences—an indication that they are all turned on by the same transcription factor.

    At that time, the factor had not been identified, but Thomashow, working with Michigan State plant geneticist Eric Stockinger and plant molecular biologist Sarah Gilmour, found the gene for the transcription factor in late 1995. Thomashow then realized, he recalls, that the discovery of the gene “opened the door to using this transcriptional activator to enhance the freeze tolerance of crops” by turning on the whole battery of COR genes at once.

    In the current work, his group tried to achieve that by attaching the transcription factor gene, called CBF1, to a regulatory sequence that ensures it will be active all the time. They then inserted the modified gene back into Arabidopsis plants. As the researchers expected, these transgenic plants began producing the COR proteins even under normal conditions. When they then froze and thawed leaves from the modified plants, they found “a dramatic enhancement in freezing tolerance,” says Thomashow.

    To see how well the COR proteins were protecting the cell membranes, the researchers measured the ability of the membranes in the thawed leaves to regulate the flow of ions in and out of the cell. They found that the leaves from the transgenic plants did as well as leaves from plants that had had the chance to acclimate to cold weather. In addition, the team reports, whole plants that had been frozen and then thawed survived, as did plants that were first allowed to acclimate to dropping temperatures. The work “is really a tour de force for the field,” notes geneticist Gary Warren of Imperial College, London. “[Before], we didn't know for sure that cold-regulated genes were even necessary for freezing tolerance, and now he's gone and shown they are sufficient.”

    Recent research is also clarifying how the genes protect against freezing. At least some code for proteins that seem to protect plant cell membranes against the disruption that results when freezing or other dehydrating conditions deplete the water molecules normally found in the space between the membrane and the tough cell wall. As molecular biologist Fathey Sarhan of the University of Quebec at Montreal reports in April's The Plant Cell, cold-related proteins in wheat build up in this gap and seem to help keep the membrane intact.

    What's more, cold-tolerance genes in important crops may be amenable to similar genetic manipulations. Sarhan says his group has data from wheat “that support the [Thomashow] result that most cold-regulated genes could be controlled by a master switch.” They haven't found the switch yet but think that it could reside in one or more genes located in a small region of the wheat chromosome 5.

    These demonstrations of the importance of cold-tolerance genes have spurred Thomashow's university, Michigan State, to file for a patent on the CBF1 gene and its use in generating hardier plants. His group is now seeking commercial support for pursuing these efforts in crops. It's not a sure thing, however. Warren points out that in crops that lack freezing tolerance, such as citrus, adding an activated CBF1 gene may not have the same effect it does in Arabidopsis, because they may have lost all or part of their repertoire of cold tolerance genes. But if the approach does work, perhaps farmers will no longer have to watch their strawberries be nipped in the bud by a late freeze.


    Magnetic Brain Imaging Traces a Stairway to Memory

    James Glanz

    Los Angeles—“And we forget because we must/And not because we will,” declared the poet Matthew Arnold. We are all familiar with the dimming—sometimes gradual, sometimes almost instantaneous—of visual memories. But poetic insight could not have revealed what Samuel Williamson of New York University (NYU) and his colleagues found when they used a novel brain-imaging technique to trace in real time how the human brain processes and stores a visual stimulus. The team discovered that visual memories—some vanishingly brief and some longer lasting—form in many different places along the processing pathway.

    By monitoring the subtle magnetic fields that sprout from the skull, Williamson and his colleagues Mikko A. Uusitalo and Mika T. Seppä of the Brain Research Unit at the Helsinki University of Technology in Finland were able to watch how neuronal firing marched along the entire visual pathway—the first time that's been done for the human brain. The result was a motion picture of the neural processing of a visual stimulus—and an unexpected discovery. “In every location where you have a response to a [visual] stimulus, it establishes a memory,” said Williamson when he presented the work at a meeting of the American Physical Society here 2 weeks ago. The data imply, says Williamson, that each site has a distinct “forgetting time,” ranging from tenths of a second in the primary visual cortex—the first stage of raw processing—to as long as 30 seconds farther downstream.
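    A "forgetting time" here is essentially a decay constant. One plausible way to extract it, assuming the response to a repeated stimulus recovers as 1 - exp(-Δt/τ) as the earlier trace fades (a modeling assumption of this sketch, not a description of the NYU team's exact procedure), is a least-squares fit of τ to recovery amplitudes:

```python
import math

# Synthetic recovery data generated with a "true" forgetting time of 5 s:
# relative amplitude of the response to a repeated stimulus vs. interval.
TRUE_TAU = 5.0
intervals = [0.5, 1, 2, 4, 8, 16, 30]  # seconds
amplitudes = [1 - math.exp(-dt / TRUE_TAU) for dt in intervals]

def sse(tau: float) -> float:
    """Sum of squared errors for a candidate forgetting time."""
    return sum((a - (1 - math.exp(-dt / tau))) ** 2
               for dt, a in zip(intervals, amplitudes))

# Grid search over candidate taus from 0.1 s to 40 s.
best_tau = min((0.1 * k for k in range(1, 401)), key=sse)
print(f"estimated forgetting time: {best_tau:.1f} s")  # 5.0 s
```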

    “I think it's a plausible hypothesis,” says Susan Courtney of the Laboratory of Brain and Cognition at the National Institute of Mental Health in Bethesda, Maryland. What's more, because the forgetting times Williamson has clocked become increasingly long, some researchers suggest that he may even have uncovered a stairway into long-term memory—a progression leading to the storage of more permanent representations. “It's the beginning of a very interesting study of the transition from signals into symbols in the brain,” says Jack Cowan, a mathematician and brain researcher at the University of Chicago.

    Earlier work, some of it by Courtney, had already implicated some of the regions with the longer persistence times in “working memory,” the short-term storage that holds information for immediate use (Science, 27 February, p. 1347). Courtney had mapped these memory areas with a technique called functional magnetic resonance imaging, or fMRI, which traces surges in blood flow to working parts of the brain. But although fMRI can make accurate spatial maps of widespread brain activity, its time resolution—generally a few seconds—means it can't capture fast changes in isolated patches, which is what researchers need to do if they want a more detailed picture of how the brain is working.

    The technique Williamson used—variously called magnetic source imaging (MSI) and magnetoencephalography (MEG)—handily picks up these rapid activity changes (Science, 27 June 1997, p. 1974). In MSI, the subject's head is surrounded by an array of hypersensitive magnetic detectors called SQUIDs, for superconducting quantum interference devices. SQUIDs consist of tiny loops of superconductor interrupted by an insulating gap. They can pick up the minute magnetic fields generated by neurons firing in the brain, because magnetic fields change the rate at which electrons “tunnel” across the gap, altering the current. “These SQUIDs are the most sensitive detectors for magnetic fields ever made,” says J. A. Scott Kelso of Florida Atlantic University in Boca Raton. In brain studies, he says, the SQUIDs pick up changing fields a billionth as strong as Earth's magnetic field.

    Working with three subjects, Williamson's team put this sensitivity to work in monitoring the cascade of firing set off when each subject glimpsed a checkerboard pattern for about a tenth of a second. In quick succession, over less than half a second, about a dozen patches lighted up like pinball bumpers, starting with the primary visual cortex in the occipital lobe at the back of the brain. The excitation darted up both sides of the brain, touching other cortical areas, such as the right prefrontal cortex and the left parieto-occipito-temporal junction. “The cortex is processing information, and that processed information goes back down into the thalamus”—a routing switchboard deep in the brain—“and comes popping back up to another place,” says Williamson, who is in the Department of Physics and the Center for Neural Science at NYU.

    The team members then did another round of tests in which they showed the checkerboard twice, with a varying time interval between the displays, to see whether the first stimulus had left any kind of impression along the way. For very brief intervals—tenths of a second—only the areas of initial processing in the back of the brain fired on the second flash, while the others were silent. Williamson interprets this response pattern to mean that the primary visual cortex had already shunted the information off and “forgotten” it, while areas farther downstream still “remembered.” But as the interval was increased to 10, 20, or even 30 seconds, the downstream areas began firing on the second flash, with a strength finally approaching that of the initial pop. The memories decayed with the simplicity of a capacitor discharging electricity—exponentially with time—and the later an area's place in the processing queue, the longer its memory time was.
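    The capacitor analogy can be put in simple quantitative terms: as the memory trace at a site decays exponentially with that site's forgetting time, the response to a repeated stimulus recovers toward full strength. The sketch below is an editor's illustration of this recovery curve; the function name and the forgetting times assigned to each area are hypothetical, not values from the study.

```python
import math

def second_flash_response(dt, tau, r0=1.0):
    """Response strength to a repeated stimulus after interval dt (seconds).

    While the first stimulus is still "remembered," the area stays quiet;
    as the memory trace decays exponentially with forgetting time tau,
    the response recovers toward its initial strength r0.
    Illustrative model only; tau values below are hypothetical.
    """
    return r0 * (1.0 - math.exp(-dt / tau))

# Hypothetical forgetting times along the visual pathway (seconds):
areas = {"primary visual cortex": 0.3, "downstream cortical area": 10.0}
for name, tau in areas.items():
    strengths = [round(second_flash_response(t, tau), 2) for t in (0.2, 5.0, 30.0)]
    print(name, strengths)
```

With a short tau, the early area responds at nearly full strength even after a fraction of a second; with a 10-second tau, the downstream area only approaches full strength at the 30-second interval—matching the qualitative pattern the team observed.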

    “That's an interesting clue as to the nature of the higher order processing” of visual information, says Cowan. “It's an important result,” adds Eric Halgren, a neurophysiologist at the University of Utah in Salt Lake City, who is especially impressed with the remarkably simple falloff of the memories within the complicated network of the brain. He nevertheless counsels “some caution,” because it can be difficult to trace an MSI signal to a specific site in the brain. Williamson says these ambiguities are most troublesome for widespread activity deep in the brain, not the shallow patches he monitored in the visual system.

    Kelso, whose own MEG studies of magnetic activity synchronized with motor tasks will be published soon, adds that the technique's spatial ambiguities are worth wrestling with for the sake of the timing information, which is critical for unraveling the workings of the brain. “If you tell me where the game is, that's useful information,” says Kelso. “But if you want to know the rules of the game, you need to look at the time-dependent dynamics. Williamson in this particular task is looking at those kinds of things, and I think that's what's very exciting here.”


    Planetary Scientists Sample Ice, Fire, and Dust in Houston

    1. Richard A. Kerr

    Houston—The annual Lunar and Planetary Science Conference held here last month drew a crowded field of almost 1000 presentations, thanks to the renewed flood of data from planetary probes. Talks ranged from the planetary scale to the microscopic: Researchers modeled the scorching surface of Venus, calculated the age of Europa's ice-covered ocean, and analyzed motes of comet dust swept to Earth.

    Venus's Wild Greenhouse

    With scorching surface temperatures of 500 degrees Celsius, Venus has always served as the classic warning of what can happen to a planet with a runaway greenhouse. But the climate on Earth's sister planet today may be mild by historical standards. Computer model results reported at the meeting suggest that greenhouse warming on Venus might have sometimes intensified so much that it reshaped the planet's surface, wrinkling its plains and perhaps even softening its rocks and erasing some of its geologic history.

    “We absolutely have to pay attention to the atmospheric issue” to understand Venus's surface, said planetary geologist James Head of Brown University after hearing the talks. If venusian climate works as some models suggest, he says, “a lot of the global changes we see in the geology could be explained by global climate changes.”

    The only way to explore this climate history is to build a computer model that can simulate changes in Venus's atmosphere and how they would have affected its surface temperature. When Mark Bullock and David Grinspoon of the University of Colorado, Boulder, did so, they found that venusian climate gyrated wildly after each of the great outpourings of lava that have punctuated its history. The most recent outburst hundreds of millions of years ago spread about 100 million cubic kilometers of lava and created the flat, low-lying plains that cover more than half the planet.

    Bullock and Grinspoon found that an eruption episode would have cooled the climate at first, as the water and sulfurous gases released from the lava thickened the clouds and obstructed sunlight. But this cooling would be short-lived, because sulfur is quickly removed from the atmosphere when it combines chemically with surface rock. Then, the heating effect of the water would take over. Water, a powerful greenhouse gas, could moisten the bone-dry atmosphere and strengthen the already intense greenhouse effect of carbon dioxide. In one scenario intended to mimic the most recent plains formation, surface temperatures rose by 60°C, until the water finally leaked out of the top of the atmosphere into space.

    Heat wrinkles.

    A runaway greenhouse effect may have started a heat wave so intense that it wrinkled this 400-kilometer swath of venusian plain.


    “I'm prepared to believe [that venusian] climate is unstable and subject to temperature variations of 100°C or more,” says planetary geophysicist Sean Solomon of the Carnegie Institution of Washington's Department of Terrestrial Magnetism, who has looked at how such climate extremes might have left their mark on the solid rock of Venus. In the meeting's next talk, he reported calculations showing that as a pulse of greenhouse warming seeps down into the uppermost crust, the rock expands, creating a hefty compressive stress of 500 bars that could buckle the surface the way summer heat buckles pavement. Once the warming penetrates further and causes the deep crust to expand, the effect at the surface might switch from buckling to a modest stretching.
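    The origin of a stress of that magnitude can be seen from the standard formula for a laterally confined rock layer that is heated but cannot expand sideways: the stress is the rock's stiffness times its thermal expansion per degree times the warming, divided by (1 − Poisson's ratio). The material values below are typical textbook figures for basaltic rock, chosen by the editor for illustration; they are not the specific inputs of Solomon's calculation.

```python
def thermal_stress_mpa(delta_t_k, youngs_gpa=60.0, alpha_per_k=8e-6, poisson=0.25):
    """Compressive stress (MPa) in a laterally confined rock layer
    heated by delta_t_k kelvin: sigma = E * alpha * dT / (1 - nu).
    Default material properties are typical for basalt (illustrative).
    """
    return youngs_gpa * 1e3 * alpha_per_k * delta_t_k / (1.0 - poisson)

# A ~100-degree greenhouse pulse seeping into the crust:
stress = thermal_stress_mpa(100.0)
print(f"{stress:.0f} MPa = {stress * 10:.0f} bars")  # same order as the 500 bars quoted
```

For a 100°C warming, these generic values give stress of order hundreds of bars—enough, as Solomon argues, to buckle the surface the way summer heat buckles pavement.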

    This stress roller coaster is just what's needed to explain the curious wrinkling of Venus's lava plains, says Solomon. In the Magellan spacecraft's radar images, these plains look like a wind-rippled sea. But geologists had trouble explaining how the roughly parallel ridges formed more or less simultaneously around the globe not long after the plains themselves appeared, as geological indicators suggested. The buckling of the crust would create uniform ridges, and a cycle of eruptions, warming, and tectonic stress would fit the timing, notes Solomon. “If climate works this way,” agrees Head, “it in fact could be explaining wrinkle ridges.”

    More speculatively, Solomon raises the possibility that earlier, larger surges of volcanic activity could have produced even fiercer heating—fierce enough to have erased some venusian geology. If a powerful eruptive episode occurred before the formation of the present plains, it might have heated the surface to roughly 750°C—enough to soften surface rock, he says. Such hot rock, although still brittle to a hammer blow, could ooze over millions of years into a featureless surface, obliterating features created by previous tectonic activity. Subsequent cooling could have set up the crustal stresses that created the distinctive extensional cracks or “ribbons” found on the oldest terrains, such as the much deformed highlands formed some 500 million years ago.

    Solomon's scenario would mean that climate alone is responsible for most of the cracks and ridges seen on the oldest parts of Venus's surface. But researchers aren't ready to write off traditional geologic processes. An episode of extreme heat may have helped smooth the surface where the ribbons formed, agrees geologist Vicki L. Hansen of Southern Methodist University in Dallas, but she attributes the ribbons and subsequent crustal folding to the hot plumes of rock she believes raised the ancient highlands. Venus has not erased all of her early history, Hansen says. “As a planet, she's just starting to talk to us.”

    Harvesting Comet Dust

    Planetary geochemists dream of getting their hands on the primordial material that went into making the planets, and comets have always seemed the most likely place to find unaltered remnants of this ancient material. So for 25 years researchers have been collecting interplanetary dust, which filters into Earth's upper atmosphere from space. Some of this dust presumably came from passing comets, although researchers have never been able to prove that specific particles are motes of comet dust. Now, a new analysis has linked interplanetary dust particles (IDPs) collected in the upper atmosphere in 1991 to a particular comet, Schwassmann-Wachmann 3.

    “It would be amazing if these particles were from Schwassmann-Wachmann 3,” says IDP pioneer Donald Brownlee of the University of Washington, Seattle, “and it looks like they are.” If the connection holds up, a comet-IDP connection would be proven, and earth-bound researchers would have a way to gather and compare dust from different comets by sampling the stratosphere at predictable times of the year.

    About 40,000 tons of IDPs fall into Earth's atmosphere every year, and in June and July 1991, NASA sent high-flying converted U-2 spy planes to gather some of this rain of dust. Many of the particles collected had the fragile, highly porous structure expected of comet dust. The late Alfred Nier and Dennis Schlutter of the University of Minnesota, Twin Cities campus, found that the particles also had an unusual composition: They were relatively low in helium, but the ratio of the helium-3 isotope to helium-4 was the highest ever found in any IDP.

    This odd helium signature prompted Scott Messenger of the National Institute of Standards and Technology and Robert Walker of Washington University in St. Louis to try to identify the dust's source. At the meeting, they reported that they saw several clues pointing to Schwassmann-Wachmann 3. The helium isotopic ratios remain a puzzle, but they reasoned that the low overall helium content meant that the particles had been drifting freely in space only a few years; otherwise, the solar wind would have saturated them with helium. For the dust to make such a quick trip from its source to Earth, it must have been shed from a comet and then swept up when Earth passed through the dust stream a short time later. The dust was also only minimally altered by the heat of atmospheric entry, suggesting that a comet moving relatively slowly with respect to Earth must have been the source.

    Using these criteria plus the dates of collection, Messenger and Walker narrowed the field to a single candidate. Only Schwassmann-Wachmann 3's dust stream supplies slow-moving dust to Earth in late May, they found, just in time for the particles to sink down to the stratosphere for collection in June and July.

    “It's very exciting,” says planetary geochemist Robert Pepin of the University of Minnesota, who adds that “it looks like pretty good circumstantial evidence.” Further chemical and isotopic analysis of the 1991 collection, as well as additional atmospheric collections timed to catch dust from specific comets, could firm up the comet-IDP link. A final check will come in 2006. That's when the Stardust spacecraft, to be launched next February, will return a dust sample from comet Wild 2, and the CONTOUR spacecraft will fly by and analyze dust from Schwassmann-Wachmann 3. By then, researchers may already have plenty of that comet's dust on hand for a close, leisurely study of the stuff the world is made of.

    Historical mote.

    Porous dust particles like this one rain down on Earth and may be ancient material from comets.


    Dating Europa's Ocean

    From the looks of the many images sent back by the Galileo spacecraft, giant ice floes slipping around on a subterranean ocean formed parts of the tortured surface of Europa. But did the cracking and crumpling happen recently enough that the ocean—and any life it might support—is still there? That question sparked a nagging internal debate among Galileo team members. Now an analysis of the clock used to gauge the age of planetary surfaces has settled the debate to most researchers' satisfaction, indicating that Europa's face—and ocean—are young indeed. The new work makes “a pretty strong case for there being water down there now,” says Galileo project scientist Torrence Johnson of the Jet Propulsion Laboratory in Pasadena, California.

    At issue is the origin of the impact craters that pock Europa's face. Planetary geologists use crater counts to date surfaces, because older terrains have been exposed longer to the steady rain of asteroid and comet impactors and so have more craters. Just how fast that rain has been falling in the jovian system has been debated among Galileo geologists calculating the ages of the jovian satellite surfaces. Clark Chapman and his colleagues at the Southwest Research Institute (SWRI) in Boulder, Colorado, argue that although asteroid impacts are rare on Europa, comets hit it often enough to have created most of its craters. “There's no way you can get away from relatively young ages for Europa,” says Chapman. That would make Europa “currently active,” he says, and its ocean still liquid.

    But Chapman's Galileo colleague Gerhard Neukum of the German space research agency DLR in Berlin favors asteroids as the dominant cratering agent on Europa. He notes that asteroids dominate cratering on Earth's moon, and that the size distribution of lunar craters resembles that on Europa's sister satellites Ganymede and Callisto; he therefore argues that the far less frequent impacts of asteroids created most of Europa's craters. Neukum concludes that Europa's surface is ancient, 1 billion to 3 billion years old. “I'm in the minority, but I don't care,” he says. “If Europa is very young, you would have a very young age for Ganymede. I don't buy it.”

    Stepping into this fray are the planetary physicists who calculate how impactors get from their sources—the asteroid belt between Mars and Jupiter and the comet clouds beyond Pluto—to their targets. At the meeting, Kevin Zahnle and Luke Dones of NASA's Ames Research Center in Mountain View, California, and Harold Levison of SWRI in Boulder presented their analysis, which is based on the latest computer simulations of how asteroids and comets from the Kuiper Belt just beyond Pluto would reach Europa. They also include the most complete measurements of comet size and abundance, based on the comets seen in the inner solar system.

    These researchers conclude that comets could have formed 90% of europan craters and asteroids almost none of them. This means that on average, the surface of Europa would be somewhere between 2 million and 50 million years old, says Zahnle, compared to several billion years for Callisto, which is saturated with craters. And the youngest parts of Europa would be a mere 500,000 years old—too young for an existing ocean to have frozen solid.
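    The logic of crater-count chronology is simple division: under a steady impactor flux, a surface's exposure age is its crater density divided by the rate at which new craters form, so a comet-dominated (higher) flux yields a much younger age for the same sparse crater count. The numbers below are hypothetical round figures chosen by the editor to show the arithmetic, not the fluxes Zahnle and colleagues derived.

```python
def surface_age_yr(crater_density_per_km2, flux_per_km2_per_yr):
    """Crater-count chronology: with a steady impactor flux, exposure
    age = crater density / cratering rate. Inputs here are hypothetical
    illustration values; real fluxes for the jovian system are debated.
    """
    return crater_density_per_km2 / flux_per_km2_per_yr

# The same sparsely cratered surface under two assumed fluxes:
comet_dominated = surface_age_yr(1e-6, 1e-13)    # high flux -> young surface
asteroid_only = surface_age_yr(1e-6, 1e-15)      # low flux -> old surface
print(f"{comet_dominated:.0e} vs {asteroid_only:.0e} years")
```

The two-orders-of-magnitude gap between the assumed fluxes translates directly into the gap between Chapman's millions-of-years ages and Neukum's billions.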

    The new analysis has changed some minds on the Galileo team. “I thought Gerhard [Neukum] had a plausible argument and should be heard, [but now] the assumptions he makes just aren't supportable,” says planetary geologist Alfred McEwen of the University of Arizona, Tucson. “I am now convinced Kevin Zahnle and Clark Chapman have it right.” The crater analysis makes a strong circumstantial case that an ocean is now sloshing beneath Europa's ice. But direct confirmation of liquid water will likely have to wait for another spacecraft, sometime in the next millennium.
