Probing the Biology of Emotion
- Christine Mlot
Neuroscientists using an array of new methods to explore the anatomy and chemistry of emotion are finding that intense emotions can leave a long-lasting physical imprint in the brain
After he cracked the origin of species, Charles Darwin turned to another of life's mysteries: the nature of emotions. In his 1872 book The Expression of the Emotions in Man and Animals, Darwin took an outside-in approach, scrutinizing sulky monkeys, snarling dogs, faces of the insane, even his own wailing infant. He found that different species have common ways of expressing certain emotions, reinforcing his belief in the shared ancestry of animals.
A century later, researchers have picked up on Darwin's cross-species approach, but they've turned the study of emotions inside out. In rats, monkeys, and people, neuroscientists are looking beyond the surface expression of emotions into the pockets of brain from which they arise. By recording the activity of single neurons and analyzing brain chemistry in rats and other animals, and by scanning brain activity in humans, they are beginning to map the neural circuits that send emotional messages. And although their field is in its infancy, they are poised to integrate these studies into an understanding of the biological basis of emotion. “Emotion is now tractable at a mechanistic level,” says neuroscientist Richard J. Davidson of the University of Wisconsin, Madison. “It's been a huge advance, just enormous.”
Increasingly, researchers are finding that intense emotions, particularly at key times in early life, can trigger not only behavioral changes but long-lasting physical changes in the brain. These persist long after the emotions themselves have passed and shape emotional responses later in life. This inside-out approach is also lending new insight into another favored Victorian notion—that of emotional temperaments. Individuals who are fearful or resilient not only have characteristic behaviors, they have distinct patterns of brain activity, too.
The new findings have sparked excitement—and more research. In the past month, scientists have gathered at three major meetings on the science of emotions.* And many young clinicians are eager to enter the field, hoping for better ways to diagnose and treat emotional disorders, which afflict an estimated 15 million adults in the United States. Meanwhile, spending by the National Institute of Mental Health (NIMH) on the neurobiological basis of emotion has been relatively modest, but it's growing, reaching $6.3 million in fiscal year 1997. “We think the funding future is very bright,” says psychiatrist Ned H. Kalin, who directs the 2-year-old Health Emotions Research Institute at the University of Wisconsin, Madison. Scientifically, “emotion is way behind in many respects,” admits Antonio R. Damasio of the University of Iowa College of Medicine in Iowa City. “But it's catching up.”
The anatomy of emotion
Emotions were long the province of behavioral scientists, while neuroscientists typically focused on cognitive or sensory functions such as vision. Emotions were considered “too vague and difficult to quantify,” says Damasio. “It's not as bad as being a sex researcher,” adds Stanford University psychiatrist David Spiegel, who pioneered studies of stress and cancer survival, “but it comes close.”
When researchers did begin to probe the neurobiology of feelings, they first focused on anatomy. They have traced emotional messages to such areas as the prefrontal cortex, just behind the forehead, and the ventral striatum, deep in the brain. But one of the most important emotional sites, as shown over the last 15 years by New York University neuroscientist Joseph LeDoux and others, is the amygdala—an almond-shaped structure deep within the brain's temporal lobe that is a key station in the processing of fear. Over the last several years, work in rats has identified finer and finer areas within the amygdala that are part of the neural fear circuits, says LeDoux. Researchers have shown, for example, that a cluster of neurons called the lateral nucleus brings the fear message in from the senses, and another cluster, the central nucleus, sends it out to other brain structures. Researchers are now pinpointing the neural fear connections at even finer scales.
Meanwhile, less detailed imaging studies show that when humans feel fear, the amygdala becomes active. “It's the hot area,” says Yale University psychologist Elizabeth Phelps. She, LeDoux, and others report in a study in press in Neuron that when human subjects who have been conditioned to associate a visual cue with a shock see the telltale cue, blood flow to the amygdala increases. In another new study from Damasio's lab, patients with damaged amygdalas rated faces with negative expressions to be as trustworthy as faces with positive expressions. Without the amygdala to issue a warning, these patients apparently don't feel the usual wariness sparked by a stranger, Damasio said at the NIMH meeting.
Now that this wave of research has marked off some of the brain territory crucial to emotions, other researchers are studying the biochemical events that take place there. They are detailing how intense or even mundane emotional experiences leave their marks on the chemistry of the developing mammalian brain. For example, Trevor W. Robbins's group at the University of Cambridge has been comparing adult rats subjected to two different kinds of early life stress. One set of rats is stressed by being raised in solitary cages after weaning. The other group is stressed early by repeated separations from their mothers but then grows up among other rats.
The two different stressors produced almost opposite behavioral syndromes, Robbins said at the Madison meeting. Isolation-reared rats appear frenzied, becoming overexcited in response to food cues and a new environment, and they are unusually sensitive to amphetamines—similar in some ways to the attention dysfunction seen in schizophrenia and other diseases. In contrast, maternally deprived rats seem to be less responsive to their environment and exhibit dull reactions, similar to human mood disorders, he says.
These behavioral syndromes correspond to different changes in the rats' brain chemistry. By inserting microprobes into rat brains and sampling neurotransmitters, Robbins's group found that the isolation-reared rats had higher levels of dopamine—a neurotransmitter thought to go awry in schizophrenia—in certain areas of the brain known to be involved in addiction and motor control. The maternally deprived rats, in contrast, were found in post-mortem tissue analyses to have reduced levels of the mood-mediating neurotransmitter serotonin—malregulated in depressed humans—in parts of the brain that process emotions and memory, Robbins reported at the Madison meeting.
In contrast, researchers have shown that intense mothering—presumably an emotionally positive experience for the infant rat—also has a powerful effect on brain development. Graduate students have long noticed that when rat pups are handled often by people, they grow up to be relatively less anxious and more resilient. The key, Michael Meaney of McGill University in Montreal, Paul Plotsky of Emory University in Atlanta, and their colleagues found last year, is the intense attention—licking, grooming, and nursing—that the rat mothers lavish on the pups each time they are returned to the nest (Science, 12 September 1997, pp. 1620 and 1659). The team found that a certain subset of “good mothers” gave even nonhandled pups this extra attention, and these pups showed similar beneficial effects.
And in the 28 April Proceedings of the National Academy of Sciences, researchers report neurochemical changes that correspond to these behavioral differences. Rats with especially attentive mothers have more receptors for neurotransmitters that inhibit the activity of the amygdala and fewer for corticotropin-releasing hormone, a stress hormone. Those changes in receptor numbers could explain why the adult animals display more equanimity in novel environments, says Plotsky.
Even subtle factors in a young animal's environment can color the emotional life of the adult. For example, in a study of 28 lab rhesus monkeys, Kalin and his colleagues identified a birth-order effect. Later-born monkeys had lower levels of cortisol—a neurally controlled stress hormone—than first- or early-borns, the researchers report in the February Behavioral Neuroscience. Noting that experienced mothers have different mothering styles, Kalin suspects that the emotional state of the mother, who might be calmer with later-borns, somehow affects the offspring's hormone levels.
Although no one is ready to make a direct leap from rats or monkeys to humans, the point is clear: Emotional events in young mammals can have major, long-lasting effects on the neurochemistry of the developing brain—and therefore on mood and behavior. “We're talking about the effects of very early experience on the adult brain, when most of the very early [hormonal] stressor effects have waned,” says Robbins. “They are lasting effects.” Adds Plotsky, “you can do something fairly mundane in the first days of the animal's life, and somehow this changes how that animal responds to its environment for the rest of its life.”
But it's not only environmental effects—both extreme and subtle—that color emotional responses. Studies in both animals and humans support the idea that individuals carry certain dispositions throughout their lives. For example, Kalin has found that some infant monkeys are abnormally fearful, exhibiting a startled “freeze” behavior with very little provocation and having high baseline cortisol levels. And in humans, decades of study by Harvard University psychologist Jerome Kagan and his colleagues are revealing what look like innate, lifelong temperaments. Kagan's group examined 450 baby boys and girls, first at 16 weeks, then again at 14 months, 21 months, 4 years, and 7 years, by testing their response to cues they could see, hear, and smell, such as a cotton swab dipped in alcohol.
They found that 20% of the 16-week-old infants fell into a test category Kagan calls “high reactive”: The tests made them fretful and agitated. Another 35% responded with little distress and low motor activity. Over time, some of the high reactives began to respond normally, while others began to show extreme shyness. None became a bold, fearless child, says Kagan. By age 7, about one-third of the high reactives had developed extreme fears compared with 10% of the others, Kagan said at the NIMH meeting.
Brain imaging complements these behavioral studies by showing a consistent package of brain activation that dovetails with temperamental differences. In Kalin's study, the abnormally fearful rhesus monkeys also had relatively more right frontal brain activity, as recorded by electroencephalograms.
Davidson finds a similar asymmetry in people. People who are negative or depressed according to standardized psychological tests tend to show more baseline prefrontal activity on the right, he says. And the happy-go-lucky folks who are more likely to bounce back when life throws a curve ball tend to show more activity in the left prefrontal cortex.
He speculates that the prefrontal cortex modulates the emotional activity of the amygdala. People with more left prefrontal cortex activity can shut off the response to negative stimuli more quickly, he says. “Being able to shut off negative emotion once it's turned on is a skill that goes with left activation.” He adds that it's not yet known whether such temperaments are inborn or a product of very early life experiences.
Indeed, Davidson and others caution that they have far to go in explaining the full biological basis of our passions. LeDoux calls the state of the science of emotions “infantile,” as the only emotion for which the neural hardware and software are well understood is fear, and even that has mostly been parsed in the rat. “Things have not entirely coalesced into a coherent picture,” agrees NIMH director Steve Hyman. He hopes to help it develop by pushing neuroimagers to test hypotheses about the neural circuits, and by “goosing cognitive neuroscience to start considering emotion.” Darwin would no doubt approve and sympathize. Understanding the origin of emotional expressions remains a great difficulty, he wrote, and “it deserves still further attention.”
* Wisconsin Symposium on Emotion, Madison, 17–18 April; Society for Neuroscience 1998 Brain Awareness Symposium, Chicago, 24 April; National Institute of Mental Health and Library of Congress, “Discovering Our Selves: The Science of Emotion,” Washington, D.C., 5–6 May.
Unmasking the Emotional Unconscious
- Christine Mlot
What—or where—is the unconscious mind? That question has long been the province of psychotherapists, but now neuroscientists are exploring the nature of awareness (Science, 3 April, pp. 59 and 77), and emotion researchers are joining in. A handful of clever—if controversial—imaging studies offer what may be a glimpse of the elusive unconscious mind at work by revealing different patterns of brain activity when people react to conscious and unconscious emotional stimuli.
Most such emotion studies are based on a method perfected by neuropsychologist Arne Öhman of the Karolinska Institute in Stockholm. Researchers flash a fearful or angry face before subjects for several milliseconds, then flash a neutral face for a longer period. They also measure subjects' skin conductance—a reflection of sweat gland activity and a sign of nervousness. The neutral face apparently masks subjects' awareness of the negative face, as they report seeing only the second image.
But the split-second glimpse of the negative face doesn't go completely unnoticed, as a team led by neuroscientists Paul Whalen and Scott L. Rauch of Harvard Medical School and Massachusetts General Hospital in Boston reported in January in the Journal of Neuroscience. They used Öhman's “masking” method while scanning subjects' brains with functional magnetic resonance imaging, which reveals areas of high oxygen uptake. When the fearful face was flashed, the brain showed activity almost exclusively in the amygdala—a structure known to store emotional memory, especially fear. When a positive, happy image was flashed, the signal from the amygdala was reduced, showing that it was less involved.
A new study by Öhman and neuroscientists John Morris and Ray Dolan of University College London adds another twist. Researchers first conditioned subjects by repeatedly showing them the angry face followed by an obnoxious noise, training them, in classic Pavlovian fashion, to have a stronger reaction to the face. Then they did the masking experiment, using positron emission tomography (PET) scans.
The difference in brain activity when the subjects were aware and unaware of the face, Dolan told the 4th annual Wisconsin Symposium on Emotion last month in Madison, was “dramatic.” When subjects did not report seeing the angry faces, they still registered an increase in skin conductance—and the right amygdala lit up in the PET scan. When subjects were aware of the threatening cues, the left amygdala showed activity, suggesting that the left side is involved with conscious response and the right with the unconscious mind.
“What these data are suggesting is that conscious awareness of a target stimulus … can modulate the associated neural response,” Dolan said in his talk. Noting that language is mostly a left-hemisphere function, he speculated that it may help define consciousness.
But others, such as neuroscientist Richard Davidson of the University of Wisconsin, Madison, aren't quite convinced. “I would regard [the finding] as preliminary,” he says. “We don't [know] at this point what the neural substrates are of emotional consciousness.” Still, he and others are intrigued. What the masking experiments show, says Whalen, is that “some part of your body knows that something's out there even when you don't. That's interesting.”
Biggest Extinction Looks Catastrophic
- Richard A. Kerr
The most profound ecological disaster in the history of the planet struck at the end of the Permian period 250 million years ago, snuffing out about 85% of the species living in the ocean and 70% of the vertebrate genera on land. But devastating as this event was, until recently most paleontologists believed that the dying was long and slow, lasting 8 million years or more. And most looked to such causes as gradual sea level fall and climate change. Then last year new dates from Chinese rocks shrank the final pulse of marine extinctions to less than 1 million years (Science, 7 November 1997, p. 1017). Now more dating of the same rocks squeezes the disaster even further—and suggests a catastrophic cause, perhaps even a comet or asteroid impact.
The new results, reported on page 1039, show that a shift in the ratio of carbon isotopes recorded in marine rocks—an event intimately tied to the extinctions—lasted perhaps as little as 10,000 years. “It's the final nail in the coffin of those who say the extinction was prolonged,” says paleontologist Paul Wignall of the University of Leeds in the United Kingdom.
The telltale rocks, near the village of Meishan in southern China, are beds of ancient marine sediments that record the disappearance of marine animals and, at the same time, a huge spike in the ratio of carbon-13 to carbon-12. The isotopes, taken from the rock itself, offer a more continuous record than fossils, which are subject to the vagaries of preservation. So geochronologist Samuel Bowring of the Massachusetts Institute of Technology, paleontologist Douglas Erwin of the National Museum of Natural History in Washington, D.C., paleontologist Jin Yugan of the Nanjing Institute of Geology and Paleontology in China, and their colleagues applied a well-established dating technique—based on the clocklike radioactive decay of uranium to lead—to volcanic ash layers scattered through the rock beds. The team's dates showed that the isotopic drop and a partial recovery took 165,000 years at most, and possibly as few as 10,000 years.
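The uranium-to-lead clock the team applied rests on simple exponential decay: comparing how much radiogenic lead has accumulated against how much uranium remains yields an age. A minimal sketch of that arithmetic, using the standard uranium-238 half-life (the isotope ratio below is illustrative, not the team's actual measurement):

```python
import math

# Half-life of uranium-238 in years (standard accepted value).
HALF_LIFE_U238 = 4.468e9
DECAY_CONST = math.log(2) / HALF_LIFE_U238  # decay constant, per year

def age_from_ratio(pb_per_u):
    """Age in years from the ratio of radiogenic lead-206 to remaining
    uranium-238.  Decay gives D/P = e^(lambda*t) - 1, so
    t = ln(1 + D/P) / lambda."""
    return math.log1p(pb_per_u) / DECAY_CONST

# An ash layer roughly 250 million years old carries about 4% as much
# radiogenic lead-206 as remaining uranium-238 (illustrative round trip).
ratio = math.expm1(DECAY_CONST * 250e6)
print(f"Pb-206/U-238 ratio: {ratio:.4f}")
print(f"Recovered age: {age_from_ratio(ratio) / 1e6:.0f} Myr")
```

The precision the article describes comes from measuring such ratios in zircon crystals from each ash bed, so that layers bracketing the extinction can be dated to within tens of thousands of years.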
Such a dramatic, rapid shift in oceanic carbon isotopes requires an equally dramatic explanation—perhaps that the microscopic plants that maintain a normal carbon isotopic ratio in the ocean were suddenly nearly wiped out, say Bowring and colleagues. That makes falling sea level, for example, an unlikely driving force, notes Erwin.
The researchers speculate that the ultimate culprit was volcanism—the massive eruption of the Siberian Traps, which began at or within a few hundred thousand years of the boundary and ended in less than a million years. The global haze of sulfur particles from the eruption—the largest ever on land—may have caused a sudden chill by reflecting sunlight, or massive carbon dioxide emissions might have led to prolonged greenhouse warming, the team speculates. Or perhaps these direct volcanic effects induced indirect effects, such as a sudden overturning of the ocean, that both killed off species and triggered the isotopic spike.
It's even possible that a huge impact was the culprit, say Bowring and Erwin. If the impactor was a comet, its considerable load of organic material, which would have contained isotopically light carbon, might have directly produced the spike. For now, the ultimate cause remains a mystery. But “whatever happened,” says Erwin, “it happened very quickly.”
Exploding Stars Flash New Bulletins From Distant Universe
- James Glanz
In just the past few months, astronomers have glimpsed an extraordinary new picture of the universe in the glare of the cosmic flashbulbs called supernovae. Everyone from theoretical physicists to philosophers of science is struggling with the startling implication that emerged after observers laboriously discovered and studied scores of these distant, exploding stars: A mysterious repulsive force has been at work over billions of years, counteracting gravity and speeding up the cosmic expansion rate (Science, 27 February, p. 1298; 30 January, p. 651). Now the light of these same supernovae is adding some intriguing new details to this picture.
After further analyzing their observations of how fast these beacons are rushing away from Earth, the two teams that made the original discovery are now ready to report a cascade of new findings about how the universe behaves both in our own cosmic neighborhood and over the largest scales. They have found evidence that we might live in a “Hubble bubble”—a region that is expanding slightly faster than the universe as a whole. They have also picked up clues to just what kind of energy might be filling space and causing the acceleration and have offered a preliminary assessment of the universe's total density of energy and matter. “We have a tool that can be used to approach cosmology from another angle,” says Saul Perlmutter of Lawrence Berkeley National Laboratory and the University of California, Berkeley, the leader of one of the teams. Perlmutter's team has squeezed yet another finding from the supernova data: large-scale confirmation that time itself runs slower when objects—in this case, the supernovae—are traveling at a large fraction of the speed of light because of the expansion of the universe.
Both the Perlmutter group and the other team, the High-z Supernova Search Team led by Brian Schmidt of the Mount Stromlo and Siding Spring Observatories in Australia, stress that the data are far from conclusive for most of these claims. But the findings testify to the power of so-called type Ia supernovae as cosmic probes. These exploding white dwarf stars all blow up with nearly the same brightness, acting as “standard candles,” whose apparent brightness as seen from Earth can be translated into distances. The supernovae can be seen across most of the visible universe, at distances corresponding to earlier times in cosmic history.
By plotting the distances of the supernovae against the speed at which expansion is carrying them away from Earth—easily found from the redshift, or stretching, of their light—astronomers can see how cosmic expansion has changed over time. For nearby supernovae, that plot is nearly linear, implying no change in the expansion rate, or Hubble constant. Farther away the line subtly bends in a direction that shows the expansion has accelerated since the light was emitted.
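At low redshift, that nearly linear plot is just Hubble's law. A rough sketch of the bookkeeping behind each point on the diagram (the Hubble constant here is a representative modern value, not a figure from the article; its exact value was itself contested at the time):

```python
C_KM_S = 299_792.458  # speed of light, km/s

def recession_velocity(z):
    """For small redshift z, the recession velocity is approximately c * z."""
    return C_KM_S * z

def distance_mpc(z, hubble_const=70.0):
    """Hubble's law: distance = v / H0, with H0 in km/s per megaparsec.
    H0 = 70 is an assumed illustrative value."""
    return recession_velocity(z) / hubble_const

# A nearby supernova at z = 0.01 recedes at roughly 3,000 km/s,
# placing it about 43 megaparsecs away.
print(f"v = {recession_velocity(0.01):.0f} km/s, "
      f"d = {distance_mpc(0.01):.1f} Mpc")
```

The cosmological signal lies in the departures from this simple line: at large z, supernovae appear slightly fainter (farther) than the linear relation predicts, which is the evidence for acceleration.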
Last year, the Perlmutter team concluded from such “Hubble diagrams” that the universe is expanding roughly uniformly around us on scales of billions of light-years. But Idit Zehavi and Avishai Dekel of The Hebrew University in Israel and Adam Riess of Berkeley and the High-z team noticed a slight shift in the diagrams at a few hundred million light-years or so from Earth. They say the shift may indicate that our region is expanding about 6% faster than the universe at large.
The location of the shift caught their eye: It corresponds to the distance of several large agglomerations of galaxies, including the so-called Great Wall, discovered in the 1980s by Margaret Geller and John Huchra of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Massachusetts. Zehavi, Riess, and Dekel, along with CfA's Robert Kirshner, think the gravitational pull of the mass concentrated at the borders of our cosmic neighborhood might help speed up cosmic expansion locally by tugging galaxies outward toward the Great Wall, resulting in a more tenuous region in which our own galaxy sits.
The group's paper, to be published in the Astrophysical Journal, should be regarded as “a preliminary discovery paper,” says Dekel. “It may still be a statistical fluke.” But if the 6% bubble doesn't burst, says Edwin Turner of Princeton University, then techniques that measure the Hubble constant using nearby objects, such as variable stars, “might be giving the right value but not the true, global value.” That might help explain what some astronomers regard as a persistent, nagging discrepancy between those techniques and ones that measure the constant from distant beacons like supernovae. Wendy Freedman of the Carnegie Observatories in Pasadena, California, who specializes in measuring the Hubble constant, isn't yet persuaded that the Hubble bubble can explain the differences but is still intrigued. “There are very few methods that have the kind of promise this does for attacking the problem,” she says.
Peter Garnavich of CfA and his colleagues on the High-z team are looking much farther out along a Hubble diagram based on 16 supernovae recently analyzed by Riess and others. The exact shape of the curve in the diagram should reflect just what kind of energy is at work on large scales, giving a boost to the expansion. Garnavich and his colleagues are comparing the data to the curves expected from the cosmological constant—an effect first postulated by Einstein—and from other forms of background energy, which theorists have named quintessence or X-matter. Although no one knows just what physical processes might produce these forms of energy, they would behave differently. The cosmological constant would deliver an unchanging push, while quintessence and X-matter could have varied over time, and energy from quintessence could actually flow and bunch up, affecting different parts of the universe differently.
So far, says Garnavich, the unrelenting push of the cosmological constant fits the data best. But the handful of distant supernovae observed so far “certainly doesn't do much in restricting what exactly the [form of the] quintessence is.” The most that can be said, he explains, is that one form of quintessence seems to be ruled out: defects in the fabric of space, called light nonabelian strings, that might have been left over from the big bang. The Perlmutter group is now analyzing 40 supernovae, which could give a clearer picture of the mysterious energy.
But whatever form the acceleration energy takes, there appears to be just enough of it to combine with matter and give the critical density of mass and energy that is predicted by leading theories of the big bang. To gauge the total, Garnavich, with CfA's Saurabh Jha and others, added the supernovae data to observations of the cosmic microwave background radiation, often referred to as the big bang's afterglow. Slight ripples in the background reflect conditions in the early universe and yield clues to basic cosmic parameters. The result is just the right density to make the universe geometrically “flat”—the kind of universe predicted by the simplest versions of inflation, the theory of how a sort of spark in the primordial nothingness could have set off the big bang.
Everything that researchers have concluded so far from these distant beacons rests on a crucial assumption: that the redshifts actually are caused by universal expansion. Most cosmologists don't question this assumption, but a few mavericks have proposed alternative explanations for the reddening of distant objects—for example, a sapping of the photons' energies as they traverse great distances.
Type Ia's offer a way to distinguish among these possibilities, because the physics of the explosions force them to brighten and dim on a predictable schedule. That “light curve” should appear to be stretched out for supernovae rushing away from Earth, because the light carrying news of later and later events would have to travel longer and longer distances.
By examining the light curves of about 40 supernovae, Berkeley's Gerson Goldhaber and others in the Perlmutter group found spectacular confirmation that they really are speeding away from Earth: Events that actually take a month on Earth were stretched to almost 7 weeks for the most distant of the supernovae. Although no one was surprised by the result, says Goldhaber, it's one more example of the light a standard candle can shed on the cosmos.
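The stretch factor in that comparison is simply 1 + z. A back-of-the-envelope check (the redshift below is chosen to reproduce the month-to-seven-weeks figure, not taken from the team's tables):

```python
def observed_duration(rest_days, z):
    """Cosmological time dilation stretches an event's apparent duration
    by a factor of (1 + z), where z is the redshift."""
    return rest_days * (1.0 + z)

# A light curve lasting one month (30 days) at a redshift of roughly 0.6,
# typical of the most distant supernovae in such samples, appears
# stretched to about seven weeks as seen from Earth.
days = observed_duration(30, 0.63)
print(f"{days:.0f} days, about {days / 7:.1f} weeks")
```

A tired-light model, in which photons merely lose energy en route, predicts no such stretching, which is why the light-curve test discriminates between the explanations.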
- ATOMIC PHYSICS
On the Trail of Supercharged Hydrogen
Hunters of exotic atoms usually make their forays to the farthest reaches of the periodic table, where they hope to bag big game by creating heavier and heavier elements. But a trophy may await physicists at the other end of the table, near ordinary hydrogen. New calculations suggest that with a laser's light touch, physicists may be able to create a hydrogen atom carrying two or more extra negative charges. If the feat can be done, it would open up new research avenues for using light to manipulate atoms.
As any chemistry student could tell you, hydrogen, the simplest atom, consists of an electron orbiting a one-proton nucleus. Researchers can vary hydrogen's charge by stripping the lone electron to leave a positively charged nucleus or by adding an electron to give the atom a negative charge. A hydrogen atom with two electrons repels additional electrons; that's why one would not expect to stumble across a hydrogen ion sporting any more than two. “Such ions don't exist in nature,” says Harm Geert Muller of the Institute of Atomic and Molecular Physics in Amsterdam. Indeed, in a recent experiment, Lars Andersen at Aarhus University in Denmark shot free electrons at hydrogen ions with two electrons (H−) but was unable to forge a beast bearing three. “We simply couldn't create such an ion,” says Andersen.
Muller and colleague Ernst van Duijn, however, may have found a new way to foist two or even three extra electrons onto a hydrogen atom, to create H²⁻ or H³⁻. The trick is to use intense laser beams, which contain powerful electric fields, to steer the extra electrons into wide orbits, essentially spreading out their charge. The electrons then “are able to take turns in occupying positions near the nucleus,” says Muller.
It took some fancy computational footwork to arrive at that conclusion. In calculations compiled in Van Duijn's Ph.D. thesis, published by the institute last month, the Dutch duo developed a new way to calculate the diffuse shape of a hypothetical multielectron hydrogen ion. “We developed a computation method that specifically could deal with the shapes such an ion would take,” says Muller. Their formulas showed that polarized laser light, whose photons vibrate in a preferred plane, could push the electrons into wider orbits. Such orbits would minimize the repulsive forces between electrons, allowing more than two to orbit the same proton. Some experts, however, are skeptical that this electron swarm would stick around a proton long enough for researchers to detect exotic hydrogen as an integral ion. “It could turn out to be an unrealizable phenomenon,” says Chris Greene of the University of Colorado, Boulder.
Muller and Van Duijn are plotting a strategy to prove their calculations right. The required lasers are available, says Van Duijn, but the challenge “is to get electrons and protons together in a laser beam.” Lasers powerful enough to forge the ion deliver only ultrashort pulses, lasting about 10⁻¹² seconds. The brief illumination rules out the possibility of shooting free electrons at negative hydrogen ions. “Because of the short laser pulses, you have a very low probability for collisions,” says Muller.
A way to skirt this problem may be to start out with a larger molecule, such as methane, and use a laser like a sniper to remove its electrons. This would trigger a “Coulomb explosion” in which repulsive forces rip apart the stripped-down, positively charged methane. “The trick will be to choose a laser of such an intensity that it allows three electrons to end up around one of the ejected protons,” says Muller. One might then look for H²⁻ or H³⁻ with a photoelectron spectroscope, which could shoot photons into the ions and measure specific energies of electrons ejected by multielectron hydrogen. “This experiment is definitely on our Christmas list,” says Muller. If they succeed, he adds, exotic hydrogen ions may be useful for, among other things, generating soft x-rays for probing molecular structure.
- MOLECULAR IMAGING
New Probes Open Windows on Gene Expression, and More
- Robert F. Service
A few days before Christmas 1895, Wilhelm Röntgen snapped a couple of spooky-looking pictures that changed the world of medicine. The images, taken with newly discovered x-rays, revealed the bones of his wife's hands as a set of shadowlike features. Medical imaging has been on fast forward ever since. In recent decades, researchers have added numerous imaging tools, such as positron emission tomography (PET) and magnetic resonance imaging (MRI), that offer more detailed insights into the inner workings of the body. Yet, for all the prowess of these techniques, they show fundamentally the same kind of thing as Röntgen's x-rays: general anatomical features, such as organs, tissue masses, and metabolically active tissues in the brain. Useful as they are, such images can't answer crucial questions, such as whether a tumor is malignant or benign.
Now researchers around the world are furiously competing to launch a new age in medical imaging that looks beyond general anatomy into the molecular workings of tissues. By developing clever probes that give off a detectable signal when they encounter a specific molecule, such as the product of a particular gene, researchers hope to pin down a tissue's exact metabolic state. Investigators have already used this strategy to track the transfer of genes in gene-therapy experiments and map the distribution of an animal's own proteins. Down the road, they hope to build on these successes to perform a variety of feats, from imaging the effectiveness of cancer therapy to mapping when different genes get turned on during development—all without removing tissue with a scalpel and testing it in the lab.
“This really marks a new paradigm shift that's taking imaging to the next level,” says Michael Phelps, a PET imaging expert at the University of California, Los Angeles (UCLA), who is developing novel molecular probes to track gene therapy and cancer treatment. “We are at the edge of a revolution,” adds Elias Zerhouni, a radiologist and biomedical engineer at Johns Hopkins University in Baltimore.
This revolution is being fomented by advances such as the identification of genes involved in cancer and other diseases and the determination of the exact shapes of the proteins they encode. That knowledge, in turn, is allowing chemists to tailor specific new molecular imaging probes that can put one gene or protein in the spotlight while all else remains dark. To turn on the spotlight, this revolution relies on tried and true imaging techniques such as PET, which tracks gamma rays from tiny amounts of radioactive elements injected into the body.
In the standard PET technique, researchers add a radioactive tag to glucose, which brain cells take up when they are metabolically active. A PET scan can then trace which parts of the brain are busy during specific tasks, such as reading or listening to music. In a newer PET variant, neuroscientists add the radiolabels to organic compounds that bind selectively to particular types of receptors that decorate the outside of nerve cells in the brain. Using this approach, researchers have mapped the distribution of nerve cells that use dopamine and serotonin.
Now they hope to reproduce this level of selectivity throughout the body. In 1995, for example, Ronald Blasberg, Juri Tjuvajev, and their colleagues at Memorial Sloan-Kettering Cancer Center in New York City used a laboratory relative of PET—a radioactive scanning technique known as autoradiography—to track the success of a gene-transfer experiment. The Sloan-Kettering team used conventional gene-therapy techniques to introduce into mouse tumor cells a gene from the herpes simplex virus (HSV) that codes for an enzyme called thymidine kinase (TK). HSV-TK adds phosphates to thymidine—a structural component of DNA—and closely related molecules. The researchers then injected the animals with a thymidine analog called FIAU, tagged with radioactive iodine. FIAU readily enters and exits cells. But when it encounters HSV-TK, the enzyme tacks on phosphates and the molecule gets trapped in the cell, where it accumulates and produces a strong enough gamma ray signal to be detected. When Blasberg's team killed the animals and scanned them with an autoradiography machine, areas that expressed the HSV-TK gene lit up.
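The logic of the trapping scheme can be captured in a minimal two-compartment sketch: the tracer passes freely in and out of cells, but wherever HSV-TK is present, phosphorylation converts it to a form that cannot leave, so signal accumulates. All rate constants below are illustrative placeholders, not measured values from the Sloan-Kettering work.

```python
# Toy two-compartment model of FIAU trapping by HSV-TK.
# Rate constants are made up for illustration only.

def simulate_trapping(k_phos, steps=10000, dt=0.01):
    """Euler integration of free and trapped intracellular tracer."""
    plasma = 1.0            # constant extracellular tracer level (arbitrary units)
    k_in, k_out = 0.5, 0.5  # passive entry/exit rates across the membrane
    free, trapped = 0.0, 0.0
    for _ in range(steps):
        d_free = k_in * plasma - k_out * free - k_phos * free
        free += d_free * dt
        trapped += k_phos * free * dt  # phosphorylated FIAU cannot exit the cell
    return trapped

no_tk = simulate_trapping(k_phos=0.0)    # cells without the transplanted gene
with_tk = simulate_trapping(k_phos=0.3)  # cells expressing HSV-TK

print(f"trapped tracer without TK: {no_tk:.2f}")
print(f"trapped tracer with TK:    {with_tk:.2f}")
```

However crude, the model reproduces the essential contrast: with no kinase the trapped pool stays at zero, while TK-expressing cells accumulate tracer roughly linearly with time, which is what makes the gamma signal detectable.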
Autoradiography has a distinct drawback: It doesn't work with live animals, because it can only pick up signals through thin slices of tissue. But the Sloan-Kettering team—as well as another led by Michael Phelps and Sam Gambhir at UCLA School of Medicine—has reported at recent meetings that it has used PET to image the expression of the transplanted gene in live animals. The UCLA researchers, along with another group from Harvard Medical School in Boston, have also developed a slightly different technique in which a transplanted gene induces cells to express molecules on their surface that bind to a radioactive probe.
Phelps, who presented his team's latest work last month at the American Chemical Society meeting in Dallas, says the goal is to use these tracer genes to mark the expression of a therapeutic gene. The idea is to splice the tracer gene into a stretch of DNA alongside the therapeutic gene and a promoter that causes both to be expressed at the same time. Where the tracer gene shows up, “you know you have expression of your therapeutic gene as well,” says Phelps. Gene-therapy pioneer Jack Roth of the University of Texas M. D. Anderson Cancer Center in Houston says that type of information is just what the struggling field of gene therapy needs. “We'd love to know what cells the gene is going into and where it's being expressed,” says Roth.
Looking to RNA
Researchers are now working on a PET-based technique that would have far broader applications: imaging the expression of native genes. The goal is to develop radioactive tags that would home in on and bind to specific messenger RNA (mRNA) molecules—the chemical signals that turn on cellular production of a protein. If it works, says Sudhir Agrawal, who heads discovery research at Hybridon, a Cambridge, Massachusetts-based biotech company, the approach “would change the face of diagnostic imaging, allowing doctors to be able to tell if patients are expressing particular genes.” By looking for declines in the activity of genes that enable cancer cells to proliferate, says Agrawal, doctors could measure the effectiveness of therapy.
Hybridon and other biotech companies are already working to develop so-called antisense RNA molecules that would bind to mRNAs from such genes, blocking the production of the proteins for which they code (Science, 23 May 1997, p. 1192). Phelps and other imaging researchers are hoping to piggyback on this technology, by attaching radioactive labels to the antisense molecules. Progress is slow, but it's beginning to pick up. In the April issue of Nature Medicine, a team led by Bertrand Tavitian at the French biomedical agency INSERM in Orsay showed that it could radiolabel antisense molecules and track them through the body.
Still, no one has yet shown that the antisense can actually be imaged once it has bound to its target inside cells, says Phelps. Antisense molecules are quickly broken down by enzymes in the body, and because the molecules are charged, they have difficulty crossing cell membranes to find their targets. Both problems keep the antisense RNA from accumulating inside cells to levels high enough to be picked up by an imager. In Dallas, Phelps reported some progress: By doctoring the backbone of their antisense probes, his team made the RNA less susceptible to enzymatic degradation. But as for the rest of it, he adds, “we're still not there yet.”
Although imaging mRNA directly remains a work in progress, approaches to mapping the distribution of proteins are further along. Researchers at the California Institute of Technology (Caltech) in Pasadena and at Harvard Medical School, for example, have a scheme that relies on MRI. MRI normally detects the particular magnetic signal from hydrogen atoms, which makes it easy to spot hydrogen-rich water molecules. The technique maps the outline of tissues in the body by looking for differences in water content.
To enable MRI to home in on molecules other than water, the Caltech and Harvard teams took advantage of a standard trick for enhancing the contrast of MRI scans: injecting compounds containing paramagnetic metal ions such as gadolinium(III). These ions contain unpaired electrons that interact with neighboring water molecules to boost their magnetic signal.
The Caltech researchers, led by chemist Tom Meade, surrounded gadolinium with bulky organic groups that prevent it from interacting with water. One of these organic appendages is bound to the gadolinium with a sugar molecule that is vulnerable to the enzyme β-galactosidase. When the enzyme is present, it severs the bond to the sugar, water molecules move in to interact with the gadolinium, and the MRI signal is turned up. In tissues where β-galactosidase is absent, the signal is unchanged.
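The switch behavior of Meade's agent can be sketched with the standard linear model for contrast agents, in which the observed water relaxation rate rises in proportion to agent concentration times its relaxivity; cleaving the sugar cap amounts to a jump in relaxivity. The numbers below are illustrative assumptions, not measured values for the Caltech probe.

```python
# Toy model of an enzyme-activated gadolinium contrast agent.
# Relaxivity values are illustrative placeholders.

def observed_r1(gd_conc_mM, cap_cleaved):
    """Longitudinal water relaxation rate R1 (1/s) in the tissue."""
    r1_tissue = 1.0  # baseline rate with no agent present
    # The sugar cap blocks water from reaching the gadolinium,
    # so the intact probe contributes almost nothing to R1.
    relaxivity = 4.0 if cap_cleaved else 0.5  # per mM, per second
    return r1_tissue + relaxivity * gd_conc_mM

before = observed_r1(0.1, cap_cleaved=False)  # beta-galactosidase absent
after = observed_r1(0.1, cap_cleaved=True)    # enzyme has removed the cap

print(f"R1 before cleavage: {before:.2f} 1/s")
print(f"R1 after cleavage:  {after:.2f} 1/s")
```

The point of the design is visible in the two rates: the same concentration of agent brightens the image only where the enzyme has acted.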
At a brain imaging meeting held last week at the National Institutes of Health in Bethesda, Maryland, Meade and his colleagues reported that they can map tissues with and without β-galactosidase in living mice. “There's no reason why this can't be applied to detect any enzyme,” Meade says, and his team is now developing a set of probes to detect other types of enzymes. They are planning to use their probes to try to track embryonic development in action, to see at just what point different enzymes get turned on in cells.
Ralph Weissleder and his colleagues at Harvard and Massachusetts General Hospital in Boston, meanwhile, are developing a less specific technique for boosting the MRI signal. Like some PET scan researchers, Weissleder and his colleagues are turning to genetic engineering for help. At another molecular imaging meeting last February in Bethesda sponsored by the National Cancer Institute (NCI), Weissleder and his colleagues reported that they had coaxed mouse tumor cells to express a modified cell membrane protein that continually pumps paramagnetic iron particles inside cells. Cells expressing this protein lit up in the imager.
At the same NCI meeting, Weissleder reported that he and his colleagues have also made progress in using light instead of radiation or magnetic resonance to detect the signature of specific enzymes associated with tumors. The key is a novel set of probes that fluoresce only when they react with the target enzymes. Weissleder's team starts with a molecule consisting of 10 to 20 chromophores—common organic dye molecules—closely spaced along a backbone of lysine groups. Normally the chromophores fluoresce when hit with infrared light. But in this case they are spaced so tightly that the excitation energy shuttles from one chromophore to another and is eventually emitted as heat instead of light. When the compound is injected into the body, however, it circulates until it encounters enzymes called hydrolases, which snip bonds between lysine groups in the backbone. This sets the chromophores free, allowing them to fluoresce. Because tumors are rich in hydrolases such as cathepsin, Weissleder and his colleagues found that the chromophores selectively light up tumors that had been grafted onto mice.
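The contrast mechanism of the self-quenched probe can be reduced to a simple bookkeeping model: each probe carries a fixed number of chromophores whose per-dye emission is near zero while packed on the backbone and high once released. The quantum yields below are assumed numbers chosen only to illustrate the on/off ratio.

```python
# Toy model of the self-quenched fluorescent probe.
# Quantum yields are illustrative; real values depend on dye and spacing.

def fluorescence(n_chromophores, fraction_cleaved):
    """Relative light emitted by one probe molecule."""
    yield_quenched = 0.01  # closely packed dyes dump energy as heat
    yield_free = 0.9       # released dyes fluoresce normally
    free = n_chromophores * fraction_cleaved
    packed = n_chromophores - free
    return free * yield_free + packed * yield_quenched

dark = fluorescence(15, fraction_cleaved=0.0)    # tissue with no hydrolase
bright = fluorescence(15, fraction_cleaved=1.0)  # tumor rich in cathepsin

print(f"signal in normal tissue: {dark:.2f}")
print(f"signal in tumor:         {bright:.2f}")
```

Even this crude accounting shows why the scheme gives such clean tumor contrast: with these assumed yields, full cleavage boosts emission nearly a hundredfold over the intact probe.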
“It's a very interesting approach that's just in its infancy,” says Meade. And because it uses just infrared light, which does not damage cells, it has “enormous potential,” he adds. He and others note that because the technique relies on light, it can track enzymes only to a depth of a few centimeters in tissue. But Weissleder points out that the snakelike endoscopes already used by surgeons for minimally invasive surgery should allow the technique to be used throughout the body.
Indeed, each new imaging technique has its own strengths and weaknesses. PET, for example, can pick up the faintest signals, but with only 40 or so PET centers worldwide, it will be hard for doctors and patients to use it clinically. MRI is more widely available and can pick out finer features than PET, but for now it remains less sensitive. “No one technique seems to have all the answers,” says Weissleder. But it's too soon to tell which niche each one might fill. For now, he adds, “it's all still new.”