News this Week

Science  17 Apr 1998:
Vol. 280, Issue 5362, p. 376
  1. NEUROPHYSIOLOGY

    Listening In on the Brain

    Marcia Barinaga

    Armed with many electrodes at once, neurophysiologists are uncovering synchrony in neural firing that may represent a new way of encoding information

    If you try to record a symphony using a single microphone that can pick up only the notes played by one musician at a time, you are likely to get an extremely limited impression of the music. Your recording will miss virtually all the symphony's melodic and rhythmic form. That's the position some neurophysiologists say they find themselves in when they try to understand how the brain works using traditional methods of recording brain activity one neuron at a time.

    Those techniques have provided—and still are providing—a wealth of knowledge about how brain neurons encode information by varying their firing rates. But as long as 3 decades ago, some neuroscientists, eager to see what they were missing, started recording from multiple electrodes at once. After a slow start, that approach is finally coming of age. Recent results gleaned from such experiments suggest that the brain encodes information, not just in the firing rates of individual neurons, but also in the patterns in which groups of neurons work together.

    One of the more tantalizing—and controversial—of these discoveries is that neurons frequently fall into step with one another, forming ensembles that play the same tune, as it were, firing in relative synchrony for brief periods, before some neurons drop out of synch, perhaps to join another ensemble. What's more, studies of systems as diverse as the motor cortex of monkeys and the olfactory system of honeybees indicate that these changing patterns of synchrony correlate with specific behaviors such as a monkey's preparation to move its hand or a honeybee's discrimination of odors.

    These findings suggest that the patterns contain information, a situation that if true “would be quite remarkable,” says Eberhard Fetz, a neurobiologist at the University of Washington, Seattle, because it would “dramatically increase” the brain's information-encoding capacity. But, adds Fetz, who studies synchronous neural activity in monkeys, “a lot of people think there is no need for [a synchrony code], because the firing rates themselves provide enough [information-encoding] possibilities.” Indeed, the case for a role for synchrony is far from proven, says neuroscientist Ad Aertsen, of Albert Ludwigs University in Freiburg, Germany, who designs methods for analyzing data from multiple-electrode recordings. “There is still a lot of need for efforts to relate [the patterns] to behavior and see what kind of information they might contain.”

    Tuning in to the ensembles

    Single-electrode experiments have been so successful that neuroscientists have had little incentive to take on the technically challenging task of recording from many neurons at once, says neuroscientist Moshe Abeles, of Hadassah Medical School in Jerusalem: “Most researchers felt they could get a lot of exciting results with a single electrode.” Nevertheless, a handful of laboratories, led by George Gerstein at the University of Pennsylvania—who began his multielectrode experiments in the 1960s—have risen to the challenge. The small field moved slowly at first, as researchers solved technical problems and developed ways to handle the data. But by the 1980s, these pioneers established that neurons often fire together for brief periods—their action potentials mirroring each other to within a few milliseconds.
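    To make “firing together to within a few milliseconds” concrete: the standard tool for spotting such synchrony is the cross-correlogram, a histogram of the time lags between the spikes of two neurons, in which a narrow peak near zero lag signals near-synchronous firing. The sketch below is an illustrative reconstruction in Python, not code from Gerstein's lab; the function name and toy spike trains are hypothetical.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, max_lag=0.05, bin_width=0.001):
    """Histogram of time lags (spike_b - spike_a) between two neurons.

    A narrow peak near lag 0 (a few milliseconds wide) is the classic
    signature of near-synchronous firing.
    """
    lags = []
    for t in spikes_a:
        # Only spikes of B within the lag window around each spike of A.
        nearby = spikes_b[(spikes_b >= t - max_lag) & (spikes_b <= t + max_lag)]
        lags.extend(nearby - t)
    bins = np.arange(-max_lag, max_lag + bin_width, bin_width)
    counts, edges = np.histogram(lags, bins=bins)
    return counts, edges

# Toy demo: two 10-second spike trains sharing 200 spikes jittered by ~2 ms.
rng = np.random.default_rng(0)
shared = np.sort(rng.uniform(0, 10, 200))
a = np.sort(np.concatenate([shared, rng.uniform(0, 10, 100)]))
b = np.sort(np.concatenate([shared + rng.normal(0, 0.002, 200),
                            rng.uniform(0, 10, 100)]))
counts, edges = cross_correlogram(a, b)
print("peak at lag (ms):", edges[np.argmax(counts)] * 1000)  # near 0
```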

    That synchronous firing suggested that neurons can form a functional group, or ensemble, says Gerstein. And those ensembles are not fixed. “In behaving [animals] or situations where you vary the stimulus, these synchronizations occur dynamically,” he says. “Assemblies come and go.” That fluid response to changing situations “was really a surprise,” adds Abeles. It suggested that the changing patterns may encode information.

    But if they did encode information, what was its function? A clue came in the late 1980s from work by neurobiologist Wolf Singer and his colleagues at the Max Planck Institute for Brain Research in Frankfurt, Germany. Singer's team showed that neurons in the visual cortex of cats that respond to different images on a screen, such as two bars, would fire together when the bars moved with the same speed and direction, as if they were part of the same object. But when the bars appeared to be parts of different objects, the neurons fired at the same rates as before but not in synchrony.

    Those findings piqued neuroscientists' interest because they suggested that synchronous firing may help solve the “binding problem”—the brain's need to somehow link neural signals that are related to each other, such as all the cues from neurons responding to different visual aspects of an object (Science, 24 August 1990, p. 856).

    More recent experiments by the Singer team suggest that synchrony may determine not only how the brain perceives stimuli—for example, as parts of the same object—but even whether it perceives them at all. This conclusion comes from studies on strabismic cats, whose eyes are misaligned like those of a cross-eyed person, so that each eye looks at a different part of the surrounding world. The cats don't perceive both conflicting images at once; instead, the brain ignores the input from one eye or the other, perceiving through only one eye at a time.

    The researchers used mirrors to reflect a different moving image on a screen to each of a cat's eyes, and they monitored the cat's eye movements to determine which image it perceived. When only one eye was presented with an image, that is what the brain perceived. But the researchers could snatch perception from that eye by presenting a rival image to the other eye. By altering the comparative contrast of the images, Singer's team found conditions under which a particular image always won. Then they used electrodes to record the activity of neurons in the cats' visual cortex that respond to signals from the eye watching the original image, while using the rival image to control which eye would win the battle for perception.

    When they looked at just the firing rates of the neurons, they could get no clue about whether the image was being perceived by the brain, Singer says. The neurons' firing rates didn't change at all when perception was shifted away. But, he adds, “there was a very clear predictor of perception, and this was the change in synchronicity. When it went up, we knew this eye was the winner; when it went down, we knew this eye was the loser.” That, says Singer, suggested that synchronous firing “selectively raises the saliency” of the signal and “predisposes that activity for further joint processing” by higher areas of the visual system. The experiment is intriguing because it provides “an example where synchrony does something that firing rates don't,” says Fetz.

    Choreographing a move

    Researchers studying other brain regions are finding apparent uses for synchrony as well. In the motor cortex, where neurons fire to direct movements, researchers have found patterns of synchronous activity that seem to help prepare for a move. In one example, Alexa Riehle of the CNRS Center for Research in Cognitive Neuroscience in Marseille, France, in collaboration with Aertsen, trained monkeys to watch a computer screen for a cue, and then, when they received a go command, to touch the screen where the cue had appeared. In each trial, the go command came either 600, 900, 1200, or 1500 milliseconds after the place cue, an arrangement the team chose so that the animals would learn to anticipate when to expect the go signal. They wanted to see how that anticipation—“a pure cognitive signal,” as Riehle calls it, with no input from the outside world—would be represented in neural activity.

    The researchers simultaneously recorded the activity from up to seven neurons in each monkey's motor cortex while it performed the task. The neuronal firing rates jumped when the go signal told the monkey to touch the screen. That was expected, as activity in the motor cortex controls movement.

    Smells good. Firing patterns of individual neurons in the antennal lobe of a locust show transient pairwise synchrony (open boxes at right) in response to an odor. Such patterns help a honeybee, shown at left with extended proboscis, recognize an odor. [Figure credit: Michael Wehr and Gilles Laurent]

    But more interesting, Riehle says, were the patterns with which the neurons fired relative to each other during the waiting period, when the animals anticipated a go signal that didn't come. Although the neurons didn't change their firing rates at the moments of anticipation, says Riehle, “we detected a significantly higher amount of synchronization than expected by chance” surrounding those moments. The synchrony formed a changing and predictable pattern. Among the seven neurons she studied in any trial, neuron number 1, say, might briefly fire in synchrony with neuron 3, then fall out of step with 3 and join in with neuron 5.
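    Riehle's “more synchronization than expected by chance” implies a null model. A minimal version of the idea—a rough sketch in the spirit of the unitary-events approach she and Aertsen use, not their actual statistics, with all names and parameters assumed for illustration—is to compare the observed count of near-coincident spikes against the count expected if the two neurons fired independently at their measured rates:

```python
import numpy as np
from scipy.stats import binom

def coincidence_surprise(spikes_a, spikes_b, duration, bin_width=0.005):
    """Observed near-coincidences vs. the count expected under independence."""
    n_bins = int(duration / bin_width)
    edges = np.linspace(0, duration, n_bins + 1)
    a = np.histogram(spikes_a, bins=edges)[0] > 0   # bin occupancy, neuron A
    b = np.histogram(spikes_b, bins=edges)[0] > 0   # bin occupancy, neuron B
    observed = int(np.sum(a & b))
    p_joint = a.mean() * b.mean()                   # independence assumption
    expected = n_bins * p_joint
    # Probability of seeing at least this many coincidences by chance.
    p_value = binom.sf(observed - 1, n_bins, p_joint)
    return observed, expected, p_value

# Toy usage: two 10-second trains with 50 built-in ~1-ms coincidences.
rng = np.random.default_rng(2)
shared = rng.uniform(0, 10, 50)
a = np.concatenate([shared, rng.uniform(0, 10, 50)])
b = np.concatenate([shared + rng.normal(0, 0.001, 50), rng.uniform(0, 10, 50)])
print(coincidence_surprise(a, b, duration=10.0))    # tiny p-value
```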

    What's more, the synchrony was most intense in those trials in which the monkeys showed the shortest reaction times upon receiving the true go signal, suggesting that the monkeys were highly motivated and attentive. “We think of it [the synchrony] as … the pretuning and preshaping of the networks that will be involved … in the execution of the movement,” says Riehle. As in Singer's experiments, the synchrony seems to be performing a binding function, in this case linking together the neurons that will need to act together to execute the movement.

    Recent experiments by John Donoghue and his colleagues at Brown University in Providence, Rhode Island, suggest that synchrony is involved not only in binding together neurons during movement planning but also in encoding information for actually making the movements. Donoghue's team trained monkeys to use a mechanical drawing arm to move a cursor on a computer screen to the location where a cue appeared.

    When Donoghue's group recorded from up to 21 neurons at once in a monkey's motor cortex while it did the task, they found that, on average, 30% of the neurons fired in synchrony at any one time. A mathematical analysis also showed that the synchrony patterns could predict the direction of the monkey's arm movements. “We know [the synchrony pattern] is carrying information about direction,” says Donoghue. “But we don't know whether that information is available to the brain to be used for anything. All we know is it is there.”

    The signature of a sniff

    One case in which the brain definitely seems to be using the information encoded in synchronous firing is in the olfactory system of insects. Gilles Laurent at the California Institute of Technology in Pasadena and his colleagues study the part of an insect's brain called the antennal lobe, where the neurons process smells. In locusts, the lobe has about 1000 neurons, and about 100 of them fire action potentials in response to any odor. Not only is information contained in the pattern of activated neurons, Laurent says, but there is also information in the neurons' timing with respect to one another. “It is very clear that the temporal patterns are stimulus specific,” he says. “Knowing the precise timing of the spike gives you additional information about the odor.”

    As in Donoghue's case, the existence of that information doesn't necessarily mean the brain uses it. However, Laurent's team was in a position to address that question directly. That's because his group had discovered that a neurotoxin called picrotoxin abolishes the synchronous firing of neurons in the antennal lobe in response to odors, without altering their firing rates. Armed with the ability specifically to get rid of the synchrony, Laurent's team was ready to ask whether synchrony is essential to an insect's ability to distinguish among odors.

    To do that experiment, his team used honeybees, which can be trained to recognize odors associated with a sugar-water reward. Working with Brian Smith at Ohio State University, the researchers trained picrotoxin-treated and control bees to associate an odor with a reward. Bees show they have learned an odor by sticking out their tongues in expectation of the reward. Both groups of bees did equally well.

    But a difference between the groups emerged when the researchers tested whether the bees could discriminate the familiar odor from other odors. Both groups could do this just fine when the test odors were very different from the familiar one. But when the odors were very similar, the picrotoxin group failed. That led Laurent to conclude that the timing of the spikes in relation to each other is important for fine discrimination of odors. The Laurent team's study, says Aertsen, “really addresses the issue: Can you show that if the synchrony isn't there, the system doesn't work?”

    But although “the olfactory system is certainly a specific place where cofiring of neurons means a lot,” says Hadassah's Abeles, he would like to see more evidence in other systems that the brain is really using the information encoded in the synchrony. Singer argues that his binocular rivalry experiments make a good case that it is. “Since it is such a strong predictor of perception, I can't imagine that the brain ignores synchrony,” he says. “It is inconceivable. It would be like saying that the brain ignores firing rate.”

    Despite arguments such as Singer's, some researchers remain skeptical about the function of synchronous firing in the brain. It may just be a product of neurons changing their firing rates at the same time, argues neuroscientist Michael Shadlen of the University of Washington. Like a group of people who all start to run at once, their first few steps may be nearly in unison.

    Researchers who study synchrony often find no change in the firing rates of neurons when they synchronize, but Shadlen notes that changes in firing rate can only be seen when they are time locked to something the researcher controls, such as the appearance of a cue on a computer screen. That's because the only way to measure firing-rate changes accurately is by averaging the response of neurons over many trials. If the neurons are changing their rate in response to something the experimenter doesn't control, such as when a monkey happens to make up its mind, says Shadlen, that event may change with each trial, and consequently the rate change may be invisible to the experimenter.
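    Shadlen's point is easy to demonstrate in simulation: a rate increase that is time-locked to a cue survives averaging across trials, while the same increase with a trial-varying onset all but vanishes from the average. A toy sketch (all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_bins = 200, 1000            # 200 trials of 1 s in 1-ms bins
base_rate = 0.02                        # baseline spike probability per bin

psth_locked = np.zeros(n_bins)          # trial-averaged spike histograms
psth_jittered = np.zeros(n_bins)
for _ in range(n_trials):
    p_locked = np.full(n_bins, base_rate)
    p_locked[500:550] += 0.05           # 50-ms rate bump, always at 500 ms
    p_jit = np.full(n_bins, base_rate)
    onset = rng.integers(100, 850)      # same bump, uncontrolled onset
    p_jit[onset:onset + 50] += 0.05
    psth_locked += rng.random(n_bins) < p_locked
    psth_jittered += rng.random(n_bins) < p_jit

# The locked average shows a clear bump; the jittered one is nearly flat,
# even though every single trial contained the identical rate change.
print("locked bump/baseline:  ",
      psth_locked[500:550].mean() / psth_locked[:100].mean())
print("jittered bump/baseline:",
      psth_jittered[500:550].mean() / psth_jittered[:100].mean())
```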

    In that case, says Shadlen, studying synchrony would still be important, because it can reveal mental events researchers otherwise would have no way of detecting. “I'm saying synchrony lets us find the rate changes, and they are saying synchrony is the code,” says Shadlen. Regardless of who is right, the study of the synchronous firing of neurons is sure to help researchers record the music of the brain for years to come.


  2. INFRARED ASTRONOMY

    A Water Generator in the Orion Nebula

    James Glanz

    The newfound water is too distant for seaside vacations and too tenuous to be considered a natural resource. But when, like an orbiting dowser, instruments aboard the Infrared Space Observatory (ISO) picked up an intense glow from a cloud near the Orion Nebula, astronomers knew they had found a concentration of water that makes the Pacific Ocean look like a drop in the bucket. The measurement, which will be reported in the 20 April issue of Astrophysical Journal Letters, turned up the highest concentration of water ever seen outside the solar system.

    The find confirms theories of how shock waves heat such clouds and induce oxygen to combine with hydrogen and form water vapor. “There was no observational confirmation until now,” notes Bruce Draine of Princeton University, who says the field would have been “dumbfounded” if ISO had come up dry when it pointed its instruments at Orion. In two press releases, the team suggests that the find also offers clues to how water collected in the solar system and on Earth—a speculation that leaves researchers like Draine feeling skeptical.

    Most previous detections of water outside the solar system have relied on naturally occurring masers—the microwave equivalent of lasers. Because even minute amounts of water can give large maser emissions, pinning down the concentrations has proved impossible. Instead, ISO's Long Wavelength Spectrometer captured infrared emissions from water vapor caused by thermal agitation. (The measurements couldn't be made from the ground because of the veil of water in Earth's atmosphere.) “We have eight different wavelengths at which water is emitting,” says David Neufeld of The Johns Hopkins University. “That gives us a completely unequivocal signature that it's water we're seeing.”

    The team—Neufeld, Martin Harwit of Cornell University, Gary Melnick of the Harvard-Smithsonian Center for Astrophysics (CfA), and Michael Kaufman of NASA Ames Research Center—pointed the instruments at an area in the cloud about a third of a light-year across, near which lies a hot, young star in a nursery of stellar formation called Orion BN-KL. Such stars throw off powerful winds and jets, which drive shock waves when they collide with surrounding material. According to theory, the shocks heat the gas, speeding up chemical reactions that cause all of the free oxygen to combine with hydrogen—by far the most abundant species—to form water.

    At about 2000 kelvins, the shocked gas seen by ISO is well above the threshold for that process to take place, says Neufeld. The precise levels of water that were measured—a concentration, relative to hydrogen, about 20 times higher than ISO had ever seen before—“are in essentially perfect agreement with theory,” he says. “With the radio masers you are seeing little, tiny diamonds” of emission, says James Moran of CfA. “Now it seems that this exists on a scale 100,000 times larger.”

    “It's an exciting discovery,” agrees CfA's Alex Dalgarno, the editor of Astrophysical Journal Letters. But even though Neufeld calculates that the shocks in the cloud are creating water at a rate fast enough to fill Earth's oceans 60 times a day, Draine and Dalgarno cast doubt on the more speculative idea that water from such a cloud might find its way into a forming planetary system. “This at least raises the possibility, [but] how would the water survive the formation of the solar system?” asks Dalgarno.

    Shocks within the forming system are a more likely candidate, says Draine. Neufeld agrees that the hypothesis of shocks as the source of our water should be considered “just as a general notion” and not solely as a way of sprinkling the solar system with primordial water. But for observers who have been thirsty for a good look at water in space, that could be enough to tell them the surf's up.

  3. MICROBIOLOGY

    A Possible New Approach to Combating Staph Infections

    Evelyn Strauss
    Evelyn Strauss is a free-lance writer in San Francisco.

    In the past 2 years, infectious-disease specialists have begun seeing one of their worst nightmares come true. They may be losing their last line of defense against the dangerous pathogen Staphylococcus aureus, which causes infections ranging from skin abscesses to such life-threatening conditions as pneumonia, endocarditis, septicemia, and toxic shock syndrome. Roughly one-third of the strains currently isolated from patients who acquire S. aureus infections while hospitalized are resistant to all antibiotics but one, vancomycin—and now resistance to that antibiotic has begun cropping up. But new research suggests an approach to combating S. aureus that may sidestep the organism's ability to develop resistance.

    On page 438, Naomi Balaban, an infectious-disease researcher at the University of California, Davis, and her colleagues report that they can decrease the incidence and severity of S. aureus infections in mice by blocking the activity of a protein called RAP, which controls the production of toxins and other proteins that make the bacterium pathogenic. The work is still in its early stages and many questions remain, but it suggests that disabling RAP—perhaps by vaccinating people with the protein so that the immune system takes it out of commission, or by developing drugs that prevent it from doing its job—might keep the microbe from spreading within the body before the patient's immune system flushes it out.

    Unlike conventional antibiotics, such drugs would interfere with disease without killing the organism directly, and in the absence of such selective pressures, strains resistant to anti-RAP therapies may be less likely to develop. “This opens a whole new strategy for treating or preventing one of the most serious hospital infections we contend with,” says Julie Gerberding, an infectious-disease specialist at the University of California, San Francisco. “Methods that prevent disease without selecting for drug resistance could really help us—and we're going to need help soon.”

    Balaban discovered RAP several years ago while a postdoctoral fellow in Richard Novick's lab at New York University (NYU) School of Medicine. Her work there suggested that the protein helps S. aureus shift from the early stage of infection, in which the bacteria attach to host cells, to its later phase in which the organism spreads through the body, producing toxins and other molecules that destroy host tissues and cripple immune cells that might otherwise kill it. The bacteria secrete RAP, which has a molecular weight of 38,000, and as they multiply, the protein gradually builds up outside the microbial cells until it reaches a critical concentration.

    At this point, it apparently sends a signal to the bacteria that culminates in the production of another regulatory molecule called RNAIII. (RAP stands for RNAIII activating protein.) This RNA then turns on the genes that produce toxins and other proteins needed for S. aureus to cause disease. Because RAP plays such a critical role in S. aureus pathogenicity, Balaban thought it would be a good target for S. aureus prevention strategies.

    In one set of experiments, the researchers inoculated mice with RAP purified from supernatants of S. aureus cultures. The vaccinated animals developed antibodies to RAP, and these mice fared much better than controls when the researchers injected S. aureus under the animals' skins. Seventy percent of the controls developed skin lesions, compared to 28% of those inoculated with RAP, and more of the control mice died than did the RAP-inoculated ones.

    Because it takes weeks to mount an immune response, such a vaccine would most likely benefit those whose risk of infection can be anticipated, such as diabetics and dialysis and surgery patients, as well as firefighters and military personnel. But another strategy might help those who are already sick.

    While Balaban was at NYU, she discovered that a mutant—and nonpathogenic—S. aureus strain produces a small peptide that blocks RAP's ability to turn on RNAIII production. Exactly how it does this is unclear, but recently the Balaban team determined the amino acid sequence of the peptide, called RIP for RNAIII-inhibiting peptide, and found that it resembles a stretch of RAP near the larger protein's amino terminus. This suggests, Balaban says, that RIP might bind to the same molecule that RAP does when it triggers RNAIII production, preventing the larger molecule from acting.

    To see whether RIP inhibits infection by a pathogenic S. aureus strain, the team added the peptide to S. aureus and then injected the mixtures into mice. A smaller percentage of the animals injected with the RIP-pretreated bacteria developed lesions, compared to mice that received control bacteria. Jean Lee, a microbiologist at Harvard Medical School in Boston, points out, however, that “there's a big leap between preincubating the bacteria with RIP and treating the animal after it has an infection.”

    If RIP and RAP act as postulated, it may be possible to design small-molecule drugs that block RAP activity and thus S. aureus pathogenicity. But Balaban's former colleague Novick has questioned whether RAP does act as the researchers originally proposed. He and his colleagues have since identified a much smaller peptide that activates RNAIII and have suggested that Balaban's 38-kilodalton protein may have been contaminated with the real activator—the peptide.

    In response, Balaban says her current work shows that an S. aureus strain lacking the gene for the peptide Novick identified still produces RAP that activates RNAIII perfectly well. She suggests that the peptide Novick identified and RAP both activate RNAIII, and they likely function in the same pathway. Otherwise, interfering with RAP alone would not diminish S. aureus pathogenicity. Novick agrees that his experiments do not rule out this possibility.

    Several additional uncertainties also remain, such as whether anti-RAP strategies will be effective against other conditions caused by S. aureus, such as septicemia, and even whether the bacterium will find its way around treatments directed against RAP. “I bet on the bacteria in all cases,” says Barbara Iglewski, a microbiologist at the University of Rochester School of Medicine and Dentistry in New York. “I'm sure they'll figure out a way around this [vaccination strategy].” Still, she says such new strategies are needed: “The fact remains that we've run out of good drugs and we need alternative approaches. Let's try to keep one step ahead of the bacteria.”

  4. MEETING BRIEFS

    Anthropologists Probe Genes, Brains at Annual Meeting

    Ann Gibbons

    Salt Lake City—More than 900 anthropologists, geneticists, and primatologists gathered here from 31 March to 4 April for the 67th Annual Meeting of the American Association of Physical Anthropologists. There, they put their heads together to discuss brain evolution and how human mate choice marks our genes.

    Sizing Up Ancient Brains

    Hominid brain evolution is supposed to have started out slowly and steadily in our apelike ancestors, picking up steam only when our genus, Homo, appeared 2 million years ago. But one surprisingly large ancient skull has lately thrown a wrench into this model: the massive cranium of a 2.6-million-year-old South African australopithecine known as Mr. Ples. One published report put his cranial capacity at 600 cubic centimeters (cc)—bigger than the brains of some of the first members of our own genus. “The implications for models of brain evolution were profound,” says Glenn Conroy, a paleoanthropologist at Washington University School of Medicine in St. Louis, Missouri.

    But at the meeting, after using modern biomedical imaging tools to measure Mr. Ples's cranium, Conroy and colleagues revealed a new estimate for his brain size: about 513 cc. That's decidedly larger than the roughly 385-cc brains of chimpanzees but smaller than estimates for early Homo brains—about 640 cc—and much smaller than modern human brains at about 1350 cc. Mr. Ples's revised brain size puts the slow, early expansion model back on firmer footing. But Conroy warns that it still may be vulnerable: He suspects that the brain capacity of several hominids has been overestimated—and that brain size didn't expand until later in human history. If so, “this has dramatic implications for the big picture of brain evolution,” says paleoanthropologist Dean Falk of the State University of New York, Albany. “We'll have to rethink the early stages.”

    Paleoanthropologists' view of hominid brain evolution starts about 3.5 million years ago, with members of Australopithecus afarensis, the species that includes the famed “Lucy.” Although Lucy and her brethren walked upright, their brains averaged about 413 cc, only slightly larger than a chimpanzee's. But between 3 million and 2 million years ago, two new species of more robust australopithecines—A. africanus in South Africa and A. boisei in East Africa—appeared, and their brain vaults were thought to be slightly larger. Excluding Mr. Ples, the largest of his kind, A. africanus brains have been reported to average about 440 cc; A. boisei brains were thought to be about 463 cc, with some later members of the species weighing in as high as 500 to 530 cc.

    This gradual increase spawned the leading model of early brain evolution. Researchers concluded that our ancestors' brains began to expand with A. africanus, who therefore may have had the capability to make tools or use rudimentary language. But they thought the major leap in brain size didn't come until 2.3 million years ago, with the appearance of Homo habilis, which has an average brain capacity of 640 cc, says paleoanthropologist Phillip Tobias of the University of the Witwatersrand in Johannesburg, South Africa. The data behind this model stem from only a few dozen fossils, however, so one skull could change the picture. If Mr. Ples (also known as STW 505) indeed had a cranial capacity of 600 cc, one of the traits thought unique to Homo—big brains—would also be found in australopithecines.

    To settle the issue of the size of Mr. Ples's brain once and for all, Conroy traveled to South Africa to perform computerized tomography (CT) scans of the skull. Working on his computer in St. Louis, he then reconstructed a virtual three-dimensional model of the skull and calculated its internal volume. The answer of 513 cc surprised him, so he joined forces with anthropologists Horst Seidler and Gerhard Weber at the University of Vienna, who create physical models in resin from computer models. They got similar measurements. And with their computer-generated resin model, they also used the traditional method of measuring the volume of water inside the brain case—and again got the same result. “It's very reassuring that virtually identical results were given by the old and new methods,” says Tobias, a co-author on Conroy's paper.
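    The volume computation itself is simple arithmetic once the endocranial cavity has been segmented from the CT slices: count the voxels labeled as cavity and multiply by the volume of one voxel. A schematic sketch (hypothetical function and data, not Conroy's actual pipeline):

```python
import numpy as np

def endocranial_volume_cc(cavity_mask, voxel_size_mm):
    """Volume = (number of cavity voxels) x (volume of one voxel).

    cavity_mask   : boolean 3D array marking endocranial voxels,
                    as segmented from a stack of CT slices
    voxel_size_mm : (dz, dy, dx) voxel edge lengths in millimeters
    """
    voxel_mm3 = float(np.prod(voxel_size_mm))
    return cavity_mask.sum() * voxel_mm3 / 1000.0   # 1 cc = 1000 mm^3

# Sanity check: a fully marked 100 x 80 x 80 block of 1-mm voxels
# is 640,000 mm^3, i.e., 640 cc.
mask = np.ones((100, 80, 80), dtype=bool)
print(endocranial_volume_cc(mask, (1.0, 1.0, 1.0)))  # 640.0
```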

    These results suggest to Conroy that it's time to reexamine more specimens. Mr. Ples has the biggest brain case of any australopithecine, but published brain size estimates for some early hominids are “similar to, or even larger than, Mr. Ples's is now”—and so may be overinflated, he says. He is reevaluating another skull of A. africanus, known as STS 71, and thinks it also is smaller than its published estimate. “It may be that australopithecines have a lower mean brain size than previously thought,” says Conroy. That would force a rethinking of the “entire early picture of brain evolution,” says Falk.

    Others, such as Columbia University neuroscientist Ralph Holloway, who has estimated brain size in many early hominids, are pleased to see Mr. Ples's measurements corrected but think it's premature to say that other earlier estimates are wrong. “I don't think it's going to have much of an impact,” says Holloway. But Tobias notes that as CT scans allow researchers to estimate capacities for crania that are full of material, there will likely be more measurements—and perhaps more surprises.

    Indian Women's Movement

    The marriage choices of most people in India—who today number almost 1 billion, or one-sixth of the world's population—were controlled for more than 3000 years by the strict rules of the Hindu caste system. Now, by studying men of different castes, a group of researchers in Utah, India, and Arizona has found that those rules have left a clear mark on the genes of modern Hindus. The researchers traced maternal and paternal ancestry in the same men by analyzing markers on the Y chromosome—which is inherited only through the paternal line—and mitochondrial DNA (mtDNA), which is inherited maternally. The results indicate that women sometimes married up and ascended the social ladder into higher castes. But men tended to stay in the castes into which they were born, says Lynn Jorde, a human geneticist at the University of Utah in Salt Lake City, and co-author of the report presented at the annual meeting.

    Although researchers have tried for years to use blood groups and genes to track differences among Hindu castes, this study is one of the first to show the impact of social rules on the genome. “It is one of the clearest applications of molecular genetics to an anthropological question about mate choice and population substructuring,” says Rebecca Cann, an evolutionary biologist at the University of Hawaii, Manoa. The work offers genetic proof that “humans choose mates according to certain rules … [which are often] different for men and women.”

    In their study, Jorde, University of Utah pediatric geneticist Michael Bamshad, and research specialist Scott Watkins worked with anthropologists Bhaskara Rao and J. M. Naidu of Andhra University in Vishakhapatnam, India. The team collected blood samples from 300 unrelated men from 12 populations spanning the Hindu caste hierarchy and set up a molecular genetics lab at Andhra University.

    The researchers compared the DNA sequences of men of different castes, measuring how many differences there were in the same 400-base-pair segment of mtDNA and in seven markers, or segments, of their Y chromosomes. The mtDNA showed a slight blurring of caste lines. Men in closely ranked castes had similarities in their maternally inherited mtDNA, but there were few similarities between the mtDNA of men in the highest castes, such as Brahmins, and those in the lowest castes. “The genetic distances between upper and lower castes are much greater than [those] between upper and middle castes and [between] middle and lower castes,” says Jorde. This gradient means, says Jorde, that these men's maternal ancestors had moved between adjacent ranks, mixing the genes between closely related castes. And historical records and strict social rules make it clear that women must have moved up, rather than down. “You get this ladder effect, where women tend to move to a caste of the next higher rank but [don't make] dramatic leaps from the lower castes to the very highest,” says Jorde.
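    The “genetic distances” Jorde describes reduce, in their simplest form, to counting base differences between aligned sequences and averaging over pairs of men drawn from two castes. A simplified illustration (the study's statistics were more elaborate; the sequences and names below are made up):

```python
def pairwise_differences(seq1, seq2):
    """Count mismatched bases between two aligned sequences."""
    return sum(a != b for a, b in zip(seq1, seq2))

def mean_between_group_distance(group1, group2):
    """Average pairwise difference across all cross-group pairs of aligned
    mtDNA segments (e.g., the 400-base-pair region used in the study)."""
    pairs = [(s1, s2) for s1 in group1 for s2 in group2]
    return sum(pairwise_differences(a, b) for a, b in pairs) / len(pairs)

# Toy example with short stand-in sequences: a larger mean distance
# between two groups implies less historical gene flow between them.
upper_caste = ["ACGTACGT", "ACGTACGA"]
lower_caste = ["ACGTTCGT", "TCGTTCGT"]
print(mean_between_group_distance(upper_caste, lower_caste))  # 2.0
```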

    The distribution of markers on the Y chromosome showed a very different pattern. Men in the highest castes didn't share any more genetic markers with men in middle castes than they did with men in lower castes, suggesting little crossing of caste lines. In other words, “the men are stuck,” says Jorde. The study confirms a pattern found in cultures worldwide—that women can move up in social rank, because higher ranking males will marry lower ranking females, but that low-ranking males have the least choice in mates, notes molecular anthropologist Mark Stoneking of Pennsylvania State University, University Park.

    The study also gives the geneticists confidence that they can detect historical and social events in the genome. The effects of caste were still evident even though the system was outlawed in the 1960s. And the study shows an Asian origin for people in most castes, but the DNA of people in the upper castes has some similarities to that of Caucasians, which fits historical records that say the caste system was imposed by Caucasians sweeping in from the northwest. “It should make us optimistic about the power of genetic studies to reveal history,” says Jorde.

  5. PLANETARY SCIENCE

    Fiery Io Models Earth's First Days

    Richard A. Kerr

    Houston—To planetary geologists, the closest thing in the solar system to biblical fire and brimstone can be found on Jupiter's moon Io. Scorching eruptions pit the sulfur-laden surface of Io, which is the most volcanically active body known. Now this planetary hell has gotten even hotter. Using sophisticated instruments aboard the Galileo spacecraft orbiting Jupiter, researchers observed a surface temperature of about 1800 kelvins at the site of a particularly powerful eruption, they reported last month at the annual Lunar and Planetary Science Conference here.

    “This is probably the highest temperature volcanism ever seen anywhere,” says planetary geologist Ashley Davies of the Jet Propulsion Laboratory (JPL) in Pasadena, California. “It's really exciting,” because such high temperatures imply a sort of volcanism that has not been common on Earth for billions of years. “We can use Io as a volcanological laboratory to test our models of terrestrial volcanism,” Davies adds.

    The devil's playground. A new volcanic hot spot at Pillan Patera produced record high temperatures and spewed dark debris (circular deposits, upper right) over an area the size of Arizona. [Figure credit: NASA/JPL]

    Io's behavior has made it the oddball of the solar system since the two Voyager spacecraft flew by in 1979. Thanks to Jupiter's gravitational kneading, Io burns with unusually hot inner fires, evidenced in Voyager images of umbrella plumes of sulfur dioxide shooting hundreds of kilometers above volcanic calderas. And the rest of Io's surface looked odd too, coated with sulfur of various pale yellow hues. Researchers assumed that lavas of elemental sulfur at about 700 K were reshaping the surface.

    By the mid-1980s, increasingly sophisticated infrared observations from ground-based telescopes had revealed surface temperatures on Io as high as 1450 K—too hot for sulfur, which melts at much lower temperatures. That suggested basaltic lavas enriched in iron and magnesium, which melt at higher temperatures and are similar to the magmas that feed Earth's midocean ridges.

    Now Io's perceived temperature has taken yet another jump. Planetary scientists Alfred McEwen and Laszlo Keszthelyi of the University of Arizona in Tucson and their colleagues reported at the meeting that Galileo's Solid State Imaging instrument recorded hot spots, including the caldera Pillan Patera, that reach temperatures in excess of 1600 K and probably as high as 2000 K. The Near-Infrared Mapping Spectrometer on Galileo confirmed a temperature for Pillan of 1825 K, reported Davies. About 30 hot spots—many of which are associated with sulfur dioxide plumes—glow in the infrared at temperatures greater than 1000 K, McEwen reported.

    “Something very hot and very vigorous is going on” at Io's volcanic hot spots, says Davies. He envisions fiercely hot magma glowing through cracks in a thin solid crust atop churning lava lakes, lobes of flowing lava breaking out from encrusted volcanic material, or even lava spouting like a fountain from a rift. Whatever form the eruptions take, the magma is like nothing seen on Earth for a long time, says planetary scientist Dennis Matson of JPL. The temperatures of basaltic magma top out at 1500 K, so Io's magmas are probably ultramafic—so rich in magnesium and iron that their melting points approach 2000 K, he says.

    Because an ultramafic composition is the result of repeated cycling that concentrates magnesium and iron, this fits planetary scientists' latest view of Io as the scene of the most intense geological processing in the solar system. Jupiter has driven enough heat through the moon to melt every bit of it 40 times over, estimate Keszthelyi and McEwen. To find a time on Earth that was in any way comparable, “you're looking at a period before continents formed,” says Matson, a time that is not preserved in Earth's geologic record. The Earth of more than 3.8 billion years ago would still have been hot from its formation and heated further by frequent comet and asteroid impacts. It may have been destroying and recycling its crust so rapidly—as Io appears to be doing now—that continents couldn't form.

    Scientists using Io to understand the mysteries of the early Earth will get a windfall if Galileo makes its first really close pass by Io, scheduled for October 1999. Assuming Jupiter's fierce radiation belts haven't killed or crippled the spacecraft by then (Science, 13 March, p. 1628), planetary geologists will get a good look inside the gates of hell.

  6. RENEWABLE ENERGY

    A Record in Converting Photons to Fuel

    Robert F. Service

    It's the ultimate in clean energy: Generate fuel from water using only the power of sunlight, and when the fuel burns, it gives off nothing but water. Outlandish as it sounds, the dream was accomplished decades ago by using solar energy to split water into its components, oxygen and hydrogen—a powerful fuel that can be used to run everything from power plants to cars. But as a commercial proposition, the process has been a nonstarter because it's so inefficient and expensive. The two steps involved—generating electricity from sunlight and using it to split water—normally take place in separate devices, and energy is lost in between.

    Now on page 425, researchers at the National Renewable Energy Laboratory (NREL) in Golden, Colorado, have come up with a single device that accomplishes both tasks and has set a world record in efficiency for converting photons to fuel. The new solar-powered water splitter, built by NREL chemists John Turner and Oscar Khaselev, converts about 12.5% of the energy in sunlight to gaseous fuel—nearly double the previous record achieved by a conventional two-step process.

    Solar water splitter. Two different semiconductors produce electrical charges with just the right energies to catalyze the split. [Source: Khaselev et al.]

    Marye Anne Fox, a chemist at the University of Texas, Austin, calls this efficiency “impressive.” Fox's UT colleague Adam Heller adds that the new device avoids one pitfall common to conventional solar water splitters: It doesn't require the additional external energy that others need to get the job done. The NREL device “is a stand-alone cell that is nicely efficient and no longer needs external energy,” says Heller. “It's a nice milestone.” Even so, it's not about to catalyze a wholesale switch from fossil fuels to hydrogen, because the semiconductors at the heart of the new devices are expensive; they are currently used only for specialized applications, such as powering satellites.

    Splitting water to create gaseous hydrogen and oxygen is quite simple. Stick a pair of metal electrodes into water, apply a voltage across them, and presto, oxygen gas is liberated at one electrode and hydrogen gas at the other. The process, known as electrolysis, is commonly used to produce pure hydrogen for making everything from food oils to computer chips. But it's expensive and requires fossil fuels to generate the electricity that powers the process. So energy researchers have long dreamed of using solar energy to drive the electrolysis.

    The basic principle of generating electricity from sunlight is, again, well known. When photons from sunlight strike normally static electrons in some semiconductor materials, they kick the electrons into a higher energy level, allowing them to roam about. Left behind are electron vacancies, or “holes,” that act like positive charges that can also migrate through the material. Additional semiconductor layers on either side of the absorbing layer then channel the electrons and holes in opposite directions, creating an electric current that can perform work or be stored in a battery. But, unfortunately, combining this so-called photovoltaic effect with electrolysis in a single device isn't simple.

    First, there's a compatibility problem. Solar cells must sit in water in order to split it into hydrogen and oxygen, but semiconductors that are efficient light absorbers are often unstable in water. Then there's the energy problem. A water molecule splits into hydrogen and oxygen atoms only if each atom absorbs electrical charges packing very precise—and different—amounts of energy. In conventional electrolysis, the metal electrodes carry electrical charges with a wide energy range, allowing those with just the right amount of energy to catalyze the split. But semiconductors are more finicky. Charges in these materials can exist only at well-defined energy levels, and “nature has not been kind to us in this instance,” says Turner. The only semiconductor materials that produce electrical charges at just the right levels to generate both hydrogen and oxygen are very poor absorbers of sunlight.
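    For reference, the textbook electrochemistry behind those “very precise—and different—amounts of energy” (standard values, not figures from the article): water splitting pairs two half-reactions that sit at fixed potentials, so the overall cell needs at least 1.23 volts.

```latex
% Standard water-splitting half-reactions at 25 degrees C:
\begin{align*}
\text{anode (oxygen evolution):}\quad
  2\,\mathrm{H_2O} &\rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-,
  & E^\circ &= +1.23\ \mathrm{V} \\
\text{cathode (hydrogen evolution):}\quad
  4\,\mathrm{H^+} + 4\,e^- &\rightarrow 2\,\mathrm{H_2},
  & E^\circ &= 0.00\ \mathrm{V}
\end{align*}
```

    In band terms, a semiconductor's photoexcited electrons must sit above the hydrogen-evolution level and its holes below the oxygen-evolution level—which is why Turner laments that the few materials with levels in the right places absorb sunlight poorly.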

    To overcome these problems, Turner and Khaselev constructed a sandwichlike device that pairs the talents of two different semiconductor materials. One—made from gallium indium phosphide—absorbs ultraviolet and visible light and produces mobile electrons with the right energy to produce hydrogen. The other—made from gallium arsenide—absorbs infrared light and produces holes with the right amount of energy to produce oxygen. Gallium indium phosphide is stable in water, so it can be used directly as an electrode. But the unstable gallium arsenide layer is sealed with a transparent epoxy coating to protect it, and the holes are shuttled to a separate platinum electrode.

    Although the new device appears to be efficient and stable, Turner estimates that it would produce hydrogen at three times the cost of the cheapest method for bulk production of hydrogen, in which hydrogen atoms are stripped from natural gas by superheated steam. Turner and his colleagues are now trying to engineer cheaper semiconductors to perform the water-splitting reaction. If they succeed, the energy of the future may finally find its way to the present.

  7. HEART DISEASE

    Signaling Path May Lead to Better Heart-Failure Therapies

    Marcia Barinaga

    Like the waistband in your favorite old pajamas, overstressed hearts often lose their elasticity, becoming saggy and stretched out. Hearts in this state, called congestive heart failure, are ineffective at pumping blood and are prone to arrhythmias that cause sudden death. With 500,000 new cases of heart failure diagnosed each year in the United States alone, cardiologists who treat the problem are eager for new and better drugs. They may soon get their wish.

    In a paper published in today's issue of Cell, biologist Eric Olson of the University of Texas Southwestern Medical Center at Dallas and his colleagues report a big step toward uncovering the underlying pathology of congestive heart failure. They have found an internal signaling pathway in cardiac muscle cells that, when pushed into overdrive, can cause heart failure in mice. What's more, the team showed that the immunosuppressive drug cyclosporin A inhibits the pathway, blocking heart failure in the animals.

    “It is quite exciting,” says Seigo Izumo, a cardiologist at Harvard Medical School in Boston. “This paper begs for the examination of this [approach] in a different animal model, and if that is promising, in the human condition.” But, he cautions, “it is too early to become too optimistic,” because no one knows yet whether faults in this pathway are a common cause of human heart failure.

    Heart failure develops when the heart tries to compensate for abnormal stress on its muscular wall by growing bigger. That enlargement “starts out being OK,” and boosting the heart's power, says cardiologist Michael Bristow, of the University of Colorado Health Sciences Center in Denver, “but it goes too far.” The muscle cells get long and thin, and like spent elastic, they fail to contract properly, resulting in sluggish blood flow, shortness of breath, and buildup of fluids in tissues. Fibrous deposits form on the heart's walls and can cause arrhythmic beating and sudden death.

    A variety of stresses can cause this heart muscle hypertrophy, as it is called, ranging from a congenitally weak heart muscle to a narrowed aortic valve, high blood pressure, or death of parts of the muscle in a heart attack. Most of these causes seem to have one thing in common: They raise calcium levels in heart cells. How that might effect hypertrophy was unclear, however.

    That's where Olson's work comes in. While studying a protein called GATA4, a DNA binding protein that turns on heart cell genes during hypertrophy, his team found that GATA4 binds to a protein called NF-AT3. That was exciting, Olson recalls, because NF-AT3 belongs to a family of proteins that regulate genes in response to calcium levels in activated T cells of the immune system. The discovery, he says, “suggested how [calcium and hypertrophy] could be interlinked.”

    In T cells, a calcium-sensitive enzyme called calcineurin activates NF-AT proteins by removing a phosphate group they carry. This allows the proteins to enter the nucleus and regulate genes. The researchers thought the same thing might happen when calcium levels rise in stressed heart cells, allowing NF-AT3 to go to the nucleus, link up with GATA4, and turn on the genes for hypertrophy.

    Two experiments in cultured heart cells suggested they were right. Such cells respond to hypertrophy-triggering hormones such as angiotensin II by turning on genes, growing larger, and beefing up their contractile machinery. The team found that the immunosuppressant drugs FK506 and cyclosporin A, which inhibit calcineurin's action, blocked these changes in cultured heart cells. They also showed that NF-AT3 and GATA4 together turn on one of the genes that comes on in heart cells during hypertrophy. Those results mean that, at least in these cultured cells, NF-AT3 is activated by calcineurin and works with GATA4 to turn on genes.

    The next key issue was whether this signaling system triggers heart failure in animals. To find out, the researchers created mice carrying a mutant calcineurin gene that causes their hearts to make a form of the enzyme that turns on at birth and keeps the NF-AT3 pathway active regardless of calcium levels. To the group's surprised satisfaction, the mice “recapitulate every physiological and pathological aspect of human heart failure,” says Olson, “including extensive fibrosis, arrhythmias, and sudden death.” What's more, mice treated with cyclosporin A from an early age showed no signs of disease.

    “This gives strong hope” that drugs such as cyclosporin might halt or even reverse congestive heart failure in humans, says cardiologist Michael Schneider of Baylor College of Medicine in Houston. But he and others say that hope should be tempered with caution. Although calcineurin action is clearly responsible for the heart failure in the Olson team's mice, says Izumo, “an unknown question is how much of a role this pathway plays in other models of hypertrophy or failure, and especially in the human condition.” One way to address that question, he says, is to test the effect of cyclosporin A in other mouse models of the condition, many of which are based on genetic mutations in heart proteins.

    If cyclosporin works in these animals, it may warrant a test in humans, says Izumo. Heart failure is serious enough to justify the risks of immunosuppression from cyclosporin, he argues, and as the drug is already in use for organ transplants, heart failure trials could begin without a lengthy federal approval process.

    In the longer term, the discovery may lead to better drugs for heart failure, says Jerry Crabtree of Stanford University Medical Center, who studies NF-AT signaling in the immune system. Because GATA4 is a heart-specific gene regulator, screening for small molecules that break NF-AT3's interaction with GATA4 might lead to heart-specific drugs with few side effects, says Crabtree. “I would be very, very surprised if some drug company didn't immediately jump on this.”

    An important issue is whether blocking the calcineurin pathway merely halts or actually reverses the hypertrophy. People diagnosed with heart failure generally have advanced disease, says Bristow, so “what we really need is to be able to turn the clock back.” But even if that can't be done, Bristow notes that blood tests being developed can detect cardiac hypertrophy in its early stages and will soon be used to screen people who are at risk. “I guarantee within the next 5 years this will be a viable approach,” he says. If that prediction comes to pass, calcineurin- or GATA4-blocking drugs may be part of the prescription for people in the early stages of heart failure.
