News this Week

Science  19 Sep 1997:
Vol. 277, Issue 5333, pp. 1758
  1. BIOPHYSICS

    Mastering the Nonlinear Brain

    1. James Glanz

    By applying concepts from mathematical physics, researchers hope to understand the collective dynamics of billions of neurons—and perhaps control them in epilepsy

    CHICAGO—Wearing shorts and a blue T-shirt with the word “adidas” printed over his heart, George K. sits cross-legged on a hospital bed as his mother looks on, waiting for him to have an epileptic seizure. A bundle of multicolored wires emerges from a huge bandage on the boy's skull and runs to a computer, where rows of jagged traces on a screen record the electrical activity of his brain. The stamp of a seizure on those traces could reveal whether cutting out a piece of George's brain might safely cure his intractable epilepsy.

    Abstract topography.

    The high points on this landscape are “unstable periodic orbits”—repetitive patterns where a nonlinear system like the brain briefly lingers.

    P. SO

    Traces like those scrolling from George's brain are also feeding a new interdisciplinary field of research, which might one day make such surgery unnecessary. Recordings of epileptic seizures, along with other studies of electrical activity in human and animal brains, are linking neuroscience with a rarefied branch of mathematics called nonlinear dynamics. This discipline was born as theorists tried to make sense of the complicated rhythms of everything from wildly swinging pendulums connected by springs, to the patterns formed by chemical reactions on a metal surface, to wave trains steepening and crashing on a beach. Now a coterie of neuroscientists, biophysicists, and mathematicians is finding that the same concepts can also help them understand the collective dynamics of billions of interconnected neurons in the brain.

    Unlike traditional neuroscience, which often focuses on the details of the brain—neurotransmitters, receptors, and neurons, alone or in small groups—nonlinear dynamics aims to identify the large-scale patterns that emerge when neurons interact en masse. Studies of epilepsy dominate the work, in part because the widespread, convulsive firing of neurons in epileptic seizures offers such a clear case of collective dynamics. But some neuroscientists think these studies ultimately could shed light on the workings of the normal brain as well. “We're at an extremely interesting time in terms of looking at potential interfaces between the biological sciences, clinical medicine, and mathematics,” says Michael Mackey, a mathematical physiologist at McGill University in Montreal.

    Therapy in a dish?

    Electric fields control seizurelike activity in a slice of rat brain.

    SCHIFF

    In the meantime, there's a practical goal: finding a way to control seizures without major surgery or drugs. “All of this is very clinically motivated,” says John Milton, a neurologist, mathematician, and director of the Epilepsy Center at the University of Chicago, who is also George's physician. “If you could avoid both the side effects of drugs and cutting out pieces of brain, that would have a tremendous impact.” As he and others picture it, a computer chip would receive traces like those coming from George's brain, detect the approach of a seizure, and apply spurts of current or electric fields to specific regions of the brain, using electrodes inside the skull to nudge the dynamical system away from catastrophic firing. Indeed, one protocol for testing these ideas in human epileptic patients, led by Steven Schiff of the Children's National Medical Center in Washington, D.C., has already been approved.

    Before the approach comes into its own, though, two camps of researchers will need to agree on the fundamental dynamical structure of the brain. One analysis of brain recordings, by physicists Radu Manuca, Robert Savit, and collaborators at the University of Michigan, Ann Arbor, suggests that for all its complexity, the epileptic brain might behave as a “bistable” entity—like a ball that can be knocked into one of two cups. Controlling this system could involve preventing it from jumping wildly between the two states. A second effort, by Schiff and a number of collaborators, portrays the brain's electrical state as wandering over a subtler dynamical “landscape” of peaks and valleys. Controlling this system might involve nudging the state up some of the peaks and balancing it there, like a beach ball on a walrus's nose.

    Mathematical playground

    The techniques now being applied to the brain were first developed to identify statistical or “global” properties in systems of interacting particles, whose details were far too complicated to understand. For mathematicians schooled in these systems, epilepsy, with its millions or even billions of misfiring neurons, offers “just the ideal playground,” says Peter Jung, a mathematical physicist at Ohio University in Athens.

    The clinical motivation is also plain. Out of about 3 million epileptics in the United States, half a million suffer from forms of the disease that are not well controlled by drugs. Many of those, including young George, have what is called focal epilepsy, in which seizures originate from a “focus” of damaged tissue in the temporal lobe. If electrical measurements like his show the focus is not too close to regions of the brain governing speech or locomotion, these patients are candidates for surgery to remove it—a procedure requiring a year of preparation and costing up to $200,000. Electrical control, if it worked, would be vastly cheaper, quicker, and more widely accessible.

    But which model of the dynamical brain should it be based on? Laboratory evidence for the simplest possibility—bistability—dates back to the early 1980s, when John Rinzel, now at the Center for Neural Science of New York University, and the late Rita Guttman showed that the firing of a single neuron can jump between two different states. These researchers stimulated a squid giant axon—a long process extending from a neuron—with an electrical current, and noticed that the axon fired repetitively at high values of the current and shut off at low values. At some values in between, though, slight perturbations in the current could make the axon jump between the firing and quiet states—in other words, push it between two bistable states.

    The drastic jump meant the system's response was not “linear,” or simply proportional to some parameter like the strength of the push. Recent modeling suggests that when such neurons are linked in a network, says Rinzel, they can behave like dogs in the backyards of a neighborhood: If one dog barks, all the rest may start up, but once they fall silent, most of them may sleep through the afternoon. “The network itself would be bistable,” says Rinzel.
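
    For a feel of what such bistability means dynamically, consider a minimal numerical sketch: a single variable rolling in a double-well potential, with random kicks standing in for the perturbations Rinzel describes. This is a hypothetical toy model, not the published neuron or network equations; it simply shows that weak perturbations leave the system in one "cup," while stronger ones make it hop between the quiet and firing states.

    ```python
    import numpy as np

    # Toy bistable system (hypothetical illustration, not the published models):
    # a particle in the double-well potential V(x) = x**4/4 - x**2/2 has two
    # stable states, x = -1 ("quiet") and x = +1 ("firing").  Small random kicks
    # leave it in one well; larger ones knock it back and forth.

    def simulate(noise_amplitude, steps=20000, dt=0.01, seed=0):
        rng = np.random.default_rng(seed)
        x = -1.0                      # start in the "quiet" well
        trace = np.empty(steps)
        for i in range(steps):
            drift = x - x**3          # -dV/dx for the double well
            x += drift * dt + noise_amplitude * np.sqrt(dt) * rng.standard_normal()
            trace[i] = x
        return trace

    for amp in (0.1, 0.5, 1.0):
        switches = np.sum(np.abs(np.diff(np.sign(simulate(amp)))) > 0)
        print(f"noise {amp:.1f}: ~{switches} crossings between the two states")
    ```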

    The University of Chicago's Jack Cowan, Jennifer Foss, Milton, and several others have suggested that “waves” of excitation could spread through such a bistable network, just as the dogs on one block may already have grown tired while those two blocks away have only started barking. Such waves haven't been seen directly in human brains during seizures, but measurements made by Milton's group through grids of electrodes reveal correlations in the firing of distant neurons, which could result from these excitation waves.

    The most dramatic hints of bistability, however, emerge from studies that apply complex mathematical tools to search for subtle coherence in data recorded by electrodes implanted deep in epileptic brains. Ordinary linear measures of synchrony might show whether two sine waves, say, are locked in phase, like synchronized swimmers. Nonlinear measures can pick up much more general relationships of arbitrary wave trains—like noticing that all the scattered swimmers plying different strokes in the late stages of a medley are in the same race. Such work, by Manuca, Savit, and others, including University of Michigan epileptologist Ivo Drury, identified a particular kind of nonlinear synchrony in epileptic brains.

    What changed in concert across these brains was the probability that each site would switch between one of two different states, which could be spiky firing patterns, noisy fluctuations, or smooth wave forms, depending on the location of each probe in the brain. But the discovery that the switching probability changed in synchrony means “there could be bistability [across] many regions of the brain,” says Savit. As the group reports in a paper submitted to Mathematical Biosciences, both states always seem to be present in epileptic brains, although not in normal brains, and a clinical seizure only occurs at certain switching probabilities. A seizure might then be thought of as a wavelike disturbance that kicks regions of the brain from one state to another. The finding could ultimately help neurologists determine when to apply a pulse to change the switching rate and head off a full-blown seizure, says Milton.

    Landscape of the brain

    If Schiff, Paul So, and Bruce Gluckman of the Children's National Medical Center and George Washington University in Washington, D.C., and Timothy Sauer of George Mason University in Fairfax, Virginia, are on the right track, controlling seizures with electrodes could be a trickier proposition. As this group sees it, the brain's state can roll like a ball over an entire dynamical terrain, corresponding to various firing patterns, and occasional, irregularly timed jolts would be required to keep the system trapped in regions corresponding to nonseizing behavior, where the healthy brain ordinarily resides.

    This approach got its start several years ago in a striking experiment by Schiff, Bill Ditto of the Georgia Institute of Technology in Atlanta, and Mark Spano of the Naval Surface Warfare Center in Silver Spring, Maryland, on slices of rat brain that had been chemically induced to display seizurelike firing (Science, 26 August 1994, p. 1174). The team plotted the time interval between successive voltage bursts from the neurons (call it Xn) against the immediately preceding interval (Xn-1) and found what appeared to be “unstable periodic orbits,” or UPOs: At places where Xn was, say, some particular multiple of Xn-1, the system would linger, then roll away to other firing patterns like a marble falling off a saddle. The shortest orbits are those in which Xn = Xn-1, while others take longer to repeat a pattern of bursts.
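
    In data, the simplest signature of such an orbit is recurrence near the line Xn = Xn-1 on this return map. The sketch below is a simplified, hypothetical illustration of that idea using synthetic intervals, not the rat-slice recordings, and not the more rigorous transformation the group later developed: it counts how often consecutive intervals nearly repeat and compares the count against shuffled surrogate data.

    ```python
    import numpy as np

    # Simplified first-return-map illustration (synthetic data): plot each
    # interburst interval X[n] against the previous one X[n-1] and count visits
    # near the fixed-point line X[n] = X[n-1], where the shortest unstable
    # periodic orbits would sit.

    rng = np.random.default_rng(1)

    # Synthetic interval sequence: mostly irregular, with short runs where the
    # interval nearly repeats, mimicking brief visits to a period-1 orbit.
    intervals = []
    while len(intervals) < 500:
        if rng.random() < 0.1:                      # occasional near-periodic run
            base = rng.uniform(0.8, 1.2)
            intervals.extend(base + 0.01 * rng.standard_normal(5))
        else:
            intervals.append(rng.lognormal(mean=0.0, sigma=0.4))
    intervals = np.array(intervals[:500])

    def near_fixed_count(seq, tolerance=0.05):
        """Consecutive intervals that nearly repeat (|X[n] - X[n-1]| small)."""
        x_prev, x_next = seq[:-1], seq[1:]
        return int(np.sum(np.abs(x_next - x_prev) < tolerance * x_prev))

    print("data:     ", near_fixed_count(intervals), "pairs near the X[n] = X[n-1] line")
    print("surrogate:", near_fixed_count(rng.permutation(intervals)), "pairs (shuffled)")
    ```

    An excess of near-repeating pairs over the shuffled surrogate is the kind of hint, though far from proof, that periodic orbits lurk in the dynamics.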

    By delivering precisely timed electrical jolts through implanted electrodes, the team was able to control the system, balancing it on one of the saddles or knocking it off prematurely. Still, says Schiff, “I came away from that experiment terribly ill at ease with the simple, seat-of-the-pants method we used to identify the mathematical saddles.” Some outside critics also took the group to task for what was essentially an eyeball approach, and James J. Collins of Boston University later published a paper in Physical Review Letters showing that random noise and linear dynamics, not UPOs, could be enough to explain the results.

    Such criticism, along with work by Frank Moss of the University of Missouri, St. Louis, on detecting UPOs in noisy biological data, motivated the team to develop a more rigorous method. The team based its method on mathematical ideas put forth by Predrag Cvitanovic, of Northwestern University and the Niels Bohr Institute in Copenhagen, who has shown that the locations of the shortest UPOs, and the system's trajectory as it “rolls” near them, give a disproportionate amount of information about the underlying dynamics and perhaps the anatomical connections of the neurons themselves. It's as if you wanted to reconstruct a three-bumper pinball machine just by listening to a typical game, says Cvitanovic. The rapid, briefly periodic bounces within the triangle of bumpers—a short UPO—would yield better information about how they are arranged than would longer bounces from bumpers to wall. So and colleagues accordingly devised a mathematical transformation to zoom in on the behavior of the system around the shortest UPOs. “They have come up with very strong methods for identifying the UPOs,” says Collins.

    Although Schiff declines to comment on human data because a new paper on the topic will soon be under review, a team member confirms that the technique has now enabled the group to identify UPOs in human epileptic tissue. Schiff thinks that the UPOs might even be recognizable in normal brains.

    Still, investigators are far from pinning down the precise nature of the brain dynamics that lead to seizures. An experiment led by Gluckman, for example, has shown that seizurelike firing in rat brains can be shut down entirely just by applying a dc electric field of sufficient strength—a simple switching action that could support bistability. And last July, the U.S. Food and Drug Administration approved an electrical device marketed by Cyberonics Inc. of Webster, Texas, that reduces the frequency of seizures in some intractable epileptics. In standard operation, that device simply applies a pulsed stimulus to the vagus nerve in the neck every 5 minutes, says J. Walter Woodbury of the University of Utah, who performed early testing of the device with his late brother Dixon. But no one knows just why the device works.

    In spite of the uncertainties, Schiff and his colleagues plan to test the new approach in several patients a year for the next 5 years. Taking advantage of the electrode grids routinely used to monitor the brains of epileptics who are candidates for surgery, the researchers plan to apply slight electrical jolts to see if they can avert seizures. The protocol, says Schiff, is designed to test the whole range of approaches to analyzing the brain's nonlinear dynamics—from bistability to the chaotic, wandering UPOs—to see which of them is best at anticipating and stopping seizures. He may find that the two approaches don't exclude each other, says Mackey of McGill: “I suspect what the two [groups] are talking about may just be different sides of the die, so to speak.” Depending on how brain data are analyzed, he says, seizing and nonseizing regions of the UPO landscape could look like separate bistable states.

    That conclusion would be fine with Milton, who applauds Schiff's work and hopes to do similar human studies. “I'm much more interested in seeing the patient get better than worrying about some esoteric scientific principle,” he explains. “The final analysis is you want people to get better.”

  2. BIOPHYSICS

    Sharpening the Senses With Neural 'Noise'

    1. James Glanz

    The science of nonlinear dynamics (see main text) would seem to be far from the practical concerns of Casey Kerrigan, a physical rehabilitation specialist at Harvard Medical School in Boston. Kerrigan helps stroke patients, along with diabetics and elderly people with “peripheral neuropathy”—a deadening of sensation in the extremities—cope with their condition and relearn simple tasks.

    But that effort has led her straight into a collaboration for exploring the nonlinear effect called stochastic resonance (SR). This counterintuitive effect relies on “noise”—any random, or stochastic, background fluctuation—to make a system sensitive to an otherwise undetectable signal. In the last year, a series of experiments has shown that sensory stimulation consisting of mechanical or electrical noise can sharpen everything from the sense of touch to proprioception—the ability to perceive where a limb is in space. Not only do these results offer insights into the normal workings of the nervous system, but they also open new strategies for rehabilitating patients like Kerrigan's, she says. “The sensory loop is so essential [in rehab],” says Kerrigan. “They use the feeling to relearn a motor task. This could really help.”

    The theory of SR, developed 15 years ago by mathematicians and physicists, describes how an optimum level of noise can boost a signal over a threshold of detection. To see this effect, says Boston University bioengineer James J. Collins, “you need a dynamical system with a threshold; you need a weak signal; and you need noise. And that's it.” Consider a coin resting in one of two indentations on the dashboard of a car winding along a mountain road. Forces on the coin might not be enough, by themselves, to push it from one receptacle to the other. But if the road is bumpy enough, the lateral component of this “noise” could sometimes allow the regular forces to nudge the coin across. If the road is too rough, though, the coin can move whether the car is turning or not, and the “signal” of the curves gets drowned out.
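
    Collins's three ingredients (a threshold, a weak signal, and noise) can be caricatured in a few lines of simulation. The hypothetical toy detector below uses illustrative parameters only, not any of the published experiments: a subthreshold sine wave is "felt" only when noise lifts it over the threshold, and as the noise grows, the threshold crossings first track the signal and then drown it out.

    ```python
    import numpy as np

    # Toy stochastic-resonance demo (hypothetical, illustrative parameters only):
    # a weak periodic signal that never reaches a fixed threshold on its own is
    # "detected" when added noise occasionally lifts it over the top.  Too little
    # noise and nothing crosses; too much and crossings no longer track the signal.

    def detection_score(noise_sd, threshold=1.0, amplitude=0.5, seed=0):
        rng = np.random.default_rng(seed)
        t = np.linspace(0.0, 50.0, 5000)
        signal = amplitude * np.sin(2 * np.pi * t)            # subthreshold alone
        crossings = (signal + noise_sd * rng.standard_normal(t.size)) > threshold
        if crossings.sum() == 0:
            return 0.0
        # Score: correlation between threshold crossings and the signal itself,
        # i.e. how well the "felt" events line up with the signal's peaks.
        return float(np.corrcoef(crossings.astype(float), signal)[0, 1])

    for sd in (0.1, 0.3, 0.6, 1.2, 2.5):
        print(f"noise sd {sd:3.1f}: crossing/signal correlation {detection_score(sd):.3f}")
    ```

    The score rises and then falls as the noise is cranked up, the same hump-shaped curve the experiments trace.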

    Similar effects have turned up in physical systems like noisy lasers, superconductors, and electronic circuits. Now add sensory neurons, whose clouds of dendrites “integrate,” or add up, stimuli until a threshold is reached and the neurons fire. Several years ago, Frank Moss of the University of Missouri, St. Louis, and co-workers monitored how sensory cells on a crayfish's tail fan respond to a weak pressure signal, such as that generated by the approach of a distant predator, in the presence of noise produced by, for example, random water currents. Moss found the classic SR rise and fall as he cranked up the noise, suggesting that the turbulent surroundings of these animals might sharpen their perceptions. More recent experiments have revealed the same sensitivity hump in cricket neurons responding to wind currents and in slices of rat brain subjected to electric fields.

    Now Collins and co-workers have demonstrated SR in humans. Subjects rest a finger on a computer-controlled indentation that pulses up and down. Without noise, its action is imperceptible. Then the experimenters add either mechanical or electrical noise to the indentation, and the subjects are asked when they can feel the regular movements. The percentage of correct responses rises, then falls, as the noise gets stronger.

    A separate set of experiments, led by Paul Cordo of the Robert S. Dow Neurological Sciences Institute in Portland, Oregon, shows that mechanically jiggling the muscle with a sort of vibrator can enhance a normal subject's ability to sense whether his or her wrist has been flexed by a small amount—the essence of proprioception. “It just makes your jaw drop,” says Cordo of the dramatic influence of the tiny, 10-micrometer jiggle.

    These experiments have taken “two big leaps,” says Jacob Levin of the Massachusetts Institute of Technology, who did the cricket experiments: “One, this work was done in humans; two, it went all the way to the level of perception.” Now, researchers are taking it another step: to the level of the neurons themselves. Separate sets of recent experiments led by Cordo and Faye Chiou Tan of the Baylor College of Medicine in Houston suggest that the nervous system may pump up its own sensitivity in this way.

    Both groups found that as a muscle is exercised, its sensory neurons can become more sensitive, although they think different mechanisms are at work. Chiou Tan's group detected increasing electrical noise levels in the exercising muscle; the Cordo group traced a humped curve of sensitivity that may have resulted from noise generated in the brain itself during precision movements. Together, the findings suggest that the brain and its peripheral neurons are generating noise and making use of SR. “Everyone is very intrigued and perplexed at this point,” says Chiou Tan.

    Rehabilitation specialists are already hoping to exploit these effects by developing gloves and socks outfitted with perhaps thousands of piezoelectric noise inducers. If they worked, such devices could help patients maintain an accurate posture and keep them aware of their numb limbs, reducing injuries and infections. Kerrigan, Collins, and Harvard's Lewis Lipsitz plan to take a first step toward testing this promise in a couple of months, when they will repeat Collins's earlier SR experiments in patients. Says Chiou Tan, who also does rehabilitation: “[SR] doesn't have clinical applications yet, but it's getting close.”

  3. ARCHAEOLOGY

    Oldest Mound Complex Found at Louisiana Site

    1. Heather Pringle
    1. Heather Pringle, a science writer in Vancouver, Canada, is the author of In Search of Ancient North America.

    Millennia before the arrival of Europeans, early Native Americans went on a construction binge, dotting the eastern side of the continent with thousands of vast earthen mounds. With shapes ranging from massive cones and quadrangular platforms to gigantic serpents, the mounds were clearly legacies from many different cultures, serving purposes that ranged from ceremonial centers to charnel houses. A new finding in Louisiana has now extended the tradition of mound building back in time by nearly 2000 years—and opened a new perspective on the cultures of ancient North America.

    Ancient clues.

    These artifacts pointed to an early origin for Watson Brake earthworks.

    J. SAUNDERS/NORTHEAST LOUISIANA UNIVERSITY

    Most archaeologists believed that the first large earthworks were built 3500 years ago at Poverty Point, Louisiana, by a people who had prospered from trading. But on page 1796, a multidisciplinary team headed by archaeologist Joe Saunders of Northeast Louisiana University in Monroe, with colleagues from soil science, geomorphology, biology, paleontology, and physics, reports dating construction of an elaborate earthen enclosure in northeastern Louisiana to a 400-year period beginning 5400 years ago. That makes Watson Brake, as it is now called, the oldest known extant mound complex in the Americas.

    Digging in.

    Archaeologists auger into Mound A to obtain core samples.

    J. SAUNDERS/NORTHEAST LOUISIANA UNIVERSITY

    The existence of this extensive public architecture, consisting of 11 mounds and connecting ridges that enclose nearly 9 hectares, is hard to reconcile with archaeologists' traditional picture of the small, mobile bands of hunter-gatherers that inhabited the southeastern United States 5400 years ago. Greatly influenced by modern studies of San hunter-gatherers in Africa, researchers often assumed these peoples lived in simple egalitarian societies little prone to social change. To construct an earthen enclosure 280 meters in diameter according to a preconceived plan, however, the builders had to have sophisticated leadership skills. They must also have had a wealth of food to sustain the hard labor of raising mounds as tall as a two-story house.

    “It's rare that archaeologists ever find something that so totally changes our picture of what happened in the past, as is true for this case,” says archaeologist Vincas Steponaitis of the University of North Carolina, Chapel Hill, who is president of the Society for American Archaeology. Tristram Kidder, a professor of anthropology at Tulane University in New Orleans, agrees: “I think it's a wonderful contribution.”

    Watson Brake first came to scientific attention in the 1970s, when local resident Reca Jones discerned its outline after a timbering operation clear-cut some of the area. Initially, researchers chalked it up to the Poverty Point people, who flourished in the region from 3700 to 2700 years ago and also constructed conical mounds and long ridges. Saunders, however, was skeptical. In walking the mound area, he did not see any of the telltale stone tools associated with the Poverty Point people or any of the clay cooking balls that they had heated in fires and then placed in water for boiling food. Moreover, he saw no telltale refuse from later cultures.

    With colleague Thurman Allen, a soil scientist from the Natural Resources Conservation Service in Monroe, Saunders cored the tallest mound at Watson Brake by auger in 1993. The pair discovered not only an ancient garbage dump below the mound but an important indicator of the earthwork's great antiquity: 1 meter into the mound, they encountered a reddish, clay-enriched layer of soil, a so-called soil “horizon” that could only have formed over several thousand years as iron and clay leached out above and concentrated below.

    Fascinated, Saunders mounted excavations of the site. The work soon revealed that people had camped at Watson Brake both before and during construction, leaving behind refuse and hearths. Moreover, radiocarbon dates on charcoal and on humates (organic acids from decayed vegetation in the soil) from horizons just underneath the base of the mounds showed that construction began at a startlingly early date: 5400 to 5300 years ago.

    The early date fitted well, however, with the artifacts the team found beneath the mounds and scattered across some intermediate layers. None of the distinctive Epps or Motley projectile points of the Poverty Point people or their clay cooking balls turned up, nor did any of the imported stone known as novaculite that these avid traders regularly obtained from Arkansas. Instead, the team found Evans points, fired earthen blocks, and local stone favored by Middle Archaic bands more than 5000 years ago.

    Still, Saunders wanted further evidence of the site's antiquity. He applied a dating method called optically stimulated luminescence to sediment samples gathered from midmound layers where people had once camped. This technique measures the buildup of electrons that occurs in sediments after they are buried and no longer exposed to sunlight. The result indicated that an intermediate stage of mound building had occurred sometime more than 4000 years ago. He also dated samples from horizons below the mound with an experimental technique that estimates age from the amount of oxidizable carbon remaining in ancient organic sediments. The technique put the beginning of construction at about 5180 years ago, in good agreement with the radiocarbon date.

    The combination of all these lines of evidence, researchers say, makes the date entirely convincing. “There's just no question about it,” says Jon Gibson, an archaeologist at the University of Southwestern Louisiana in Lafayette and an authority on the Poverty Point culture. “Saunders has come at it from too many different angles.”

    That leaves the puzzle of what spurred the Watson Brake people to build these ancient mounds. Archaeologists once thought mound building was linked to agriculture, which created food surpluses and tended to lead to more permanent settlements and more complex societies. But because there was little evidence of agriculture at places such as Poverty Point, many researchers thought that these mounds arose as a result of extensive trading networks, which fostered societies complex and prosperous enough to build them.

    Trade did not seem to be a factor at Watson Brake, however, as the artifacts found were all made of local materials. Neither did agriculture. When Saunders and his colleagues gleaned charred seeds from the sediments and sent them to Kristin Gremillion, a specialist on ancient agriculture at Ohio State University in Columbus, she found no signs of domesticated plants. However, she did identify the wild ancestors of three domesticated plants—Chenopodium berlandieri, Iva annua, and Polygonum spp.—first cultivated in the American Southeast some 4000 years ago. Clearly, Archaic mound builders were already gathering and consuming the wild plants' starchy seeds.

    In fact, Saunders says, the rich environment at Watson Brake may deserve much of the credit for making the construction project possible. The site perches on a terrace that, 5400 years ago, overlooked what was then the Arkansas River and an extensive wetland. Animal remains at the site suggest that the mound builders took full advantage of the varied habitats. In addition to hunting deer, turkey, raccoon, and other upland species, Watson Brake's inhabitants collected freshwater mussels and snails and fished both the main channel and the backwaters.

    Indeed, the mound builders were adept fishers. Northeast Louisiana University paleontologist Gary Stringer and faunal expert Edwin Jackson of the University of Southern Mississippi in Hattiesburg identified at least nine fish species. The most abundant were freshwater drum, which weigh up to 27 kilograms. Catching these fish, especially during spring and summer when they are spawning and easily captured in nets, is a highly efficient way of obtaining food, notes Stringer.

    But even though the team's work provides ample evidence of how the Watson Brake people mustered the food surpluses needed for the construction project, another big mystery remains. Saunders has so far unearthed few clues about what purpose the giant enclosure might have served. Soil sampling inside the earthwork retrieved few artifacts, even though team members screened sediments through fine geological sieves. This suggests that the builders did not conduct ceremonies or other activities within the enclosure.

    Stranger still, the excavations to date show little evidence that people occupied the area once the complex was completed. It's as if the builders had nothing to keep them there once the job was done. “I know it sounds awfully Zen-like,” Saunders concludes, “but maybe the answer is that building them was the purpose.”

  4. CHRONOBIOLOGY

    Gene for Mammals' Body Clocks Found

    1. Michael Balter

    As anyone who has suffered from jet lag knows, the body's 24-hour biological clock delivers a powerful timekeeping signal. In recent years, clock researchers have made significant progress in understanding the biochemical gears and springs that keep this clock running, largely by identifying a handful of genes that appear key to the process in a few animal and plant species. Now, a team at the Baylor College of Medicine in Houston, Texas, has come up with the first evidence that some of these genes may have been conserved over the course of evolution, hinting that a common mechanism keeps the clock ticking in organisms as different as flies and mammals.

    In today's issue of Cell, molecular geneticist Cheng Chi Lee, developmental biologist Gregor Eichele, and their co-workers report isolating a gene in mice and humans similar to the period (per) gene of the fruit fly Drosophila melanogaster. The per gene, which is turned on and off in a daily cycle, appears to work with other genes to create an oscillating mechanism that runs the fly's internal clock (Science, 3 November 1995, p. 732). “This is an extremely interesting piece of work,” says clock researcher Joseph Takahashi at Northwestern University in Evanston, Illinois. “This is really the first molecular link between the Drosophila clock gene story and the emerging mouse gene story.”

    The Baylor group found the new gene during a hunt for DNA sequences that code for regulatory proteins on human chromosome 17. Of the five such sequences they identified, one codes for a protein that shares 44% of PER's amino acid sequence and shows greater similarity in a region of the protein, called the PAS domain, that is a common feature of most clock genes identified so far. They dubbed this gene RIGUI; in a parallel study in mice, they found the corresponding gene on chromosome 11 and dubbed it m-rigui.

    This finding is likely to be bolstered by another paper, from Hajime Tei of the University of Tokyo and co-workers in Japan and California—expected to be published shortly in Nature—which will also report the identification of putative human and mouse homologs of Drosophila's per gene. Takahashi says that this group's results are so similar that he believes “it's the same gene.”

    In a clue to the function of RIGUI, Lee and his colleagues found that expression of the gene rises and falls according to a circadian pattern, like that of per. For example, Lee and his colleagues measured the production of m-rigui messenger RNA (mRNA)—a necessary intermediate in protein synthesis—in a part of the mouse brain called the suprachiasmatic nucleus, thought to be the master clock regulator in mammals. They found dramatic swings in mRNA levels over a 24-hour period, even when the animals were kept in the dark. Moreover, the timing of m-rigui expression could be altered by shifting the timing of the animals' light and dark cycle. Both of these effects are key tests of an internally controlled circadian pattern.

    These findings imply, Lee and his colleagues say, that RIGUI may play the same role that per does in the fruit fly. They caution, however, that the sequences are not close enough for them to be sure. Steven Reppert, a neurobiologist and clock researcher at Massachusetts General Hospital in Boston, echoes this caution, saying that the partial homology with per is not conclusive evidence that RIGUI has the same function in mammals that per does in insects. He adds that the oscillating expression of RIGUI in brain tissues does not prove that it is central to the clock's regulation. The only way to prove this, he says, would be to “knock out the [mouse] gene and see what happens to the circadian rhythms.” If they are disrupted, Reppert concludes, the Baylor group's results would represent “a profound finding.”

    The Baylor group is now embarking on just such experiments. And despite their reservations, Reppert and other researchers agree that the new results are likely to open new doors in clock research. “We will now be able to test molecular models of the clock in mammals,” says Takahashi. “Once we get a couple of these genes, the next ones will start falling into place.”

  5. X-RAY CRYSTALLOGRAPHY

    Researchers Get Their First Good Look at the Nucleosome

    1. Carol Featherstone
    1. Carol Featherstone is a writer in Cambridge, U.K.

    The cell's nucleus is a miracle of packaging. Stretch out human DNA, and it would be 2 meters long. Yet the cell manages to cram it into a space just a few micrometers in diameter. Now, Timothy Richmond and his team at the Swiss Federal Institute of Technology (ETH) in Zurich have provided the first detailed look at a key piece of the molecular machinery responsible for this feat: a fundamental DNA packaging unit called the nucleosome core particle.

    New portrait.

    The exact contacts between the nucleosome histones (colored as indicated) and the nucleic acid can now be seen.

    T. RICHMOND ET AL.

    An average cell nucleus contains 25 million of these particles. Each consists of a discus-shaped core of eight small proteins, called the histone octamer, encircled by 146 base pairs of DNA that spiral 1.65 turns around the edge of the discus. The nucleosome particles, first described by Roger Kornberg of Stanford University in 1974, are connected, much like beads on a string, by stretches of linker DNA. In this week's Nature (18 September), the Richmond team reports that they have determined the x-ray crystallographic structure of the nucleosome core particle to a resolution of 2.8 angstroms—good enough to distinguish about 80% of the atoms in the proteins and all of those in the DNA.

    With a molecular weight of 206,000 daltons, about half of it protein and half DNA, the nucleosome is by far the largest DNA-protein complex to be imaged at atomic resolution. But more than that, researchers say, the structure will be very important for understanding such dynamic aspects of nuclear function as gene transcription, which is the first step of protein synthesis, and DNA replication and repair.

    In order for these activities to occur, the DNA and its associated proteins, collectively called chromatin, must be at least partially unwrapped, and recent evidence suggests that changes in nucleosome structure play a role in that unwrapping. Already, this new close-up of the core particle is giving researchers fresh insights into how that happens. For example, the tails of the histones project out of the nucleosome, making them good targets for enzymes involved in controlling transcription.

    “A few years ago, nucleosomes were thought of simply as rocks in the way of transcription,” remarks postdoc Karolin Luger of the ETH team. “Now, it's all active participation.” She adds that “it was good timing” to finish the structure just when the nucleosome's dynamic role was being appreciated. Transcription expert David Allis of the University of Rochester in New York, who got his first look at the structure during a presentation Luger made at last month's transcription meeting at Cold Spring Harbor Laboratory in New York, agrees: “I was dazzled by the pictures. It's fantastic to sit back and see the structure we've all been working on.”

    The ETH team's accomplishment is the culmination of a 20-year quest that began in the lab of Nobel Prize-winning crystallographer Aaron Klug at the Medical Research Council Laboratory for Molecular Biology in Cambridge, United Kingdom. In 1984, the Klug team, including Richmond, a postdoc at the time, produced a crystal structure of the particle at 7 angstroms resolution that allowed researchers to see the overall shape of the histone proteins and how the DNA bends as it twists around the protein mass. Individual atoms couldn't be seen at all, however. The resolution was poor because subtle variations in both the nucleic acid and protein components of natural nucleosomes prevent them from forming the well-ordered crystals needed. “Those [early] crystals [of natural nucleosomes] weren't all that good,” Klug recalls.

    Richmond, who moved from Cambridge to the ETH in 1985, decided that the only way to be sure of getting the desired crystal quality would be to systematically remove the heterogeneity from the nucleosomes. In effect, he set out to synthesize his own nucleosomes.

    He and his colleagues began by making a recombinant DNA with a defined sequence. They also produced all four types of histone—H2A, H2B, H3, and H4—that together form the octamer by expressing the genes in bacteria, which do not perform the chemical modifications that alter the histones in eukaryotic cells. Then the researchers performed site-directed mutagenesis of the histone genes to create sites in the proteins into which they could insert heavy-atom labels to make the eventual diffraction pattern interpretable.

    Each of the four proteins and the DNA then had to be purified and enticed to refold into their native conformations in the synthetic nucleosomes. And, finally, the particles had to be crystallized in a form from which good x-ray diffraction data could be collected. Even today all that would be no small feat, but in the early 1980s when the necessary technology was in its infancy, it was daringly ambitious. Indeed, Richmond's team didn't get their first truly high-resolution crystals until nearly 10 years after they started. “It's taken a long time, but understandably so,” says Klug. “It's a great achievement.”

    In a fortuitous coincidence, the new European Synchrotron Radiation Facility (ESRF) in Grenoble, France, which produces high-intensity and focused x-ray beams, came online just as the Swiss team got their first crystals. Christian Riekel, who was setting up the high-brilliance x-ray beam at the ESRF, provided the crystallographers with his share of beam time. At a time when no equivalent x-ray source was available in the world, “it really did make the whole thing possible,” says Richmond.

    The overall structure that resulted from this effort looks like the 1984 structure. And the trace of the central protein structure is almost identical to that Evangelos Moudrianakis and colleagues at Johns Hopkins University found in a 1991 x-ray crystallography analysis of the histone octamer alone. But the new structure shows in molecular detail exactly how the DNA makes contact with the histone proteins as it wraps around them.

    The DNA contacts the histone octamer at 14 main points, most of which have quite different structures. This means that the DNA does not follow a regular path around the protein but is more curved at some positions on the nucleosome surface than at others, which may have important functional consequences. For example, the curvature might distort the DNA so that some of the factors that regulate gene transcription are encouraged to bind, while others are not.

    The distribution of the contacts between DNA and protein also allows researchers to envisage how a large enzyme complex like the one that replicates DNA can travel along the DNA strand without completely displacing the nucleosome. “The DNA is like a piece of Velcro on the outside of the histone octamer,” explains Richmond. An enzyme could displace 30 or 40 base pairs of DNA from the protein at a time, but when it has passed, that DNA can stick back to the very same nucleosome, so the histone octamer may never be totally removed from the DNA.

    Previous biochemical studies had indicated that the histone tails extend beyond the DNA. The new structure provides a more direct view of the position of the tails, which may play an important role in making contact with adjacent nucleosomes as the chromatin folds back on itself to form the higher order structure needed to pack all the chromatin into the nucleus.

    That folding would make large stretches of the DNA inaccessible when the genes encoded in them need to be kept inactive. The structure suggests how active genes become accessible. The projecting histone tails contain some of the sites that are modified by histone acetyl transferases—the enzymes that have been hot news in the past couple of years because of their role in regulating transcription. By adding acetyl groups to the tails that poke out, these enzymes would almost certainly disrupt a higher order structure and open up the chromatin to infiltration by the transcriptional machinery.

    An atomic-level description of the nucleosome core particle is a tremendous technical achievement in itself but is only the first step toward understanding chromatin structure. To see the structure of two particles connected by the linker DNA would be nice, Richmond says. The structure of three particles, the central one wrapped by uncut DNA, might be even better. But the real goal is “to see what two or three turns of a higher order structure looks like.” With luck, this next quest will not take another 2 decades.

  6. ORGANIC CHEMISTRY

    Polymer Folds Just Like a Protein

    1. Elizabeth Pennisi

    The exact linear arrangement of amino acids in a protein is not the only thing that determines how it behaves: Also key is the protein's precise three-dimensional shape. For decades, chemists have sought to understand what forces make a string of amino acids bend and curl into a particular configuration, with the hope of one day making their own synthetic polymers that can duplicate the functions of natural proteins. But they have had a hard time getting anything other than a protein to fold in solution.

    21st century chemistry.

    This spontaneously folding polymer may be the wave of the future.

    NELSON ET AL.

    Now on page 1793, a team led by organic chemist Jeffrey Moore of the University of Illinois, Urbana, reports achieving this goal with a polymer they made from repeating units of a hydrocarbon molecule called phenylacetylene. They found that the polymer readily coils into a helix, one of the basic folding motifs of proteins, and forms a cavity that can be modified for different purposes.

    Other organic chemists are enthusiastic, because the polymer is a new addition to the small number of synthetic polymers, sometimes called “foldamers,” that they have coaxed into folding. The achievement shows “Mother Nature doesn't have a monopoly on folded structures,” says Brent Iverson, a chemist at the University of Texas, Austin. “This is a very important new direction for chemistry.”

    Researchers hope that the work will point the way to new types of tailor-made complex molecules that have the specificity and self-organizing capabilities of proteins. If they can be made to catalyze chemical reactions as the body's own enzymes do, these tailor-made molecules could be useful as industrial catalysts or as biomedically active substances that would not degrade as easily as proteins themselves.

    The new results may also shed light on a long-standing disagreement among protein chemists. Some chemists think that proteins in solution fold to protect those amino acids that are uncharged, or “hydrophobic,” from contact with water, a so-called polar solvent because each water molecule carries partial negative and positive charges. In contrast, others have argued that relatively weak links between a hydrogen atom and two adjacent atoms are responsible for a protein's kinks and curls. But the phenylacetylene polymer folded even though it has no such hydrogen bonds, showing that at least in this case “you can drive the ordering and folding just using a hydrophobic effect,” says Moore's collaborator Jeffery Saven, now at the University of Pennsylvania, Philadelphia.

    To test his ideas about protein folding, Illinois physical chemist Peter Wolynes teamed up with Moore 2 years ago to make a nonprotein polymer that could fold like a protein. For their polymer building block, they chose a molecule, phenylacetylene, that has no nitrogen or oxygen atoms in its backbone and thus lacks the key hydrogen-bond ingredients. Computer modeling by Saven and Wolynes indicated that if a chain had at least eight of these uncharged, ring-shaped phenyl groups, it would be able to twist into a helix and thereby avoid contact with a polar solvent.

    The team then synthesized a selection of polymers, ranging in length from two to 18 links, and dissolved them in different polar organic solvents. Using several different spectroscopic techniques, the team found that chains 10 links or longer did form helices as predicted. The Illinois team also showed that the new foldamer, like proteins, can be made to fold, unfold, and then refold.

    Changing the solvents also confirmed that hydrophobic-like forces were responsible for the folding. The more polar the solvent, the “stronger the induced organization,” Moore says, adding: “This is a step along the way of understanding the rest of protein behavior.”

    Not everyone is convinced, however, that this polymer reflects what is going on in proteins. “The network of forces [in the polymer] is very clearly different from [the forces] in a protein,” comments organic chemist Sam Gellman of the University of Wisconsin, Madison. He points out that proteins are a mosaic of uncharged and charged amino acids, and that the latter could also help drive folding.

    But even if foldamers aren't complete protein mimics, researchers hope they will be able to duplicate the sophisticated chemistry that goes on in cells. “This is a field in which almost anything you can imagine, you can try to do,” Gellman predicts. “For chemists, it will be the challenge of the 21st century.”

  7. HUMAN GENETICS

    Gene Found for the Fading Eyesight of Old Age

    1. Elizabeth Pennisi

    For many people, retirement means more time to read, watch television, sew, play cards, or drive to places they have always longed to visit. Yet by the time they reach age 65, all too many retirees find they no longer have those options because they have lost much of their vision to age-related macular degeneration, a disease that destroys the macula, the part of the retina that sees fine details. Indeed, macular degeneration is the most common uncorrectable cause of vision loss in the elderly. Currently in the United States alone, 1.5 million people have seriously impaired vision, while another 10.5 million show early signs of the disease.

    Now, on page 1805, a team led by molecular geneticist Michael Dean of the National Cancer Institute (NCI)-Frederick Cancer Research and Development Center in Frederick, Maryland, and Richard Lewis, an ophthalmologist at Baylor College of Medicine in Houston, reports a genetic cause for the disorder. Earlier this year, the same team had found that mutations in a gene called ABCR (ATP-binding cassette transporter-retina) cause Stargardt disease, a form of macular degeneration that develops early, usually leaving its victims blind by age 20. Now, the new work from Dean, Lewis, and their colleagues indicates that mutations in this gene, which codes for a protein thought to shuttle molecules across the membranes of certain retinal cells, could also account for 16% of the age-related cases.

    Carl Kupfer, director of the National Eye Institute in Bethesda, Maryland, says the finding is “very exciting. This is the first demonstration of a causative role for a specific gene in age-related macular degeneration seen in the general population.” By linking ABCR to age-related macular degeneration, Lewis and Dean's team has set the stage for researchers to identify the underlying mechanism of this gradual vision loss—and perhaps develop drugs that could halt it.

    Moreover, if mutations in the gene are as common a cause of macular degeneration as it now appears, the work may lead to diagnostic tests that will enable ophthalmologists to pinpoint people at risk. “We then have decades to intervene to prevent or modulate this disorder,” says Lewis. For example, people with ABCR mutations could be advised to avoid smoking and high-cholesterol foods, habits that increase the risk of macular degeneration.

    Lewis began hunting for a gene involved in age-related macular degeneration in 1985. Like others in the field, he studied rare inherited diseases that lead to the destruction of the macula, thinking that one of the genes responsible for these diseases might also play a role in the age-related disorder. He and collaborators Mark Leppert of the University of Utah, Salt Lake City, and Baylor geneticist James Lupski concentrated on finding the gene for Stargardt disease, and by early 1996 they had mapped it to a particular spot on chromosome 1. At that point, Leppert got an unexpected phone call from Dean.

    Dean said he had found an intriguing gene located in the same spot on chromosome 1. He had come across it as part of his efforts to learn more about a family of proteins known as ATP-binding cassette transporter proteins because they use ATP—the standard cellular energy source—to transport molecules into or out of cells. Mutations in these proteins had been linked to a variety of genetic diseases, including cystic fibrosis, adrenoleukodystrophy, and Zellweger syndrome, and Dean and NCI geneticist Rando Allikmets had been hunting for new ones by screening databases of human DNA for genes with related sequences. Of the 20 new candidates they and their colleagues had come across, they found ABCR the most interesting because it seemed to be expressed only in the eye. That hinted that it might play a role in eye diseases.

    After Dean telephoned Leppert about the finding, the NCI, Baylor, and Utah groups joined forces to search for mutations in ABCR in patients with Stargardt disease. When they began finding them, “we knew we were on to something,” Dean recalls. ABCR turned out to be the chromosome 1 gene at fault in that disorder, they reported in the March Nature Genetics.

    That work showed that both gene copies need to be mutated to produce the rapid macular degeneration of Stargardt disease. But the researchers wondered whether a single mutation might cause the slower degeneration that comes with age. To find out, Dean's team tested 167 people—96 from Utah and 71 identified in Boston by Harvard Medical School ophthalmologist Johanna Seddon—with age-related macular degeneration for ABCR mutations. Of those, 26 had an aberrant ABCR gene, the researchers report. Because this eye problem is considered to have multiple, albeit unknown, causes, the finding that 16% of the patients had problems in this one gene surprised even the researchers themselves. “That we found [mutated ABCR] in one person in six in the first 167 people we screened is mind-blowing,” says Lewis.

    What's more, ABCR mutations are especially frequent in patients with the most common form of age-related macular degeneration, the so-called dry type, which accounts for 80% of cases and apparently results from damage to the pigmented layer of cells in the retina. The damage causes bits of debris to accumulate, leading to gradual vision loss. Twenty-five of the 134 patients with dry-type macular degeneration had mutations in the gene. In contrast, the researchers found only one mutation in the 33 people with the wet form, which results when excess blood vessels grow into the eye and leak blood that apparently damages the retina.

    Researchers do not yet know exactly what the protein made by ABCR does normally, or how it malfunctions to cause macular degeneration. But a major clue emerged soon after the Nature Genetics paper came out. Two groups independently showed that ABCR is actually the so-called rim protein, discovered 20 years ago in the rod cells, one of the two types of light-detecting cells in the retina. The rim protein gets its name because it is found along the outer edges of the membrane folds that make up the rod cells' light-sensitive ends. The discovery that it is actually a transporter protein suggests, says Dean, that it could be involved in the molecular recycling that goes on at the photoreceptive ends of rod cells. The cell ends are constantly being degraded, releasing pigments and other materials, and then reassembled. The retinal pigment epithelium, a cell layer that underlies the retina, takes up the materials released by the degradation, presumably for reuse.

    If the ABCR protein helps transport these materials across the cell membranes, mutations that impede it could cause degraded material to build up and interfere with retinal cell function. Ultimately, Dean speculates, not just the rod cells, but also the retinal pigment epithelium and the retina's other light detectors, the cone cells, could be damaged.

    Or it may be that an altered ABCR protein does its job just fine, but is itself not recycled properly and so accumulates inappropriately. “So you are left with junk that may be toxic,” suggests genetic epidemiologist Margaret Pericak-Vance of Duke University in Durham, North Carolina. “There are a number of possibilities.”

    To try and sort through them, several teams are knocking out the ABCR gene in mice to create animals that make no ABCR at all. By seeing what kind of defects result, they may be able to figure out what the protein does. Others are trying to find out what molecule the ABCR protein transports, if indeed that is what it does.

    At the same time, there's a push to find out just how common ABCR mutations are among people with macular degeneration. “We have to look at large numbers of well-characterized patients and controls and really determine the prevalence of this gene in well-defined cases,” says Kupfer. Moreover, Pericak-Vance points out that researchers also need to find out whether the age at which patients lose their vision depends on which mutations they carry. Answers to these questions will help determine whether screening for mutations in this gene is warranted.

    But even if this gene proves to have a smaller role in age-related macular degeneration than this first result implies, it is a break in what had been an intractable case, says Dean: “This is the first chink in the armor of a disease that's been resistant to figuring out what's going on.”

  8. PHYSICS

    Slicing an Electron's Charge Into Three

    1. David Ehrenstein

    As everyone learns in high school, electric charges come as multiples of an indivisible unit: the charge of an electron. But two groups of physicists have demonstrated an exception. As an Israeli team announced in last week's issue of Nature and a French team will report in the 29 September Physical Review Letters, charge in a thin layer of electrons subjected to a high magnetic field and chilled to nearly absolute zero can come in units of exactly a third of an electron's charge.

    Counterintuitive as it is, the result isn't a surprise to solid-state physicists. They have gotten over their shock during the 14 years since fractional charges were first predicted as part of a theory to explain a puzzling phenomenon called the fractional quantum Hall (FQH) effect. But actually observing a fractional charge—a manifestation of fractionally charged “quasi-particles” that take shape in the quantum-mechanical soup of electrons and magnetic field—is a thrill nonetheless. “It's exciting that the prediction has been confirmed,” says Charles Kane of the University of Pennsylvania. “You can sort of imagine those quasi-particles going blip, blip, blip, and I think that makes it seem more real.”

    The generic Hall effect has been part of physics since 1879, when Edwin H. Hall reported that a magnetic field applied perpendicular to a current-carrying wire creates a voltage across the wire's width. This Hall voltage develops, as physicists later realized, because the field causes electrons to pile up on one side of the wire. In the 1980s, physicists discovered a quantum variant of the effect: Under extreme conditions—when electrons were restricted to an ultrathin layer of a solid at very low temperatures and high magnetic fields—increasing the magnetic field caused the voltage to increase in discrete steps, rather than continuously.

    Even more surprising, the plateaus in the Hall voltage appeared when the ratio of current along the layer to the voltage across it—known as the Hall conductance—reached multiples of a specific value. Physicists soon managed to explain the plateaus at integer multiples, known as the integer quantum Hall effect. But the FQH effect seen at higher magnetic fields, in which the plateaus correspond to fractional multiples such as 1/3, 2/3, 2/5, and 3/7, “was puzzling for quite a while,” says physicist Rafi de-Picciotto of the Weizmann Institute of Science in Rehovot, Israel, a member of the Israeli team.
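
    For orientation, the "specific value" in question is the conductance quantum e²/h; the textbook statement of both effects (included here for context, not taken from either new paper) is that the Hall conductance sits on plateaus at

    ```latex
    \[
      \sigma_{xy} = \nu\,\frac{e^{2}}{h},
      \qquad \nu = 1, 2, 3, \ldots \ \text{(integer effect)},
      \qquad \nu = \tfrac{1}{3}, \tfrac{2}{3}, \tfrac{2}{5}, \tfrac{3}{7}, \ldots \ \text{(fractional effect)}.
    \]
    ```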

    In 1983 Robert Laughlin, now at Stanford University, proposed an explanation for the FQH effect, and although the theory was widely accepted, it included a strange concept: fractional charges. Laughlin proposed that in the FQH effect the electrons in the layer form an exotic quantum-mechanical state in which they move collectively. In this state they coexist with vortices, pointlike objects resembling tiny whirlpools around which the electrons circulate. Electrons are fermions, particles that normally can't occupy the same quantum state, but when each electron teams up with an odd number of vortices, they form aggregates that can coexist in a single quantum state.

    The number of vortices increases with magnetic field. At particular values of the field, there are just enough vortices for all of the electrons to form one of these stable arrangements, say, an arrangement in which each electron is “bound” to exactly three vortices. If another vortex is introduced, by increasing the magnetic field, for example, the electrons move away from it, to maintain the same ratio of electrons to vortices everywhere else. By doing so, they open a gap in the negative charge, and a positive charge corresponding to exactly a third of an electron's is left behind.

    These fractionally charged quasi-particles can carry electric current in the FQH state. Meanwhile, because the background “sea” of electrons bound to vortices clings to stable configurations in the face of increasing magnetic field, the Hall conductance remains constant, as the FQH effect demonstrates.

    Laughlin's picture has withstood every test since he proposed it, but physicists still had a hard time accustoming themselves to it. “It is very difficult to imagine that electrons will somehow divide, because they are really elementary particles,” says de-Picciotto. “What [the physics community] wanted to see was a direct observation of the charge.”

    The two research teams—one led by Michael Reznikov of the Weizmann Institute and the other by D. Christian Glattli of the Commission of Atomic Energy in Saclay, France—set out to look for the fractional charges by measuring fluctuations in the current through an FQH system chilled to within a tenth of a degree of absolute zero. The method is like gauging the size of hailstones by listening to them hit a tin roof, Kane and Matthew Fisher of the University of California, Santa Barbara, explain in a commentary accompanying the Nature paper. By measuring a current so small that the size of the individual “hailstones” could be determined, the researchers found they corresponded to charges just one-third that of an electron.
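
    The quantitative version of the hailstone analogy is shot noise. For carriers of charge q arriving independently, the low-frequency current noise follows the textbook Schottky relation, quoted here for orientation; the published analyses also account for temperature and the degree of backscattering:

    ```latex
    % Schottky relation: uncorrelated carriers of charge q at average current I
    % produce current-noise power proportional to q, so a noise-versus-current
    % slope one-third of the electron value reads directly as charge e/3.
    \[
      S_I = 2\,q\,I
      \qquad\Longrightarrow\qquad
      q = \frac{S_I}{2I} \approx \frac{e}{3}
      \ \ \text{in the}\ \nu = \tfrac{1}{3}\ \text{state.}
    \]
    ```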

    De-Picciotto is delighted with the result, but he confesses to a little regret that it is so neat: “It would have been nice if we could identify something new which is not predicted by any theories, and then it would make people think even harder in order to try and explain it.”
