News this Week

Science  14 Feb 1997:
Vol. 275, Issue 5302, pp. 924
  1. Archaeology

    Death in Norse Greenland

    1. Heather Pringle

    By combining data from ice cores, archaeological digs, and fossil flies, researchers have shown how increasing cold and an inflexible culture could have doomed this medieval Norse outpost

    In the 10th century A.D., the inhospitable coast of Greenland was transformed into Europe's westernmost outpost. Settlers from Scandinavia founded two major settlements on the narrow fringe of land between the ice sheet and the sea, and raised dairy cattle and sheep in the face of glacial cold and fierce winter gales. The Norse, an estimated 5000 to 6500 strong at the colony's height, built stone churches and apparently traded walrus ivory and a live polar bear in order to get their own bishop. Then, in one of the great mysteries of medieval history, the colony foundered. By 1361, a seafaring Norwegian priest, Ivar Bárdarson, and his companions reported Greenland's Western Settlement eerily empty. “They found nobody, either Christians or heathens, only some wild cattle and sheep,” noted an anonymous author who recorded Bárdarson's journey in a mid-14th century text. By 1500, archaeological and historical evidence shows, the Norse settlers had utterly vanished.

    The remains of their days.

    Archaeological digs suggest a crisis on some Norse farms.


    Just what happened continues to be the subject of debate. Some researchers, influenced by rare medieval accounts of hostilities, pin the blame on warring Thule hunters—ancestors of the modern Inuit who migrated from Ellesmere Island in about A.D. 1100. No archaeological evidence has ever surfaced to support this idea, however, and others have favored changes in climate. The North Atlantic region was unusually mild when the Norse first settled Greenland, then plunged into a 500-year cold spell known as the Little Ice Age starting about 1300. Now, researchers are bringing many different types of data together to paint the clearest picture yet of the Norse colony's last days and the forces that led to its demise.

    The data sources for these international, interdisciplinary studies range from fossil flies to a model of the settlers' economy to the latest records of seasonal climate change, teased out of core samples of Greenland ice. The results show that worsening climate was at least partly to blame, as reported last year by Paul Buckland, a paleoecologist at the University of Sheffield in the United Kingdom, and colleagues. But researchers have now also found that the colony's economy was fragile from the start. Spurning fishing and Inuit hunting methods in favor of traditional Norse dairy farming, the colony had teetered dangerously close to extinction throughout its history. In the end, a 20-year series of cool summers was apparently enough to trigger abandonment of one of the two major settlements, says Tom McGovern, a zooarchaeologist at the City University of New York (CUNY). The real mystery, adds Buckland, is not why the Norse died out, but why they clung to such a poorly adapted economy for 5 centuries. “They lived on the edge from the beginning to the end,” he says. “It didn't take much to push them over.”

    Many of the new studies were done under the auspices of a 5-year-old research cooperative called the North Atlantic Biocultural Organization (NABO) headquartered at CUNY, which focuses on the North Atlantic region since the Iron Age. And the collaboration between these archaeologists and the physical scientists of the ice-core program—the Greenland Ice Sheet Project Two (GISP2)—draws kudos from the research community. The fate of the Norse “was one of those mysteries that no one has ever known completely,” says Noel Broadbent, program director for the Arctic Social Sciences program at the National Science Foundation, which has just awarded NABO $208,000 for workshops, publications, and fieldwork on subjects ranging from Norse settlement to hunting and gathering cultures in Labrador. “What [the NABO team has] done is bring all these scholars together from a number of different disciplines and put them on the issue [of Norse extinction]. And it creates a much greater dynamic. There's not enough of this kind of science done.” This new approach has yielded new insight, agrees William Fitzhugh, an Arctic archaeologist at the National Museum of Natural History in Washington. “There's no doubt there has been a climatic impact,” he says. “And it can be significant in a place like the North Atlantic where agricultural societies cherished the European way of life and [were] not adapting to the environment.”

    The Norse way of life

    Much of the new research draws on data gleaned from the most northerly of the Norse outposts, the Western Settlement near modern-day Nuuk, where four separate archaeological teams have conducted extensive digs since the mid-1970s. To understand the Norse economy, McGovern spent nearly 2 decades studying animal bones from these digs and from previously excavated Eastern Settlement sites. He and colleagues identified tens of thousands of bones by species and analyzed the contributions of livestock and wild game to the Norse diet. Combining these results with details from the Icelandic sagas and other data, such as the habits of wild game, McGovern sketched the medieval settlers' seasonal rounds.

    He found that they journeyed north in summer to hunt walrus for ivory to trade with Europe and spent winters tending sheep, goats, and cattle confined in byres. The Norse also hunted caribou as well as migratory harp and common seals in pupping areas. But they apparently did little fishing: McGovern's 2 decades of study turned up very little fish bone, and excavations in both settlements have turned up almost no fishing gear. Buckland's recent findings, in press in a Columbia University monograph, confirm this: After screening nearly 1 metric ton of house floor and midden sediments through 300-micrometer mesh, he and his associates gleaned only one skate tooth and three fish vertebrae. “So this is a nonmaritime culture on the edge of the sea,” says Buckland. “It's very strange.”

    To examine the effects of climate on this economy, McGovern and colleagues at CUNY devised a series of computer simulations, using archaeological data on such variables as the sizes of pastures and the number of cattle stalls—and therefore cattle—in each byre. Tax records from medieval Norse Iceland added further data on the standard numbers of livestock and farming laws. The CUNY team estimated the amount of fodder grown and consumed and the production of dairy foods and meats, and put it all into a still-evolving computer model that has received NABO funding. Then, they ran their model to see how the Norse economy would fare in three hypothetical climatic scenarios, from optimal to catastrophically cold.

    Their results, also in press in the monograph, reveal the difficulty Norse families had in amassing fodder and food surpluses to tide them over hard times. While the settlers could weather one bad year, a series of poor to moderate years would have pushed the entire community to the edge. Although the harshest weather was in winter, the model shows that the fate of the Norse hung on the summers, when the community stocked up on fodder for the winter. In particular, says McGovern, “a series of closely spaced poor summers” would have been devastating, “drawing down all the resources of not just an individual farm but the whole community.” Those shortages would have become most acute in late winter—“a period of maximum vulnerability,” says McGovern.
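    The buffer logic the CUNY model captures — surpluses built in good summers, drawn down over winters, exhausted by runs of poor years — can be illustrated with a toy simulation. This is not the CUNY model itself; every parameter below is invented for illustration, whereas the real model was calibrated with archaeological data on pasture sizes and cattle stalls.

```python
# Toy sketch of a subsistence-farming buffer model, loosely inspired by the
# CUNY simulations described in the text. All parameters are invented for
# illustration; the real model drew on pasture areas, byre stall counts,
# and medieval Icelandic tax records.

def simulate(summer_quality, yield_good=120, need=100, store=50, cap=150):
    """Track a farm's fodder store across a run of summers.

    summer_quality: list of multipliers (1.0 = an average summer).
    Returns the store after each winter; a negative value marks famine.
    """
    history = []
    for q in summer_quality:
        store += yield_good * q      # hay harvested this summer
        store = min(store, cap)      # limited storage and curing capacity
        store -= need                # fodder consumed over the winter
        history.append(store)
    return history

# One bad summer is survivable; several in a row exhaust the buffer.
print(simulate([1.0, 0.6, 1.0, 1.0]))   # dips, then recovers
print(simulate([0.7, 0.7, 0.7, 0.7]))   # steady decline into deficit
```

    The storage cap is what makes closely spaced poor summers so dangerous in this sketch: a single good year cannot bank enough surplus to cover two or three bad ones.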

    To arctic archaeologists such as Susan Kaplan, director of the Peary-MacMillan Arctic Museum at Bowdoin College in Brunswick, Maine, this research offers much new clarification. “What he's presenting here is for some people not as romantic or mysterious a view of the Norse demise as some would like, but it's one that is probably realistic,” she says.

    McGovern and others can now compare their model with actual data on the climate the Norse faced, extracted from the GISP2 ice core. The top part of the core yields a seasonal record of dust, trapped gas, and chemical changes in past atmospheres and is considered a particularly faithful record of Greenland itself. Researchers have already analyzed changes in the proportions of oxygen isotopes in the ice, thought to indicate past temperature, to trace climate over thousands of years. But as part of her dissertation research, University of Colorado paleoclimatologist Lisa Barlow took a closer look at Greenland climate, measuring another paleoclimate indicator—hydrogen isotopes—season by season in samples taken from a 200-meter section of the core. She charted summer and winter signals from medieval times to the mid-1980s.

    In the 14th century, Barlow discovered four major isotopic excursions suggesting clusters of chilly periods from 1308 to 1318, 1324 to 1329, 1343 to 1362, and 1380 to 1384. The longest cold spell, beginning in 1343, correlated almost exactly with the abandonment of the Western Settlement, as suggested by Bárdarson's account and confirmed by such evidence as radiocarbon dating of mammal bones strewn over the top layers of Western Settlement sites. And “if you pull out the wintertime and summertime signals [in that period], it looks like the excursions are happening more in the summertime,” says Barlow, whose work is in press at The Holocene.
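    Picking out excursions like these amounts to flagging multi-year runs of anomalously low values in an annual signal. A minimal sketch of that step, using invented numbers rather than Barlow's measurements:

```python
# Minimal sketch of flagging multi-year cold excursions in an annual isotope
# series. The data below are invented; Barlow's analysis used seasonal
# hydrogen-isotope signals from a 200-meter section of the GISP2 core.

def cold_runs(values, years, threshold, min_len=3):
    """Return (start_year, end_year) for every run of at least min_len
    consecutive values below threshold."""
    runs, start = [], None
    for yr, v in zip(years, values):
        if v < threshold:
            if start is None:
                start = yr
        else:
            if start is not None and yr - start >= min_len:
                runs.append((start, yr - 1))
            start = None
    if start is not None and years[-1] - start + 1 >= min_len:
        runs.append((start, years[-1]))
    return runs

years = list(range(1340, 1350))
signal = [-1, -2, -5, -6, -5, -4, -1, -2, -6, -5]  # invented summer anomalies
print(cold_runs(signal, years, threshold=-3))      # one four-year excursion
```

    The real analysis is more delicate — separating summer and winter signals and calibrating isotope ratios against temperature — but the clustering question reduces to run-finding of this kind.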

    Barlow's picture is persuasive, says Paul Mayewski, a paleoclimatologist at the University of New Hampshire and a leader of the GISP2 team. In this time period, he says, “because it's so close to the present, the calibration with temperature is probably quite good.” And he's pleased that the ice-core data are at last being applied to human history. “When I started GISP, one of the goals we had was to match climate records with archaeological records. … There is an immense story that can be pulled out of the paleoclimate data.”

    The final days

    With both the climate record and the model of the Norse economy suggesting that a series of cool summers led to starvation in the Western Settlement, McGovern set out to trace the details of its last months and days. By reanalyzing animal bones gathered from the uppermost floor layers in farmhouses, he pieced together a bitter tale of desperation and disaster. For example, although canine bones were rare in the older layers of the house floors, four different teams unearthed remains of large elkhound-like dogs, likely used in hunting caribou, in the top strata in four farms. Some bones revealed human cut marks, suggesting that the Norse butchered these valuable dogs for food.

    And before the inhabitants of a farm called Nipaatsoq ate their dogs, says McGovern, they consumed all their dairy cattle—in violation of traditional Norse law, which prohibited the slaughter of cows. Excavators at Nipaatsoq found the hooves of five cows—the total number of cattle sheltered in the farm's byre—scattered among other food remains across a lower layer of one room. Apparently, fodder for the cattle had run out, and people had even tried to eat the hooves. “The hooves are usually a waste piece,” says Tom Amorosi, a zooarchaeologist at CUNY. “If you're finding that in the food remains, it means they're really searching for calories. They've scaled down to a point where there's no food left.” Adds McGovern: “And of course, once you've eaten your cows, you're out of the dairy-farming business.”

    Other, tinier fauna echoed this tale of decline and fall. In as-yet-unpublished research, entomologist and NABO member Peter Skidmore of the University of Sheffield discovered an intriguing succession of fossil flies preserved in bedroom sediments at Nipaatsoq. In lower layers, the dominant fly was Telomerina flavipes, a species of housefly that required warm quarters and was accidentally introduced by the Norse to Greenland. In the penultimate layer, however, T. flavipes disappeared—perhaps as room fires died—only to be succeeded by two cold-tolerant indoor carrion species. The nature of the carrion—animal or human—is unknown. In the top layer, which likely accumulated during a 2-year period encompassing the abandonment of the farm, an outdoor fauna of flies predominated, as if the farmhouse roof had caved in. To McGovern, all the evidence to date adds up to a final crisis—“a late-winter disaster, in which they eat the cows and they eat the dogs and then the flies get them.” Whether a similar crisis took place at the other settlement is unknown, but McGovern and colleagues are eager to compare this record with data from an upcoming excavation at the Eastern Settlement.

    But not everyone is convinced that such a catastrophe unfolded. Jette Arneborg, a curator at the Danish National Museum in Copenhagen and a principal investigator at one of the digs in the Western Settlement, has found scant evidence of such harrowing events. “To me, it doesn't really seem that there was a disaster,” she says, noting that the buildings were abandoned in a very orderly fashion. The settlers seem to have taken all their precious items, including church bells, away with them, leaving only ordinary bulky items behind. “I think they simply decided to give up the area,” says Arneborg, perhaps leaving for the Eastern Settlement. Instead of starvation, she suggests that the settlement was abandoned in part because of declining trade with Europe, which left settlers isolated from their homeland.

    Whatever the reason the settlers left, it's clear that they wanted to keep close ties with Norway. Indeed, says McGovern, one of the most intriguing findings to date is the key role that culture played in the colony's extinction. Archaeological evidence shows that Greenland's other inhabitants, the Thule, flourished throughout the 14th century, thanks to their prowess in hunting ringed seals below the sea ice in late winter, when few other sources of meat and fat were available. But the Norse failed to adopt ringed-seal hunting methods and other technology from their highly successful neighbors, says McGovern.

    In fact, the cultural flow seemed to go in only one direction. Teams excavating Thule campsites have uncovered scavenged or stolen Norse artifacts, including chess pieces, iron nails, and fragments of cloth, but Arneborg and others working in the Norse farmsteads have found scarcely any Thule goods. “If you looked at the distribution of artifacts on the Norse side, you would say that there's been no contact,” says McGovern. It may be, he adds, that the fervently Christian Norse spurned contact with the shamanistic Thule.

    The styles of clothing in Greenland further underline the isolation of the two cultures. Naturally mummified Thule women exhumed at the Qilakitsoq site in 1978 were swaddled in warm sealskin parkas and trousers, while women buried in a Norse churchyard were dressed in low-cut, narrow-waisted woolen gowns like those then fashionable in Europe.

    To McGovern, all the evidence to date suggests that for the Norse, ethnic purity triumphed at the expense of biological survival. While the starving settlers slaughtered their cattle and dogs, “there were seals in the fjord, right under the ice.” But without harpoons and the skill to find the seals' breathing holes in the ice, the Norse couldn't reach them. It seems, says McGovern, that the Norse in Greenland remained true to the laws and customs of their warmer homeland—and paid the final price for it.

  2. Autoimmunity

    Thyroid Disease—A Case of Cell Suicide?

    1. Nigel Williams

    The name tells the story: Autoimmune diseases, such as diabetes and rheumatoid arthritis, result when the immune system goes awry and turns on the body's own tissues. But how it does so is far from clear. Researchers do not know which of the immune system's several types of killer T cells carries out the attack. And even more puzzling, killer cells are scarce at the site of tissue destruction in many autoimmune diseases. New results published in this issue (p. 960), however, could resolve those puzzles. In at least one disease, they suggest, the immune system itself may not carry out the final act: Instead, the target cells commit suicide through a process called apoptosis.

    “These are intriguing results and present an appealing mechanism,” says autoimmune disease researcher Ricardo Pujol-Borrell of the University Hospital “Germans Trias i Pujol,” in Barcelona, Spain. “What's interesting is that apoptosis is a natural process, and I've always believed autoimmune diseases result from an exaggeration of natural processes,” adds immunologist Noel Rose of Johns Hopkins University. So far, the results apply only to Hashimoto's thyroiditis (HT), a disease marked by a gradual destruction of thyroid tissue that is among the commonest autoimmune diseases. But they suggest that in this and possibly other autoimmune diseases, the immune system may have a less direct role than currently thought.

    Kiss of death.

    IL-1 prompts FasL-bearing thyroid cells to express Fas, and hence to die.

    The new work, done in four laboratories in Italy, builds on rapid progress in the past few years in understanding apoptosis, a normal process for eliminating unwanted cells in tissues and organs during development and for reining in immune responses. One key to triggering this process of cell suicide is a molecule called Fas, found on the surface of many different types of cells. When another molecule, the Fas ligand (FasL), binds to it, Fas initiates a series of events inside the cell that leads to its death.

    Although many cell types can express Fas on their surfaces, FasL at first seemed to occur mainly on the immune system's activated T lymphocytes. Expression of the ligand allows these T cells not only to kill unwanted cells by prompting them to undergo apoptosis, but also to moderate their own activity by triggering apoptosis in other T cells, which express both Fas and FasL. Researchers soon found that a small number of other cell types expressed FasL, such as cells in the eye chamber, parts of the nervous system, and the testis. These sensitive sites use FasL to protect themselves from immune attack by prompting apoptosis in attacking T cells.

    Now, the Italian researchers have identified a role for Fas in the converse phenomenon—autoimmune disease. The team was studying cells from patients with HT, a chronic disease, most common in middle-aged women, that leads to loss of thyroid hormone-producing cells. The team's first clue that apoptosis was involved came when they found Fas on the surface of cells taken from the thyroid glands of several patients, while it did not appear on thyroid cells from control glands. They then showed that interleukin-1 (IL-1), an immune messenger molecule found in the diseased thyroid glands, induced control thyroid cells to express Fas.

    But what came next really surprised the team. They found that both normal thyroid cells and cells from patients with HT expressed high levels of FasL. “That was totally unexpected,” says immunologist Roberto Testi of the University of Rome “Tor Vergata,” who is one of the team members. This result suggested that the abnormal Fas expression leads the cells to trigger apoptosis in each other or in themselves.

    To bolster this picture, the team took IL-1, which they had shown in lab studies induces Fas expression, and added it to normal thyroid cells in culture. They found that large numbers of cells died with the characteristic features of apoptosis. “This puts a different slant on the role of FasL and suggests a completely unexpected pathological role for the molecule,” says Doug Green at the La Jolla Institute for Allergy and Immunology in California.

    There are some problems with the apoptosis theory, however: The rapid cell death demonstrated in the laboratory does not square with the normally slow progression of the disease, which can last for years. The team believes that tight control of Fas expression within the body may explain this slow pace. “We need to know the sequence of events,” says Testi. Another key unknown is the source of the IL-1 that sets the process in motion. IL-1 is normally produced by activated cells of the immune system to stimulate other cells within the system. “People have not thought of IL-1 as a destructive cytokine, but they now may want to look again,” says Green. But if these loose ends in the theory can be tied up, researchers can begin looking for ways to block cell death to prevent thyroid destruction.

    The Italian team's results may also hold out hope for a better understanding of other autoimmune diseases—and why T lymphocytes are puzzlingly scarce in so many of them. Says Green: “The new work is a fascinating hint at an entirely new disease mechanism. I think we are going to see more of this.”

  3. Solar Physics

    Bold Prediction Downplays the Sun's Next Peak

    1. James Glanz

    Every 11 years, a pox of dark spots appears on the face of the sun, and it spits energetic particles and radiation into space. These outbursts, which mark peaks in the sun's magnetic cycle, can destroy space-borne electronics and stir up Earth's upper atmosphere, but researchers are at a loss to predict their intensity. Forecasters are reduced to the equivalent of consulting the Farmer's Almanac, says David Hathaway of NASA's Marshall Space Flight Center in Huntsville, Alabama, taking hints from things like the trend of rising solar activity in our century and the unexplained fact that odd-numbered peaks are almost always higher than the preceding even-numbered ones.

    Now, researchers may have found a way around the numerology, and they have come up with predictions for the next solar maximum, around 2000, that are sharply at odds with the forecast derived from the Farmer's Almanac approach. The “even-odd” numerological methods predict that this one—cycle 23, in the usual numbering scheme—will be the most intense in history. But Sabatino Sofia of Yale University and Kenneth Schatten of Goddard Space Flight Center in Greenbelt, Maryland, say that the sun's current magnetic field, which serves as the “seed” that will grow to the powerful, unstable field of solar maximum, foreshadows only a modest peak, the lowest in decades. Others are not prepared to go quite that far: One rival, for example, predicts a higher but not record-breaking maximum.

    Low-balling the max.

    The next solar maximum, the highest ever by some estimates (red), will be modest, according to other predictions (blue and purple).


    Which forecast fares best, say solar physicists, will be a good test of dynamo theory, which describes how the sun generates and transforms its magnetic field. And there's plenty at stake for satellite operators as well. The energetic particles emitted during the solar flares and magnetic storms of a powerful solar maximum can blast the delicate electronics of military, scientific, and commercial satellites. The ultraviolet radiation from an active sun also heats Earth's upper atmosphere, causing it to expand and exert extra drag on satellites, says Bill Wagner, head of solar physics at NASA. “If we get a real barn-burner of a maximum [in 2000], there are maybe even new [shuttle] missions required to bring fuel up” to reboost facilities like the orbiting Hubble Space Telescope and the planned space station, says Wagner. But Schatten and Sofia's prediction suggests that NASA can rest easy.

    Presented at an American Astronomical Society meeting in Toronto last month, it is drawing attention not just for its low-ball numbers. “Their [method] is more grounded in observations and physics than any of the others,” says JoAnn Joselyn of the National Oceanic and Atmospheric Administration in Boulder, Colorado. If it falls on its face, adds Spiro Antiochos of the Naval Research Laboratory, “we'd have to really rethink dynamo theory.”

    During quiet periods, the sun is threaded by a weak polar field—one running mostly parallel to lines of longitude. Because the sun spins faster at the equator than at the poles, those field lines get wound and stretched like colossal rubber bands as they are carried along with the hot gases in the solar interior. This stretching creates powerful fields running mostly parallel to lines of latitude. Bundles of these field lines grow strong enough to push out surrounding gases and become buoyant, bobbing up through the surface of the sun. Dark sunspots appear at the “footpoints” of bundles rearing above the solar surface.

    By writhing, reconnecting, and even wrenching free of the sun, these field lines create displays like flares and launch great eruptions of gas called coronal mass ejections. Eventually, says the theory, enough field lines reconnect to restore the smooth polar field of the quiet sun, although it now points in the opposite direction from the original field. But although the events driving every cycle are similar, peak activity varies. “You're talking about a very big range,” says Schatten—from almost no sunspots during the last half of the 17th century to an average of 190 per month when cycle 19 maxed out in the late 1950s.

    Sofia and Schatten searched for clues to the next peak in the seed field of the quiet sun. They reason that, like an amplifier with a fixed amount of gain, the field strength—and hence the activity—at maximum will depend on the strength of the polar field at minimum. “Those [polar] fields are a telltale sign for the next cycle,” says Schatten. They looked at past relations between the polar-field intensity in quiet times and the height of the peaks that followed to calibrate their method, then turned current observations of the field into a prediction for the next peak: an average sunspot number of just 130, with a possible error of 30 in either direction. The even-odd predictions, in contrast, run as high as 200.
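    In outline, the calibrate-then-predict step described above is an ordinary regression: fit the historical relation between polar-field strength at minimum and the following peak, then apply it to the current field. A sketch of that logic, with all field values and peak heights invented for illustration:

```python
# Sketch of the calibrate-then-predict logic described in the text: fit the
# relation between polar-field strength at solar minimum and the sunspot
# number at the following maximum, then apply it to the current field.
# Every number below is invented; the real calibration used measured
# polar fields and observed cycle peaks.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical (polar field at minimum, sunspot number at next maximum) pairs.
fields = [1.0, 1.4, 0.8, 1.6]
peaks  = [110, 155, 90, 175]

a, b = fit_line(fields, peaks)
current_field = 1.2                      # invented "seed field" reading
print(round(a * current_field + b))      # predicted peak sunspot number
```

    The weakness skeptics point to is visible in the sketch: with only a few calibration cycles, the fitted line rests on a very short track record.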

    Skeptics point out that measurements of the weak polar field, based on subtle frequency shifts imprinted on sunlight by the magnetic field, cover only a few solar cycles, giving researchers a short track record. And some parts of the dynamo theory itself still need to be firmed up. “To get the best out of the … technique, we need to have a better understanding of the dynamo model,” says Richard Thompson, who heads the Australian Space Forecast Centre and has his own strategy for predicting the solar maximum.

    Thompson thinks that, for now, forecasters would do better to rely on a “proxy” for the sun's field that can be measured at Earth's surface. The wind of particles that blows from the sun stretches the polar field far into space, where it disturbs Earth's own field. The surface field fluctuations that result have been monitored continuously since 1868, and their number during each full cycle has shown good correlations with the size of the next cycle's peak, Thompson says. His own work based on this approach yields a prediction of about 164.

    That's close to the “consensus value” reached by a NASA-sponsored panel that Joselyn headed last fall. The panel considered 28 prediction methods of all kinds, including high even-odd predictions and these two, says Joselyn. In most cases, “forecasters like to go with persistence,” she says, and the panel settled on a maximum of about the same size as the last one.

    The next 4 years will tell who is right. “To me, it's a great test of the two models,” says Schatten. The outcome could even induce solar prognosticators to put away their almanacs in favor of computers.

  4. Cell Biology

    Cells Count Proteins to Keep Their Telomeres in Line

    1. Marcia Barinaga

    Dividing cells have to take good care of their telomeres. These stretches of repetitive, apparently nonsensical DNA at the ends of the chromosomes make up for a quirk in the enzymes that replicate DNA: The enzymes can't reproduce the very ends of the DNA strands, so a cell's chromosomes get shorter each time it divides. Without the telomeres to act as buffers, essential genes could be lost. Indeed, in most cells of higher organisms, even this buffer is eventually exhausted, a development associated with aging and ultimately death.

    But cells that divide repeatedly, such as cancer cells, germ-line cells, and microorganisms like yeast, restore their telomeres by adding back DNA each time they divide. New results described on page 986 by a research team led by molecular biologist David Shore, of the University of Geneva, provide insight into a crucial part of this process.


    The yeast cell counts artificial Rap1 binding sites (black) as part of the telomere and shortens the true telomere accordingly.

    The puzzle Shore and his colleagues, Stéphane Marcand and Eric Gilson of the École Normale Supérieure in Lyon, France, have addressed is what tells the telomere-restoring enzyme, telomerase, how much DNA to add back. They report that in baker's yeast (Saccharomyces cerevisiae), cells measure telomeres by counting the copies of a protein called Rap1 bound to the telomere, and shut off telomere growth when a standard length is reached.

    The work is “an elegant series of studies that really sheds light on part of the telomeric sizing machinery,” says Art Lustig, who studies telomeres at Tulane University in New Orleans. Cells seem to have several ways to measure telomeres, adds Lustig, who studies one of the alternate systems. But Shore's team, he says, has shown that protein counting seems to be one key way. And that is likely to be true for more than yeast. The Shore team notes that Titia de Lange's group at Rockefeller University in New York City has evidence that a similar counting system may operate in human cells, although de Lange declined to discuss the as-yet-unpublished work.

    Telomerase itself was eliminated as a possible telomere measuring stick years ago. The enzyme has been hard to study directly, because researchers have not yet isolated the catalytic protein component, although they have candidates, one of which is described on page 973 by Lea Harrington's team at the University of Toronto. But despite the problems in purifying the enzyme, test-tube experiments with partially pure telomerase have shown that the enzyme alone “can't sense telomere length, and that implies that in vivo, the enzyme has to be told what to do,” says telomerase researcher Gregg Morin, of Geron Corp. in Menlo Park, California.

    In the early 1990s, it began to look like those instructions come from Rap1, which had been discovered in S. cerevisiae in the mid-1980s and shown to bind to telomeres. Shore's and Lustig's groups, as well as Virginia Zakian's team, then at the Fred Hutchinson Cancer Research Center in Seattle, found that mutations in Rap1 or alterations in its levels can change telomere lengths. The most dramatic result came in 1992, when Lustig's group reported that mutations that remove part of Rap1 cause runaway telomere growth. “That was the first clear demonstration that Rap1 is a negative regulator of telomerase elongation,” says Shore. In 1995, Elizabeth Blackburn's team at the University of California, San Francisco, extended the findings beyond S. cerevisiae, with the discovery that mutations in the telomere sequence that hinder Rap1 binding produce longer telomeres in a related yeast, Kluyveromyces lactis.

    While all this implied that Rap1 puts the brakes on telomere growth, how the protein accomplishes that was left wide open. Shore suspected the cell might be counting Rap1 molecules bound to the telomeres. To test that idea, his team decided to play with the number of Rap1 binding sites on an S. cerevisiae telomere—roughly 15 on the 300-base-pair stretch of DNA—and watch the effects on telomere length.

    They used an assay developed by molecular geneticist Dan Gottschling, now at Seattle's Hutchinson center, to follow telomere restoration. The assay replaces the end of a yeast chromosome, including the telomere, with a stretch of DNA that ends in a short “seed” sequence of telomere-like DNA. Left on its own, the yeast adds to that seed, producing a telomere of normal length.

    In one experiment, Shore's team inserted an 80-base-pair piece of telomere DNA between the chromosome and the seed, separated from the seed by a short nontelomere stretch of linker DNA. When the cell then elongated the seed, it grew to only 220 base pairs, not 300. That meant the cell had counted the inserted DNA as part of the telomere. What's more, Shore says, the enzyme “didn't particularly care which direction the [inserted] sequence was in.” That is just what you would expect if the cell were monitoring bound proteins rather than the telomere sequence itself, because a protein would bind to double-stranded DNA regardless of which way it is facing.

    To see whether Rap1 was key to the length measurement, the team created an engineered Rap1 protein that would bind to artificial sites made of nontelomeric DNA. They added some of these sites between the chromosome and the telomere sequence and found that this shortened the amount of telomere sequence the cells added to the chromosome, as if “the machinery that regulates telomere length is fooled,” says Shore, into counting the extra Rap1 molecules as part of the telomere.

    “It is clear from this experiment that Rap1 is being counted,” Gottschling agrees. “But then that raises the question of how you count proteins.” The Shore team suggests that the binding of 15 or so Rap1s alters the shape of the telomere so that telomerase can't bind to the end. When the number drops to fewer than that, they propose, the enzyme can bind and elongate the telomere again.
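The count-and-threshold idea can be caricatured in a few lines of code. In the sketch below, the ~15 Rap1 sites per ~300-base-pair telomere come from the article, but the threshold rule, elongation rate, and attrition rate are invented for illustration — this is a toy model, not the Shore team's analysis. It shows how counting bound proteins drives telomere length to a set point, and how parking extra Rap1 molecules on artificial sites lowers that set point, "fooling" the cell as in the experiment.

```python
# Toy "protein counting" model of telomere length regulation.
# Roughly 15 Rap1 sites span a ~300-base-pair yeast telomere, i.e. one
# site per ~20 bp (from the article); the rates below are made up.
RAP1_SITE_BP = 20
THRESHOLD = 15   # bound Rap1s needed to shut telomerase out (assumption)

def settle(length_bp, extra_rap1=0, steps=300):
    """Iterate elongation/attrition until the bound-Rap1 count
    hovers at the threshold; returns the final telomere length."""
    for _ in range(steps):
        bound = length_bp // RAP1_SITE_BP + extra_rap1
        if bound < THRESHOLD:
            length_bp += 10   # telomerase elongates while the count is low
        else:
            length_bp -= 2    # slow attrition once telomerase is blocked
    return length_bp
```

In this caricature a short seed grows to about 300 base pairs, while four extra Rap1 molecules bound at engineered sites shift the equilibrium down toward 220 base pairs — the same qualitative outcome as the insertion experiments described above.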

    Whether that model turns out to be right or not, the counting seems to require other proteins. Shore's group found two proteins they call Rap1 interacting factors (Rifs) that bind to Rap1 and help regulate the telomeres. Deleting either of the RIF genes produces longer telomeres, as does a mutation of Rap1 that keeps it from binding to Rif1. In cells missing both RIF genes, the telomeres grow completely out of control, suggesting that the Rifs play a vital role in counting. Shore suspects there are more such helper proteins.

    Shore and others will be eagerly seeking those proteins, as well as answers to the obvious next questions: How are the Rap1 proteins actually counted, and how similar is the system in mammals? “We are just starting to get an idea [of] who the players are,” he says. “How they work is another question.”

  5. Mathematics

    Smart Neurons Offer Neuroscience a Math Lesson

    1. Barry Cipra

    SAN DIEGO—The modern metaphor of the brain as computer might suggest it is nothing more than billions of simple on-off switches, elaborately wired together. Far from it, neuroscientists are learning: Real neurons are exceedingly complex devices that can, for example, change their threshold or time lag for responding to incoming signals as the properties of their membranes change. But which properties are important for which processes? Some of the answers, says Nancy Kopell of Boston University, may emerge from a synapse between two disciplines, mathematics and neuroscience: the mathematical analysis of patterns of neurological activity.

    In a presentation at the joint meetings of the American Mathematical Society and the Mathematical Association of America, held here last month, Kopell described how she and her mathematically minded colleagues are probing phenomena such as the synchronized oscillation of neurons during sleep and the choreographed firing of neurons controlling the lobster gut. By analyzing sets of differential equations describing a neuron's properties and behaviors, the theorists are learning how biophysical changes in individual cells can, in effect, “rewire” a neural circuit, altering how it processes signals. In short, Kopell says, “smart neurons can do smart things, and analysis helps us see how.”

    Model behavior.

    A mathematical model of a neuron (light color) mimics a real cell's ability to change its electrical properties depending on the strength of ion currents across its membrane.


    Much like its crude silicon counterpart, a neuron responds to electrical input arriving from its neighbors through junctions called synapses, which can be either excitatory or inhibitory. It can then respond in a variety of ways, including firing a pulse, bursting (that is, firing rapid salvos of pulses), or switching between firing continually and being silent. Shaping the neuron's response is a range of neuromodulators: chemicals that affect the electrical properties of its membrane.

    Differential equations are the mathematical tool of choice for describing this behavior, just as they are for other time-varying flows of material or energy. Computational neuroscientists have assembled these equations into computer simulations of networks of neurons that can be run much like laboratory experiments. The modeler sets the properties and connections of individual neurons and compares the output of the network with that of real neural systems. The models are becoming sophisticated enough that “you can actually predict certain things,” says Christof Koch of the California Institute of Technology's Computational and Neural Systems Program.
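To make the idea concrete, here is a minimal example of the kind of differential-equation model involved: the FitzHugh-Nagumo equations, a classic two-variable caricature of a spiking neuron (chosen for brevity; it is not one of the specific models discussed in this article). Integrated with a simple Euler step, the model sits quietly at rest when it receives no input but fires repetitively once the input current crosses a threshold — exactly the kind of qualitative behavior change modelers probe.

```python
def fitzhugh_nagumo(I_ext, t_max=200.0, dt=0.01, a=0.7, b=0.8, tau=12.5):
    """Euler integration of the FitzHugh-Nagumo model:
         dv/dt = v - v**3/3 - w + I_ext
         dw/dt = (v + a - b*w) / tau
    where v is a membrane-voltage-like variable and w a slow
    recovery variable. Returns the voltage trace as a list."""
    v, w = -1.0, 1.0
    trace = []
    for _ in range(int(t_max / dt)):
        dv = v - v**3 / 3 - w + I_ext
        dw = (v + a - b * w) / tau
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace
```

Run with `I_ext=0.0`, the voltage relaxes to a resting value near -1.2; with `I_ext=0.5`, the same equations produce sustained large-amplitude oscillations — a "rewiring" of behavior caused purely by a change in one biophysical parameter.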

    But they have limitations. It's hard to identify cause and effect in the simulations—which properties of the neurons being simulated are at the root of the observed behavior. And that gets in the way of extrapolating results from networks of computationally tractable size to actual networks several orders of magnitude larger in the brain, says Roger Traub of IBM's T. J. Watson Research Center in New York, who is himself a pioneer in neuron-network modeling. As Koch puts it, “Analytical approaches are crucial if we are ever to make sense of the data.”

    That's where Kopell and other theoretically minded researchers come in, he says. In addition to running computational experiments on models of neurons, they analyze the underlying mathematics to gain a sense of which neuronal properties are crucial for particular behaviors. For example, working with David Somers at the Massachusetts Institute of Technology, Kopell found from studying the differential equations for certain models how biophysical changes that affect only the shape of the electrical wave form of a bursting neuron can alter the behavior of an entire network of neurons. The wave form, their analysis showed, determined whether the array of cells would synchronize their firing or fire as a traveling wave, in which each cell fires just after the adjacent one—modes of behavior, says Kopell, that “are ubiquitous in the nervous system.”

    Similar analysis helped explain a long-standing puzzle about synchronized neural activity. Neuroscientists have known that inhibitory signals play a role in the synchronized oscillations seen in the thalamus and cortex during different stages of sleep—but just how these signals could get neurons to fire in concert was a mystery. Kopell worked with David Terman at Ohio State University and Amit Bose at the New Jersey Institute of Technology to show how slowly decaying inhibition can synchronize networks of neurons that have particular membrane properties.

    Kopell is also testing her theoretical tool kit against an actual system of neurons studied by Eve Marder of Brandeis University and her colleagues Larry Abbott and Scott Hooper, now at Ohio University: a simple, two-cell component of the stomatogastric ganglion, which controls digestion in crustaceans such as lobsters and crabs. “There is an enormous amount of detail known about the biophysics of each of the cells that are in the network,” Kopell explains, “but what is still reasonably mysterious is how those cells work with one another to create the functionally important output of the network.”

    In the two-cell subnetwork, one cell is bistable, meaning it can remain at either a high or low potential, while the other is a bursting oscillator. The puzzle, Marder explains, is that on its own, the bursting neuron responds to changes in input only by changing the silent interval between bursts. In the two-cell network, however, it changes the duration of both the bursts and the silent phases. In 1991, Marder, Abbott, and Hooper traced the bursting neuron's complex responses to changes in the membrane properties of its companion. Now, Kopell, Abbott, and Cristina Soto, Kopell's student, are extending the analysis to other two-cell networks in which the cells have very different dynamical “personalities.”

    Such analyses are an important adjunct to computer simulations, Kopell says, because they help theorists get to the root of neurons' enormous versatility. If nature can produce surprises with a pair of smart neurons in a lobster's gut, just think what it can do with billions of skillful cells inside a human skull.

  6. Neuroimmunology

    Tracing Molecules That Make the Brain-Body Connection

    1. Elizabeth Pennisi

    A rheumatologist, Ronald Wilder has long wondered why rheumatoid arthritis symptoms so often fade during pregnancy. The obvious answer is hormones, but less clear is how hormones influence the abnormal immune attack that destroys the joints. Now, with several colleagues at the National Institute of Arthritis and Musculoskeletal and Skin Diseases, Wilder is looking to an emerging discipline, one that fuses immunology with neurobiology, endocrinology, and even psychiatry, to provide him with answers.

    While researchers once considered the body's network of immune defenses a system unto itself, they have learned over the past 15 years that it is intimately intertwined with the nervous and endocrine systems. There are direct physical links—neurons that innervate immune organs such as the spleen and lymph nodes, for example. And now, researchers are unraveling the molecular links, which include the interleukins, originally viewed only as regulators of immune cells; neurotransmitters, once thought to act only between nerve cells; and hormones, the endocrine messengers.


    The brain influences immune function through hormones, particularly those in the hypothalamus-pituitary-adrenal axis, and immune messengers in turn affect the brain.


    As these connections come to light, they are helping to explain some previously mysterious correlations between mental and hormonal states and the immune system. Among these insights, discussed in a November 1996 meeting* at the National Institutes of Health, are how increases in corticosteroid hormones affect the immune system late in pregnancy, and how the immune system triggers specific pathways in the brain to produce the fever and fatigue that accompany infections. “We're making important advances in understanding the infrastructure of how these systems communicate and how breaking the communications can result in disease,” says rheumatologist Esther Sternberg of the National Institute of Mental Health (NIMH). Ultimately, researchers say, the information may help design improved therapies for the conditions, such as better drugs for allergies and other immune reactions.

    Vagus express. One set of connections leads from the immune system to the brain. Acting as what psychologist Steven Maier of the University of Colorado, Boulder, describes as a “diffuse sensory organ,” the immune system relays data about incipient inflammation or new infections to the brain. The brain responds by causing a fever and the familiar tired and achy feelings that accompany infections.

    Researchers had fingered certain immune regulators, primarily interleukin-1 (IL-1), interleukin-6 (IL-6), and tumor necrosis factor-α (TNF-α), as the immune system's messengers to the brain, but this idea had always seemed somewhat flawed. Fevers develop within minutes of the injection of inflammatory toxins into laboratory animals—too fast for the messengers, released at the site of the injection, to achieve the high concentrations thought necessary to breach the barrier between the blood and the brain.

    Then, in 1994, Maier and his colleagues and, independently, Robert Dantzer of INRA-INSERM in Bordeaux, France, found evidence suggesting that the immune message takes a neural route to the brain. Usually, injecting immune messengers into the peritoneal cavity surrounding the gut elicits a fever in laboratory rats. But the researchers found that this doesn't happen if they first cut the vagus nerve, which extends from the brain to the kidney, liver, and other organs. “If you cut the vagus, you can block a lot of the symptoms of sickness,” Maier explains.

    Close ties.

    Immune cells (red) stimulate the vagus nerve (seen in cross section, in green) via paraganglia cells (green and yellow).


    Maier and Linda Watkins and Lisa Goehler of the University of Colorado have begun to see how the vagus might pick up the immune signals. They find that clumps of nerve cells called paraganglia, which can stimulate the vagal nerves, are triggered by IL-1 from nearby immune cells. As a result, Maier said at the meeting, the immune messenger doesn't have to travel all the way to the brain. Instead, it “acts locally” on the paraganglia, which stimulate the vagus to relay IL-1's message to the brain. But because the brain can sometimes respond to IL-1 even when the vagus has been severed, Maier says, other nervous-system pathways may be carrying this immune message as well. What's more, he adds, an initial neuronal message may subsequently be reinforced by blood-borne ones. “Immunological agents are recruiting the brain to alter behavior,” making an organism tired, more sensitive to pain, or less interested in food, explains NIMH neuroendocrinologist Philip Gold.

    Other researchers have found that stress, as well as inflammation, can activate parts of these same pathways. Matthew Kluger and his colleagues at the Lovelace Research Institutes in Albuquerque, New Mexico, find that simply introducing a lab mouse to a new environment can prompt an increase in IL-6, raising the body temperature and inducing typical “sickness behavior,” in which the mouse loses interest in eating and exploring. In this case, though, the communication between the brain and the immune system is two-way: The brain apparently registers the stress and then activates certain immune messengers, which in turn signal the brain to cause the body to react. Ultimately, those reactions include deactivation of the immune messengers, creating a complex feedback.

    Interconnections. Indeed, as researchers have known for more than a decade, the brain can shape the immune response. “It has now been shown scientifically: Mental states can influence the body's resistance to disease,” says George Chrousos, an endocrinologist at the National Institute of Child Health and Human Development (NICHD).

    Investigators had long suspected that they act, at least in part, through the hypothalamic-pituitary-adrenal axis (HPA), which is triggered when a stress, say the sight of a potential predator, stimulates production of corticotropin-releasing factor (CRF) by the hypothalamus of the brain. Among other actions, CRF ultimately causes the release of corticosteroids from the adrenal glands. These adrenal hormones have several actions, such as providing a burst of energy by raising blood-sugar concentrations, that enable an individual to deal with a threatening or stressful stimulus.

    In addition, though, the corticosteroids inhibit IL-1 production and thus reduce the inflammatory responses, an effect that is the basis of their use as anti-inflammatory drugs. In the body itself, these hormones may damp down an immune response that has run its course, because the effects of IL-1 and -6 on the brain include triggering the HPA by stimulating CRF production, which creates an immune-suppressing feedback.

    That damping down can have a downside, though, possibly leading to decreased ability to fight infections. At Ohio State University, Janice Kiecolt-Glaser and Ronald Glaser found that in stressed medical students, overactive HPA responses decreased the effectiveness of the hepatitis-B vaccination. Then, working with immunologist John Sheridan, also from Ohio State, they found that the flu vaccine was less likely to work in people caring for spouses with Alzheimer's disease—a task known to cause a lot of stress—than in people of similar age and background not in those caretaker roles.

    Yet, the immune-suppressing feedback can be helpful, says NICHD's Chrousos, as it may explain the lessened rheumatoid arthritis symptoms during pregnancy. His group found that during the last trimester, the fetus produces CRF that gets into the mother's circulation and tends to make the HPA axis overly active. In addition, the estrogen increase during pregnancy may stimulate cortisol secretion. Test tube studies suggest that corticosteroid concentrations similar to those in late pregnancy suppress the cell-mediated branch of the immune system, which causes the symptoms of rheumatoid arthritis.

    Conversely, there is mounting evidence that a depressed HPA axis, resulting in too little corticosteroid, can lead to a hyperactive immune system and increased risk of developing autoimmune diseases. NIMH's Sternberg and her colleagues found the first evidence for such a connection 8 years ago in studies of two strains of rats that differ in their inflammatory responses and their susceptibility to many experimentally induced autoimmune diseases. They found that Lewis rats, the sensitive strain, release much less CRF and less corticosteroid in response to stress or exposure to antigens than do the resistant Fischer rats.

    Such a correlation does not necessarily prove a connection, but at the meeting, Sternberg and Barbara Misiewicz of NIMH reported that transplanting cells from the hypothalamus of embryonic Fischer rats into adult Lewis rat brains makes the recipients as unlikely to develop autoimmune disease as are the Fischer rat donors. By producing more CRF than the Lewis rat's own brain does, the transplanted tissue may prompt more vigorous HPA responses and tame the animals' hyperactive immune system.

    A depressed HPA axis may contribute to an overly sensitive immune system in people, too. “The evidence is mounting that these principles apply not only in chickens, not only in rats, but also in humans,” Sternberg emphasizes. For example, at the University of Trier, Germany, psychologist Angelika Buske-Kirschbaum has found that children known to suffer from atopic dermatitis—allergies that result in itchy skin and rashes—or from asthma have a blunted HPA response. When asked to tell a story or do mental math, these children show less increase in the glucocorticoid concentrations in their saliva than do their healthy peers. “They showed this very dramatic difference in the salivary cortisol response,” says Sternberg. The researchers propose that the children's lower HPA function may make them susceptible to allergies in the same way it makes Lewis rats prone to autoimmune disease.

    These kinds of studies, suggesting intimate links between the endocrine and immune systems and mental states, are inspiring new studies that aim to draw even tighter connections. NIMH's Gold, for example, hopes to learn how the neuroendocrine patterns of depression affect the immune system. In one form of the disease, the HPA axis is underactive, suggesting that those with this condition “may be immunologically disinhibited,” he says, while in another form, corticosteroid levels are unusually high. Ultimately, he hopes that studies of these interconnections will “provide us targets for drug treatments,” Gold says, not only for depression but also for the physical symptoms associated with this condition. At the same time, his group hopes to learn whether inflammation or disease can cause depression to flare.

    That holistic approach is what neuroendocrine immunology is all about, its pioneers argue. “This is the coming together of these fragmented sciences,” says neuroscientist Bruce McEwen of Rockefeller University in New York City. “We're putting the body back together again.”

    • * The Third International Congress of the International Society for Neuroimmunomodulation was held in Bethesda, Maryland, 13-15 November 1996.

  7. Atomic Physics

    Atoms Take a Turn for the Better

    1. Charles Seife

    Every time a 747 jetliner maneuvers, patterns of light and shadow in a device called an interferometer measure the change in angle. Now, photons have a rival for sensing small rotations: interfering atoms. In the 3 February issue of Physical Review Letters, Massachusetts Institute of Technology (MIT) physicists describe how they used an atom interferometer, which takes advantage of the wavelike nature of matter described by quantum mechanics, to measure rotations as subtle as a quarter of a degree per hour. The paper marks a first step toward practical applications for atom interferometers, which physicists first developed in 1991.

    The sensitivity of the MIT instrument is “on par with the interferometer in a 747,” says Edward Smith, a member of the team that built it. And it may be just the beginning for this atom-based instrument. Because atoms have wavelengths many orders of magnitude shorter than those of light, he adds, “in the end, atom interferometers will be 10,000 times better than the very best commercial optical interferometers … and probably less costly.”

    Like a beam of light, a beam of atoms can be split with a fine grating, sent down separate paths, and brought together again. Because of atoms' wavelike nature, the converging beams produce an interference pattern of “bright” and “dark” spots, which indicates the relative arrival times of the crests and troughs in the two beams. Anything that affects the path lengths should shift the interference pattern—and because atom wavelengths are so short, atom interferometers promise unparalleled sensitivity.

    Measuring rotations was a tempting application. Interferometers can sense rotations because any twisting of the interferometer shortens one beam's path and lengthens the other, so when the waves reach the end of the device, they are no longer in phase. The phase difference—manifested in the interference pattern—shows how much the interferometer has twisted.
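The payoff of short wavelengths can be made concrete with the Sagnac phase shift for a rotating interferometer, Δφ = 4πAΩ/(λv), which grows as the wavelength λ and the beam speed v shrink. The numbers below — a sodium atom moving at 1000 m/s, helium-neon laser light, a 1 cm² enclosed area, and Earth's rotation rate — are illustrative choices for the sketch, not the MIT experiment's actual parameters.

```python
import math

h = 6.626e-34      # Planck constant, J*s
m_Na = 3.82e-26    # mass of one sodium atom, kg

def de_broglie(m, v):
    """Matter wavelength: lambda = h / (m * v)."""
    return h / (m * v)

def sagnac_phase(area, omega, wavelength, speed):
    """Sagnac phase shift for enclosed area A and rotation rate Omega:
    dphi = 4 * pi * A * Omega / (lambda * v)."""
    return 4 * math.pi * area * omega / (wavelength * speed)

v_atom = 1000.0                      # m/s, typical thermal-beam speed
lam_atom = de_broglie(m_Na, v_atom)  # ~1.7e-11 m, far shorter than light
lam_light = 633e-9                   # HeNe laser wavelength, m

omega_earth = 7.29e-5                # Earth's rotation rate, rad/s
area = 1e-4                          # 1 cm^2 enclosed area, m^2

phi_atom = sagnac_phase(area, omega_earth, lam_atom, v_atom)
phi_light = sagnac_phase(area, omega_earth, lam_light, 3e8)
```

For the same enclosed area, the atom picks up a phase shift roughly ten orders of magnitude larger than the photon — in principle. In practice, the far higher fluxes and larger enclosed areas available to light eat into much of that raw advantage, which is why the projected practical edge quoted above is closer to a factor of 10,000.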

    To realize this scheme with atoms, the MIT physicists needed exquisite vibration control and larger, finely etched gratings to control the atoms. The technology was ready a year ago, and it has now yielded a device that rivals commercial optical interferometers.

    Mark Kasevich, a physicist at Stanford University, already has something even better in the works, he says: an atom interferometer that will be two orders of magnitude more sensitive than the best commercial devices. “There's a number of groups quietly trying to improve [atom interferometers],” says Steven Chu, also at Stanford. “It's getting exciting.”
