News this Week

Science  20 Feb 1998:
Vol. 279, Issue 5354, pp. 1129

    The Nucleus's Revolving Door

    1. Elizabeth Pennisi

    Many different transport systems carry molecules in and out of the nucleus, but a small protein called Ran seems to act as the dispatcher, coordinating the direction of transport

    Like any military headquarters, the cell's command center—the nucleus—needs to be in constant communication with its troops to keep the cell running smoothly. Not only does it have to send out its orders, such as the messenger RNAs (mRNAs) that provide the instructions for making proteins, but it also has to receive intelligence from the far-flung reaches of the cell, often in the form of proteins that help regulate the activity of genes in the nucleus. But despite the importance of these communication channels, cell biologists have had little information about just how they operate—until recently, that is.

    New work, much of it done over the past 2 years, shows that cells have evolved a highly complex system for transporting large molecules such as proteins and mRNAs into and out of the nucleus. Researchers have identified a host of different, although structurally related, proteins that escort molecules through the nuclear membrane. “There are a larger number of [nuclear transport] pathways than we thought,” says cell biologist Stephen Adam of Northwestern University in Evanston, Illinois.

    This diversity of transport proteins opens the way to an intricate division of labor. Some move RNA out of the nucleus, for example, while others work only to get proteins in. “You can regulate any particular [pathway] without shutting down [transport] globally,” says Mary Dasso, a cell biologist at the National Institute of Child Health and Human Development (NICHD).

    Ins and outs.

    Ran's role in nuclear transport is still controversial, but in one model, RanGTP helps unload newly imported molecular cargo and then becomes part of an outgoing cargo complex. Once outside, RanGTP becomes RanGDP to unload the exported cargo. RanGDP may also help load cargo about to enter the nucleus.


    Besides identifying the transporter proteins that do the heavy labor of carrying molecules across the membrane, researchers have caught sight of one protein that seems to act as a foreman. Known as Ran, it may oversee the various nuclear transport pathways. It also seems to link them with critical events in the life and death of the cell. While coordinating nuclear transport, for example, Ran interacts with other molecules that help determine when cells divide or whether they commit suicide in so-called programmed cell death, which rids the body of cells with damaged DNA.

    Such findings are causing more and more researchers to sit up and take notice of the pivotal role nuclear transport plays in the life of the cell. Because critical processes such as cell division and protein synthesis depend on molecules that have to be carried into or out of the nucleus at exactly the right time, nuclear transport is “central to understanding cell regulation,” says Günter Blobel, a cell biologist at Rockefeller University in New York City.

    Gaining nuclear access

    Although the nuclear membrane is peppered with structures called nuclear pore complexes—conglomerations of some 100 proteins arranged into a tunnel—only small molecules can diffuse freely in and out of the nucleus. Larger traffic must push or be pushed through. Yet every second, RNAs are leaving, typically complexed with certain proteins. At the same time, thousands of proteins are coming in.

    Cell biologists began getting their first clues to how all these molecules make it through the nuclear membrane about 12 years ago when they noticed that certain proteins that need to get into the nucleus all contain a similar stretch of amino acids. Researchers speculated that the stretch might be some kind of shipping tag, marking these proteins for transport across the nuclear membrane. That idea was confirmed about 6 years later, when researchers identified a protein called importin α that recognizes and binds to the sequence. Once that binding occurs, a third protein, importin β, joins the other two, and the resulting complex of molecules makes its way through the nuclear pores.

    This early picture of nuclear transport seemed simple—two or three proteins joining together to escort another into the nucleus. But then several groups began to realize that other proteins might also serve as escorts into—and perhaps out of—nuclear pores. Rather than call them importins, a few began to think of these transporter proteins as karyopherins, from the Greek words meaning “nucleus” and “carry.”

    In 1995, cell biologist Gideon Dreyfuss and his colleagues at the University of Pennsylvania got a clue that helped them locate a new karyopherin when they found a protein with a different nuclear localization sequence. They made the discovery while studying a protein called hnRNP A1 that somehow accompanies mRNAs out of the nucleus and then quickly shuttles back in again. Yet, it lacks the localization sequence found in the other proteins destined for the nucleus.

    By making a systematic series of changes in the hnRNP A1 protein and testing to see whether the altered proteins were still imported, Dreyfuss and his colleagues homed in on a new import password, called M9, that consists of 38 amino acids. “This was the second signal defined for import,” notes biochemist Iain Mattaj of the European Molecular Biology Laboratory (EMBL) in Heidelberg, Germany. The Pennsylvania researchers later showed that hnRNP A1 is carried through nuclear pores not by the two importins, but by a different escort protein, which they named transportin. Transportin does, however, resemble importin β; their amino acid sequences are about 24% identical.


    Dreyfuss discovered this alternate pathway in human cells grown in the laboratory. But human cells aren't unique in having multiple import routes. While Dreyfuss was tracking down transportin, Blobel's team at Rockefeller showed that the yeast version of transportin, karyopherin β2, also helps RNA-binding proteins get into the nucleus. Since then, the Rockefeller group has found additional yeast escort proteins.

    They were aided in their quest by the sequencing of the yeast genome, completed in 1996. They already knew that yeast has an importin β counterpart, a protein called karyopherin β1, and they identified several sequences in the yeast genome that seemed to code for similar proteins. Meanwhile, another research team searched through the newly acquired database of yeast genes and came up with 13 possible importin relatives.

    In the past year, Blobel's group has determined the roles of several of these proteins. Two of them, karyopherin β4 (Kap123) and karyopherin β3 (Kap121), seem to shepherd into the nucleus proteins destined to become part of the protein factories called ribosomes. Ribosomes are assembled inside a nuclear body called the nucleolus before being shipped out again to the cytoplasm, where protein synthesis takes place. Just 3 months ago, the Rockefeller group announced two more karyopherins, one that imports a protein involved in the processing of transfer RNA (tRNA) and another that imports mRNA binding proteins. “The prediction is that there are as many [transport] pathways as there are factors,” notes Dirk Görlich, a biochemist with the Center for Molecular Biology at the University of Heidelberg in Germany.

    Taking leave

    But while researchers unraveled more and more of the intricacies of transport into the nucleus, they had only the barest details about how molecules leave the nucleus. The first clues came out of labs independently studying the AIDS virus, HIV. In 1995, Susan Taylor's team at the University of California, San Diego, and Reinhard Lührman's team at the Institute for Molecular Biology and Tumor Research in Marburg, Germany, showed that the mRNA made by the virus leaves the nucleus in association with a viral protein called Rev. Rev, they discovered, contains a stretch of eight amino acids, including four leucines, that tags it for export out of the nucleus with its RNA cargo.

    Other proteins that need to be transported out of the nucleus bear similar eight-amino-acid sequences, Taylor's team and others found. The leucine-rich sequence, it appears, is an export tag that enables Rev and the other proteins to link up with their escorts, just as the import signal enables molecules to attach to proteins that would allow them access into the nucleus.

    But no one was able to tease out the export transporters that recognize the leucine-rich password. Then about a year ago, researchers searching for new AIDS drugs made a discovery that helped unlock the secret. Barbara Wolff and her colleagues at the Sandoz Research Institute in Vienna, Austria, found that an experimental antibiotic called leptomycin B blocks both Rev activity and the export of other proteins bearing a leucine-rich password. Because other work had shown that in the yeast Schizosaccharomyces pombe, this antibiotic interacts with a protein called CRM1, whose only known function was helping to maintain the chromosomes, researchers began to suspect that CRM1 might be an elusive export escort.

    That finding caught the attention of researchers who were already hot on CRM1's trail because it resembles importin β. Earlier, Gerard Grosveld and Maarten Fornerod at St. Jude Children's Research Hospital in Memphis, Tennessee, had discovered that CRM1 interacts with a protein that is part of the nuclear pore complex. On learning of the Sandoz results, EMBL's Mattaj and Fornerod, now also at EMBL, immediately began looking at leptomycin's effects on nuclear export in the frog oocyte, where a large nucleus makes it easy to monitor the comings and goings of proteins and RNA.

    They found that, as in the mammalian cells studied by Wolff, leptomycin inhibits export of Rev and of certain small RNAs that need to go into the cytoplasm to pick up proteins that help them process mRNA when they return to the nucleus. In contrast, adding excess CRM1 to the nucleus speeds up the export of these substances. And finally, the group demonstrated that CRM1 binds to the leucine-rich export tag, except in the presence of leptomycin. At the same time, several other groups, including those of Karsten Weis of the University of California, San Francisco, Catherine Dargemont of the Pasteur Institute in Paris, and Eisuke Nishida at Kyoto University in Japan, were coming to the same conclusion about CRM1.

    The resemblance between CRM1 and importin β suggested that there might be other links between import and export. That expectation was borne out when Görlich and his colleagues set out to pin down how importin α gets back out of the nucleus once it has carried in its protein cargo. They found that importin α leaves the nucleus with the help of a protein called CAS, whose amino acid sequence shows that it is an importin β relative.

    Finding all these links between nuclear export and import is “conceptually very important,” says Weis. It not only means that much of the knowledge about import receptors can be applied to export receptors, but it also begins to point to ways in which transport into and out of the nucleus might be coordinated. If similar proteins—or even the same proteins—are responsible for transport in both directions, the molecules that they interact with, and are presumably controlled by, are probably similar. Researchers are now boring in on the protein Ran as central to this coordination.

    Ran as dispatcher

    One clue to Ran's key role is that it turns up consistently in both the import and export complexes. Görlich's group found, for example, that a form of Ran has to bind to a complex of importin α and CAS for the importin to be transported out of the nucleus. And cell biologist Ian Macara and his colleagues at the University of Virginia, Charlottesville, made a similar finding, showing that Rev and other proteins can't be exported from the nucleus unless this form of Ran is present. Another hint of Ran's coordinating role is that it exists in two different forms, one in the nucleus and one in the cytoplasm. Imbalances between those forms might serve to specify the transport direction.

    In the form found in the nucleus, Ran appears to be linked to the energy-carrying molecule GTP, while in the cytoplasm it's mainly bound to GDP, the low-energy, spent form of the molecule. That's because Ran is a GTPase, an enzyme that splits GTP into GDP and phosphate, and the cytoplasm contains two other proteins that trigger Ran to split GTP. In contrast, the nucleus contains a protein that helps Ran shed GDP and recharge itself with fresh GTP.

    Presumably, the distribution of these Ran-related proteins results in higher concentrations of RanGTP inside the nucleus than outside it, and, as Görlich suggested in 1996, that may be key. “The asymmetry in RanGTP and RanGDP may be very relevant in [determining] the direction of transport,” says Blobel. Larry Gerace, a cell biologist at The Scripps Research Institute in La Jolla, California, agrees: “It's clearly a central player. But,” he adds, “how it functions is highly controversial.”

    Some researchers have contended that RanGTP provides energy needed to move molecules in and out of the nucleus, but most now think that the chemical form of Ran simply determines whether molecules are imported into the nucleus or exported out. As a direction sensor, it would keep newly imported proteins from slipping immediately back out again by helping them break away from their transport proteins. And for export complexes in the nucleus, binding with RanGTP signals that it's time to move out.

    Once outside, though, the RanGTP may be converted to RanGDP. Gerace notes that RanGAP1 and RanBP2, the two proteins that activate Ran's GTP-splitting activity, accumulate right outside the nucleus near the pores—just the right spot to ensure that any RanGTP quickly becomes RanGDP. The latter may then help proteins seeking to enter the nucleus rendezvous with their importin escorts. After the RanGDP accompanies the import complex through the nuclear membrane, the whole process can be repeated when RanGDP becomes RanGTP or is displaced by a RanGTP.

    Although this role for Ran is still conjecture, it's helping to define the direction of the research. “If [Ran] works one way in import and the opposite way [in export], one can make predictions,” Görlich explains. “It makes it much easier to design experiments.”

    The recognition of Ran's central role is also drawing attention to other molecules that interact with Ran or its associated proteins. These molecules don't take part directly in import and export, but, like officials at a border crossing, they can't be ignored. One may help to stabilize import complexes; another may help these complexes accumulate at the entrance of the nuclear pore. With these proteins, the cell may fine-tune protein import and export. For example, NICHD's Dasso says, “our current feeling is that [one such molecule] is coordinating nuclear transport with other nuclear events,” such as mitosis or gene activation.

    That fine-tuning can be critical to a cell's well-being, says Jonathan Pines, a cell biologist at the Wellcome CRC Institute in Cambridge, England. For example, a protein complex containing the enzyme CDC2 and its regulatory partner cyclin B appears to help coordinate activity in the nucleus and the cytoplasm during cell division. Once mitosis begins, the complex seems to accumulate in the nucleus. But until then, it “is constantly being imported and exported,” says Pines. Exactly how is unclear, but by shuttling back and forth, the complex is in constant communication with the nucleus and cytoplasm. “The transport of cyclin B is going to be very important for the regulation of the cell cycle,” Dasso says.

    Similarly, a cell might regulate its protein productivity by controlling how much mRNA gets out of the nucleus to the ribosomes, where proteins are put together, and how fast this transport takes place. Because the nuclear membrane separates the place where genetic information is encoded and transcribed into RNA from where the RNA message is translated into a protein's amino acid sequence, “you can introduce a great number of regulatory steps,” Görlich explains.

    This realization is fueling a flurry of activity among cell biologists eager to describe the other import and export pathways. As they discover which molecules travel these many roads, they are rapidly drawing in researchers who once never gave much thought to transport within the cell. Now these researchers are discovering that the journey can be as important as what happens after a molecule arrives. “The field is going to explode in the next year or two,” Dasso predicts. “I'm bracing for it to go crazy.”


    Black Sea Deluge May Have Helped Spread Farming

    1. Richard A. Kerr

    Imagine 50 cubic kilometers of Mediterranean seawater, a torrent equivalent to 200 Niagara Falls, pouring through a narrow strait and cascading 150 meters into the Black Sea every day. Audible for 500 kilometers, such a deluge would have raised the level of the Black Sea 15 centimeters a day, swallowing a kilometer or two of shoreline—as well as any slow-footed inhabitants. But the flood, possibly the most catastrophic that humans have witnessed, was apparently not imaginary.
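    The figures quoted above hang together arithmetically. A rough sanity check, with two outside numbers assumed for illustration and not taken from the article (Niagara's mean flow of roughly 2,800 cubic meters per second, and a Black Sea surface area in the range of 330,000 to 420,000 square kilometers, the pre-flood lake being smaller than today's sea):

    ```python
    # Back-of-the-envelope check of the flood figures quoted above.
    # Niagara's mean flow and the Black Sea's surface area are assumed
    # outside values for illustration, not data from the article.

    NIAGARA_M3_PER_S = 2_800      # approximate mean flow over Niagara Falls
    SECONDS_PER_DAY = 86_400

    # 200 Niagaras, expressed in cubic kilometers per day
    inflow_km3_per_day = 200 * NIAGARA_M3_PER_S * SECONDS_PER_DAY / 1e9
    print(f"inflow: {inflow_km3_per_day:.0f} km^3/day")  # ~48, close to the quoted 50

    def daily_rise_cm(area_km2, inflow_km3=50):
        """Daily rise in water level if the inflow spreads over a lake of
        the given surface area (1 km = 1e5 cm)."""
        return inflow_km3 / area_km2 * 1e5

    print(f"rise over 420,000 km^2: {daily_rise_cm(420_000):.0f} cm/day")
    print(f"rise over 330,000 km^2: {daily_rise_cm(330_000):.0f} cm/day")
    ```

    A smaller, pre-flood lake surface of a few hundred thousand square kilometers gives a rise on the order of the 15 centimeters a day the researchers describe.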

    The big gush.

    Water pouring through the Bosporus may have suddenly created a larger Black Sea (light blue).

    Based on analyses of Black Sea sediments, oceanographers William Ryan and Walter Pitman of the Lamont-Doherty Earth Observatory in Palisades, New York, have put together evidence that about 7500 years ago, this great deluge really happened, suddenly filling the Black Sea to its present level. “People would have been terrified,” notes Pitman. He and Ryan go on to suggest that the disaster helped spread farming into central Europe, and perhaps even inspired the biblical account of Noah and the flood.

    This catastrophic tale, which Pitman and Ryan presented at a recent American Geophysical Union meeting in San Francisco, is winning support among oceanographers and gaining some serious attention from initially incredulous archaeologists. The case for a flood is “persuasive, although more work needs to be done,” says oceanographer David Ross of Massachusetts' Woods Hole Oceanographic Institution, who has worked extensively in the Black Sea. But when it comes to the flood's impact on human prehistory, archaeologists are cautious. Such a flood may have driven farmers on the Black Sea coast into other parts of Europe, says archaeologist Douglas Bailey of the University of Wales at Cardiff, “but I don't think it was that dramatic. There's no one explanation that covers the emergence of agriculture across Europe.”

    Today, the Black Sea is a brackish inland sea, fed by fresh water from European rivers and saltier, Mediterranean seawater flowing in through the Bosporus strait. In the 1970s and '80s, cores through now-submerged sediments off the northern and western coasts revealed the remains of a coastal plain that was exposed late in the last ice age and into the interglacial warmth of the past 10,000 years. Long after glacial meltwaters began raising world sea levels, it seems, the Black Sea was a freshwater lake, much smaller and lower than today's sea; it was cut off from the Mediterranean because that sea's level was still below the sill of the Bosporus.

    Evidence that a rising Mediterranean suddenly refilled this lowered Black Sea emerged from a joint Russian-U.S. expedition in 1993, during which researchers used seismic waves to image the layers of sediment at the bottom of the Black Sea. If rising waters had crept slowly across the coastal plain, they would have deposited a wedge of sediment as they went. But as Ryan, Pitman, and colleagues reported in Marine Geology last year, they saw no sign of that. Instead, they found a thin, uniform dusting of sediment, consistent with a geologically instantaneous refilling of the Black Sea.

    In addition, radiocarbon dating of the shells of the first salt-tolerant molluscan invaders from the Mediterranean yielded the same age—7550 years before present, plus or minus 100 years—regardless of whether the shells came from deep, permanently flooded sediments or from the shallow shelf. If the refilling had been gradual, the team reasoned, the shells in deeper water would have been laid down first.

    Finally, seismic probing has shown that the hard-rock basement beneath the sediments filling the Bosporus channel lies at a depth of nearly 100 meters, rather than 35 meters, as had been thought. So the floodwaters could have cut a very deep channel through the sediments and down to bedrock, letting the water spill through far faster.

    Oceanographers are slowly accepting the notion of a catastrophic refilling. “I think they're probably right about the flood,” says oceanographer Michael Arthur of Pennsylvania State University in University Park. But Ryan and Pitman go far beyond that, proposing that the flood also fostered the spread of agriculture across Neolithic Europe. By 9000 years ago, farming—both cultivating grains and raising livestock—had originated in southwestern Asia; by 8000 years ago, it had spread to Greece and into the Balkans, including Romania and Bulgaria. Farming stayed in this region for some centuries, then surged across eastern Europe and into central Europe east of the Rhine River at about the same time as the flood, Bailey notes. Archaeologists debate whether the migration of people or the passing of seeds and animals from neighbor to neighbor drove the dispersion of farming. Pitman and Ryan argue for mass migration.

    “We would say this flood caused a diaspora,” says Pitman. The timing is right, he says, to have driven Neolithic farmers up the rich river valleys into central Europe, as well as Egypt and southern Mesopotamia, where a new and distinctive farming culture appears at about that time. In the Mesopotamian kingdoms, the shaken immigrants' tales might have grown into the Sumerian flood myth and eventually evolved into the biblical flood, he suggests.

    Others aren't so sure that the Black Sea flood was behind agriculture's spread. Arthur, for one, argues that the timing may be off. He notes that Pitman and Ryan date the flood to the same radiocarbon age as the first sediments laid down after the flooding, which were black and organic rich and therefore formed in conditions lacking oxygen. But Arthur thinks that the flooding may in fact have occurred 2000 years earlier. According to his geochemical model, that's how long it would take to remove all the oxygen from the dense, salty water that flowed into the deep Black Sea. If so, the flood would have been too early to account for the arrival of new farmers in Europe.

    Archaeologists also remain reluctant to link the flood to major upheavals in human history. One of them, Peter Bogucki of Princeton University, says he is “fascinated” by what Pitman and Ryan are doing, but without any direct evidence that its effects cascaded throughout Europe, he is “not ready to see the flood as the trigger for massive, continental-scale change. … The spread of agriculture [was] a very complicated event.”

    As for the flood myths, no one will ever be certain of their origins, concedes Pitman. But researchers can hope to refine the crucial timing of the flood and its putative effects. More data on just when and where the practice of farming spread may help prove—or disprove—this flood story.


    Failure Isn't What It Used to Be ... But Neither Is Success

    1. Jon Cohen

    Chicago—In his kickoff speech to the main annual U.S. AIDS meeting, held here 1 to 5 February, retrovirologist Ashley Haase of the University of Minnesota, Minneapolis, set the tone for the gathering with a quote from novelist William Faulkner: “If it aint complicated it dont matter whether it works or not because if it aint complicated up enough it aint right.” If Faulkner's logic is correct, AIDS research is on the right track: As the 3500 researchers who attended the Fifth Conference on Retroviruses and Opportunistic Infections heard over and over again, the more we know about the workings of HIV, the more complicated the picture gets.


    Although the viral load in 45 patients is heading back to pretreatment levels, their gains in CD4 cell counts have held up.


    Take the new AIDS treatments that are helping HIV-infected people live longer, healthier lives. Until now, many researchers have assumed that a treatment failed when a patient's virus levels increased, but new data suggest that it's not quite that simple. Their immune cells can remain high even when the virus is thriving—perhaps because other factors besides viral levels can affect immune-cell production, powerful new tests suggest. But there's some bad news as well: Even when drugs succeed in keeping HIV at bay, an odd, newly discovered side effect could complicate treatments, according to some physicians.

    The meaning of failure. Clinicians routinely judge the success of anti-HIV treatments by their ability to drive down the amount of the virus in a person's blood (the “viral load”). A good treatment quickly knocks down viral load to the point at which the most sensitive assays can't detect it, allowing white blood cells called CD4s, which the virus selectively targets and destroys, to rebound. If the virus returns, the treatment is usually deemed to have failed. But now, several investigators have noticed something unexpected: In some patients, the resurging virus doesn't wipe out gains in CD4 counts.

    Steven Deeks and his co-workers at the University of California, San Francisco (UCSF), reported a study of 45 patients taking a combination of anti-HIV drugs, including ones that inhibit the viral protease—an enzyme that HIV uses to assemble itself. These patients' viral loads had dropped at first, but after about 6 months bounced back to near-original levels. Yet, after 18 months, their CD4 counts remained higher (by roughly 125 cells per cubic millimeter of blood) than they were before treatment (see graph). In one case, the treatment failed to decrease viral load at all, yet the patient's CD4 count jumped by about 500. Because these patients had only 77 CD4s on average before treatment (the normal range is 600 to 1200), such gains may be enough to help ward off opportunistic infections that are the hallmark of AIDS. “Treatment failure is not synonymous with clinical failure,” says the University of Pittsburgh's John Mellors, a leading AIDS virologist. Deeks says he does not expect the benefits to last indefinitely, however. “At some point, [these patients are] going to progress,” he predicts.

    Researchers floated several explanations for this disconnect between CD4 counts and viral load. David Ho, head of the Aaron Diamond AIDS Research Center in New York City, is investigating whether the similar patients he's following have developed mutant viruses that are resistant to the drugs but are less able to destroy the immune system. “The virus is trying to escape from the drugs, but by making those mutations, the virus is not fit enough,” suggests Ho. Others speculated that when treatment knocks back the virus even temporarily, the immune system gets a chance to recharge itself.

    The immune system at work. Endocrinologist Marc Hellerstein of the University of California, Berkeley, offered intriguing data of his own that may provide a different explanation. Hellerstein's group, working with Joseph McCune's laboratory at UCSF, has developed a powerful new tool—which it published in the 20 January Proceedings of the National Academy of Sciences—for determining how many CD4 cells a person makes each day. Researchers have long sought such a technique to help reveal the intricacies of the battle between the immune system and HIV (Science, 21 November 1997, p. 1399). “I think they've made a genuine contribution to the field,” says Ho.

    The technique relies on tracking the amount of DNA the body synthesizes as it makes new cells. Hellerstein and colleagues first labeled deoxyribose, a sugar that forms the backbone of DNA, with deuterium, a heavy form of hydrogen. They infused people with the labeled sugar and drew their blood for several days. When they analyzed these blood samples in a mass spectrometer, the researchers could determine how many cells had taken up deuterium in their DNA and thus calculate the rate at which new cells were being made.
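    The arithmetic behind the measurement is simple in outline: the fraction of cells carrying the deuterium label, accumulated over the labeling period, scales up to a whole-body production rate. A sketch with made-up numbers (the labeled fraction, labeling period, and total CD4 pool size below are illustrative placeholders, not data from the study):

    ```python
    # Sketch of the deuterium-labeling arithmetic described above.
    # All input numbers are illustrative placeholders, not the study's data.

    def cd4_production_per_day(labeled_fraction, labeling_days, pool_size):
        """Cells made per day, assuming newly made cells carry the label
        and the pool size stays roughly constant over the labeling period."""
        return labeled_fraction * pool_size / labeling_days

    # Example: 10% of CD4 cells labeled after 5 days of infusion,
    # with a total-body CD4 pool assumed to be 100 billion cells
    rate = cd4_production_per_day(0.10, 5, 100e9)
    print(f"{rate / 1e9:.0f} billion CD4 cells/day")  # 2 billion/day, within
    # the 0.7-12 billion/day range the group reported
    ```

    The real analysis must also account for label dilution and cell turnover, but the mass-spectrometer readout feeds a calculation of essentially this shape.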

    Hellerstein's group applied the technique to CD4 cells separated from the blood of eight patients taking various combinations of anti-HIV drugs, including protease inhibitors; five HIV-infected people not taking protease inhibitors; and five uninfected people. A few intriguing trends emerged. The researchers found an astonishing variation from person to person in CD4 production rates: The lowest was 0.7 billion CD4s a day, and the highest was 12 billion. Those on the most potent drug regimens produced new CD4s at the highest rates, while uninfected people made them at the lowest rates. Among infected people, those with the lowest CD4 counts tended to produce the cells more slowly. Hellerstein also reported that one patient whose viral load increased while his CD4 count remained high was “grinding out the cells” at a furious rate.

    Hellerstein cautions against drawing hard and fast conclusions from these early data. “I think it's more interesting that we can answer these questions than [that] we have answered them,” says Hellerstein, who has high hopes that the assay—which can be used to measure production rates of any type of cell—will lead to insights well beyond AIDS. Nevertheless, he suggests that the results indicate that the treatments may stimulate CD4 production directly, and that the extent of damage to the immune system may determine how well someone responds to treatment. If so, this would challenge the popular notion that AIDS drugs work simply by preventing HIV from killing CD4 cells. “I think that's quite wrong,” asserts Hellerstein. In other words, it “ain't complicated up enough.”

    The price of success. Treatment failure may no longer mean what it did, but neither may success. Twelve groups reported at the meeting that long-term treatment with protease inhibitors causes a puzzling redistribution of fat around the body that could be an omen of other, more serious side effects. The most detailed of these studies, reported by clinician Andrew Carr of St. Vincent Hospital in Sydney, Australia, involved 116 patients taking protease inhibitors for an average of 10 months. Of these, 72 (64%) had fat wasting of the limbs and face with fat accumulation in the gut, a syndrome called lipodystrophy. “I don't think anyone has a good concept of what the mechanism is, or how reversible it is,” says Henry Masur, who heads the critical-care unit at the Clinical Center of the National Institutes of Health, where he and his colleagues have studied 18 patients with the disorder.

    “The data from [the Australian group] have struck all of us,” says clinician Scott Hammer of Harvard Medical School in Boston. “The excitement of success we've experienced has to be tempered by the toxicity.” Carr notes that although lipodystrophy by itself does not appear to be dangerous, it suggests that the powerful protease inhibitors are producing a systemic effect that could lead to more severe problems later on. “Are these people going to get heart attacks and strokes in 10 years' time?” Carr asks. He also worries that the drugs could cause metabolic imbalances that lead to insulin resistance.

    Such concerns led Carr and others to question the common wisdom that treatment should begin as soon as patients learn they are infected with HIV. “These people are going to be on drugs for years,” says Carr. Once again, Faulkner's observation could apply: The common wisdom may not be complicated enough.


    New Appetite-Boosting Peptides Found

    1. Marcia Barinaga

    As the popularity of the ill-fated diet drug combination fen/phen showed, people who are morbidly, or just unhappily, overweight are hungry for a pill to help them shed pounds. At the other end of the spectrum, people whose health is threatened by a lack of appetite, because of chemotherapy or illness, would benefit from an appetite-boosting drug. Given what's likely to be a billion-dollar demand, pharmaceutical companies are scrambling to find drugs to control appetite. Now, researchers at the University of Texas Southwestern Medical Center in Dallas and SmithKline Beecham Pharmaceuticals have found a potential new drug target: In today's issue of Cell, the team, led by Masashi Yanagisawa of UT Southwestern, reports the discovery of two related peptides, from the brains of rats, that trigger eating by the animals.

    The peptides—which the researchers named orexins, from the Greek word for appetite—aren't the first ones found to turn up hunger, but researchers see them as especially intriguing. For one thing, unlike most of the others, these peptides are made only in a brain area called the lateral hypothalamus (LH), previously identified as the brain's “feeding center” because its destruction caused experimental animals to stop eating and starve to death. The orexins may be key to that brain area's normal function.

    What's more, because researchers already know the identity of the receptors through which these peptides work, they can begin to look for drugs that either mimic or block their appetite-enhancing effects. “I think it's terrific,” says Jeffrey Friedman, of Rockefeller University in New York City, whose team discovered the appetite-suppressing protein leptin. “All the available evidence says there are many molecules” involved in the control of eating behavior, “and these two clearly need to be added to that list.”

    Yanagisawa's team members weren't planning to enter this hot field when they began their work. They were collaborating with researchers at SmithKline Beecham's labs in Pennsylvania and the United Kingdom in a search for peptides that activate so-called “orphan receptors.” These are proteins whose amino acid sequences show the structural hallmarks of cell surface receptors but whose triggering molecules are as yet undiscovered.

    Out of this search came two related peptides that activate two related orphan receptors. Further study revealed that the peptides seem to be made exclusively in the lateral hypothalamus. “That was the obvious clue that they may be involved in feeding behavior,” says Yanagisawa.

    To test that hypothesis, Yanagisawa's team injected the peptides directly into the brains of rats. They proved to be powerful appetite boosters: For several hours after the injections, the treated animals ate three to six times more than control rats did. Next, the researchers looked to see what effect starvation has on brain levels of the orexins. If the peptides really govern feeding, Yanagisawa says, those levels should go up when the animals are hungry. “We did the starvation experiment,” says Yanagisawa, “and lo and behold [orexin] was upregulated.”

    The discovery should give physiologists new clues to the puzzle of how the brain controls appetite. “Every new discovery [like the orexins] adds to an understanding of how the system works,” says physiologist Larry Bellinger, who studies feeding behavior at the Baylor College of Dentistry in Dallas.

    Until recently, the LH has been overshadowed by a neighboring area, the ventromedial hypothalamus (VMH). Dubbed the “satiety center” because experiments done in the 1940s showed that its destruction turns animals into chronic overeaters, the VMH has been a hot spot of research for the past few years. Not only are receptors for leptin found there, but so are neurons containing a molecule called neuropeptide Y, which is known to enhance appetite.

    But the idea of satiety or feeding centers is now recognized as too simplistic. Indeed, Bellinger says, the hypothalamus is criss-crossed with “a huge wiring diagram” of neurons receiving and passing on information about the body's nutritional state, such as how many fat cells there are or how much sugar the blood contains. And the LH is almost certainly relaying some of these signals. For example, low blood-sugar levels activate some LH neurons. When triggered, those neurons might release the orexins or another appetite-stimulating peptide discovered 2 years ago in the LH—melanin-concentrating hormone.

    A better understanding of the wiring that controls feeding is essential to developing safe appetite-controlling drugs, says Bellinger. But there is already reason to hope that the orexins and their receptors will make good targets for such drugs. Because the orexins appear to be made only in the LH, they may have fewer other functions in the brain than does neuropeptide Y, whose broad distribution and multiple functions make it harder to find drugs that block only its appetite-inducing effects. “There is no guarantee that [the orexins] won't do other things either,” says Friedman, but the fact that they seem to be restricted to the LH is “good news.” And it's the kind of news that drug companies are sure to pounce on.


    Self-Assembled LEDs Shine Brightly

    1. Robert F. Service

    San Jose, California—There's no easy path to illumination, at least when the light in question comes from organic molecules.

    Devices that coax light from these materials, instead of from traditional semiconductors, could serve as flexible, large-area displays. The leading materials, small organic molecules, do yield bright, long-lived devices, but they have to be laid down from a vapor to create the uniform, thin layers that emit light efficiently—and doing so requires costly machines. Large, chainlike molecules called polymers can also emit light, but it can be hard to control their arrangement and purity, making polymer-based devices less bright and in many cases shorter lived. Now Tobin Marks and his colleagues at Northwestern University in Evanston, Illinois, along with Nasser Peyghambarian, Bernard Kippelen, and their colleagues at the University of Arizona, Tucson, have found a middle way.


    Molecular self-assembly created the light-emitting layer of this organic LED.


    At the Photonics West conference here last month, the researchers offered a hybrid approach that relies on benchtop chemistry instead of elaborate machines to entice small organic molecules to form a stack of precisely controlled layers. The new strategy attaches specially designed chemical linking groups to a series of small molecules. When a substrate is simply dipped into a series of containers holding different building blocks, the molecules link end to end into the desired arrangement, forming polymerlike networks. “From a scientific point of view, it's very exciting work,” says Homer Antoniadis, an organic LED (light-emitting diode) expert at Hewlett-Packard. Antoniadis adds, though, that at this early stage the self-assembly process is too slow to be commercialized.

    Like other strategies for turning out organic LEDs, the new technique aims to create a light-emitting layer sandwiched between conducting layers. When a voltage is applied to the device, negatively charged electrons and positively charged “holes” funnel from the conducting layers into the light-emitting layer. There, they annihilate each other and produce photons of light, in a color determined by the electronic properties of the organic layer.
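The article notes that the emitted color is set by the electronic properties of the organic layer: When an electron and a hole recombine, the photon's wavelength follows directly from the energy released. A minimal sketch of that relation, using a 2.7 eV gap as an illustrative value typical of blue emitters (the article does not report the actual gap of these devices):

```python
# Photon wavelength from recombination energy: lambda = h*c / E.
# The 2.7 eV input below is a hypothetical, illustrative figure for a
# blue organic emitter, not a value taken from the article.
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron volt


def emission_wavelength_nm(gap_ev):
    """Wavelength (in nanometers) of the photon released when an
    electron-hole pair recombines across a gap of gap_ev electron volts."""
    return H * C / (gap_ev * EV) * 1e9


print(round(emission_wavelength_nm(2.7)))  # about 459 nm, in the blue
```

Larger gaps push the emission toward blue and violet; smaller ones toward red — which is why swapping the organic layer's chemistry changes the LED's color.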

    Marks and his colleagues aren't the first to build up these multilayer devices through self-assembly. Researchers have used the strategy to make polymer-based devices, simply allowing each new polymer layer to adsorb to the one below. But Marks hoped to produce brighter, more durable devices by transferring the approach to small, organic light emitters and linking layers together with strong covalent bonds.

    The researchers constructed their devices on a base of glass coated with a transparent electrode of indium tin oxide (ITO). For one typical device, they first dipped the glass into a solution containing chlorosiloxanes, which bind to the ITO, forming a layer with unbound hydroxyl groups exposed on the surface. Next came a bath in a solution of chlorosilane-functionalized triarylamines, which were designed to bind to the hydroxyl groups. These first two layers created a route for positively charged holes to travel from the ITO electrode to the light-emitting layer, which was created next, when the team dipped the glass into a solution of molecules called chlorosilane-functionalized biphenyls. Finally, they coated the top with another electrode.

    At the meeting, Marks presented evidence that the organics assemble themselves into the well-ordered, smooth layers that make the most efficient devices. And when the LEDs were powered up, they emitted blue light about as bright as the glow of a standard television set—on par with the best organic blue LEDs. Marks has shown that the same strategy can produce other colored LEDs as well. Jean-Michel Nunzi, of the French Atomic Energy Commission in Gif-sur-Yvette just outside Paris, says Marks's technique may also “be the way to go” for creating organic-based solar cells.


    Your (Light-Emitting) Logo Here

    1. Robert F. Service

    With a printer no different from one you might hook up to your PC, University of California, Los Angeles (UCLA), physicist Yang Yang is pushing the state of the art in light-emitting displays. At the Photonics West meeting in San Jose, California, last month, Yang reported using an ink-jet printer to create the first polymer-based light-emitting logos.

    Other researchers trying to create polymer displays with ink-jet printers have run into obstacles (Science, 17 October 1997, p. 383). One is that the organic solvents needed to dissolve the best polymer light emitters tend to melt key components of standard printers. Another is that ink-jet printers spray liquid into tiny dots, rather than a continuous film, creating gaps that can cause printed light emitters to short out.

    Yang and his UCLA colleague Jayesh Bharathan hit on the idea of printing not the light-emitting layer itself but a conducting polymer called polythiophene, which is water soluble. The polythiophene pattern, roughly 2 centimeters square, provides an electrical connection from a transparent bottom electrode made from indium tin oxide (ITO) to an overlying layer of a light-emitting polymer called MEH-PPV. The light-emitting material still has to be laid down from an organic solvent, but it does not need to be patterned. Moreover, it also seals any gaps in the printed polythiophene, thereby preventing a short. The device is capped with a metal electrode.

    When the current is turned on, positive electrical charges, called holes, migrate from the ITO electrode through the polythiophene and into the light-emitting polymer. There, they meet up with negatively charged electrons coming from the top electrode and combine, giving off light. The display lights up only in areas where the polythiophene is patterned.

    For now, the UCLA displays shine in just a single color. Cheap, large, full-color display screens, made in one simple ink-jet printing step, will have to wait until water-soluble, light-emitting polymers are available. But display researchers such as Hewlett-Packard's Homer Antoniadis think Yang's strategy of sealing gaps in a printed layer with an adjacent layer will also help. “Yang has shown that you can really make it work,” says Antoniadis.


    Malaria Strains Appear to Gang Up Against Immune Defenses

    1. Nigel Williams

    The battle between parasites and their host organism's defenses has been likened to a molecular arms race: The evolution of one drives the evolution of the other. Although theoretical studies have supported this view of the parasite-host conflict, direct evidence has been hard to come by, because the interactions are complex and field data are sparse. But new research reported on page 1173 provides evidence of just such a battle being waged in West Africa between the malaria parasite and its human hosts. The work suggests that two strains of the parasite have evolved a surprising tactic to defeat immune defenses: cooperation. “This novel synergism between the two strains is a very exciting result and may shape future vaccine development,” says Bryan Grenfell, an epidemiologist at Cambridge University.

    The new work, led by Adrian Hill at Oxford University—with colleagues at Oxford, the U.K. Medical Research Council Laboratories in Fajara, The Gambia, and the London School of Hygiene and Tropical Medicine—studied the dynamics of malaria infections in people living in The Gambia. The parasite that causes the disease, Plasmodium falciparum, is endemic in that country, where 20% of the people carry it in their blood. Malaria provides a rare opportunity to study the molecular arms race between parasite and host because researchers can link specific immunological reactions to specific variants of the parasite. This precision is hard to achieve for most parasitic infections, which involve a huge array of immune responses as the parasites progress through often-complex life cycles.

    Hill's team focused on the initial immune-system reactions that begin shortly after a mosquito injects the parasites into the bloodstream. The parasites migrate to the host's liver cells, where they appear to trigger a highly specific immune response. Molecules of the host defense system called human leukocyte antigens (HLAs) bind to small fragments of proteins from the parasite; specific HLAs bind specific fragments. This HLA binding then initiates an attack by a population of immune cells called cytotoxic T lymphocytes (CTLs), which latch onto HLA-bound fragments and are then activated to kill the parasites. Some parasites slip through these defenses, however. By studying surviving strains, Hill and his colleagues could infer molecular details of the initial CTL response.

    Earlier research had shown that an HLA molecule called HLA-B35 is common in the Gambian population. HLA-B35 binds to a specific fragment from P. falciparum's so-called circumsporozoite protein, provoking CTLs into attacking the parasite. Researchers have found four variants of the parasite in The Gambia that differ in this region of the circumsporozoite protein. Only two of them, called cp26 and cp29, bind with HLA-B35 and hence provoke CTL attack by this route.

    In lab studies, Hill's team found, as expected, that CTLs from malaria patients and from people who had not been exposed to malaria could kill cells displaying fragments of these two variant proteins. But they also discovered that if protein fragments from one of these variants were present in the culture, the CTLs were unable to kill the parasites bearing the other variant. The mechanism is a mystery, but Grenfell likens it to a lock and key: “If a key is completely different from the one that opens the lock, then it will have no effect on the lock, but if the key is just slightly different, it can jam the lock,” he says. One parasite's proteins appear to jam the lock that unleashes CTLs to destroy the other variant parasite.

    This cooperative strategy is highly effective: It works at very low doses, and each variant's protein fragments appear to be equally effective in protecting the other. But the key question is whether this mechanism works to the parasites' advantage in real patients. “There have been other reports of antagonism in lab studies with HIV and hepatitis B virus, but it's not clear what is going on in patients,” says Hill. “There's a lack of field data showing that antagonism works within people.”

    To answer that question, the team studied malaria parasite DNA recovered from 800 infected patients. They analyzed which strains were present in patients' blood and assumed that the variants they found were those that had survived the initial attack in the liver. More than 40% of the patients had been infected with more than one strain of the parasite. When Hill and his colleagues compared the distribution of different strains with what would be expected if the strains infected patients independently of one another, they found the results were highly skewed. “We found a much higher co-occurrence of the two strains containing cp26 and cp29,” says Hill. The cooperative approach did indeed appear to help the two strains survive. “It was a striking result,” he adds.
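The comparison Hill's team made — observed co-occurrence of two strains versus what independent infection would predict — is the logic of a standard contingency-table test. A minimal sketch, using entirely hypothetical counts (the study's actual per-strain tallies are not given in the article):

```python
# Hypothetical 2x2 contingency table for illustration only -- these are
# NOT the study's data. Rows: cp26 present/absent in a patient's blood;
# columns: cp29 present/absent. Total patients here: 800, as in the study.
table = [[120, 180],   # cp26 present: with cp29, without cp29
         [140, 360]]   # cp26 absent:  with cp29, without cp29


def chi_square_2x2(t):
    """Pearson chi-square statistic for a 2x2 table, testing whether the
    two strains occur in patients independently of each other."""
    (a, b), (c, d) = t
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    # Expected count in each cell if strain occurrences were independent.
    expected = [[rows[i] * cols[j] / n for j in range(2)] for i in range(2)]
    return sum((t[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))


stat = chi_square_2x2(table)
print(f"chi-square = {stat:.1f}")  # values above ~3.84 reject independence at 5%
```

With these made-up counts the statistic comes out around 12, well past the 5% critical value for one degree of freedom — the kind of skew toward joint cp26/cp29 infections the team describes.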

    The team also compared patients who had the HLA-B35 molecule with those who did not. They found that the cp26 and cp29 variants were more common in the blood of the HLA-B35 group. “These results provide evidence that HLA molecules influence the strains of parasites causing malaria infections,” says Hill. It shows the specific interactions between host and parasite molecules you would need for coevolution, he says. It also shows that particular HLA molecules play a key role in particular parasite strategies, and any change to them could shift the ground in favor of the host. “It's meat on the bones of the [coevolution] idea,” says immunologist Jonathan Howard of the Institute of Genetics at the University of Cologne in Germany.

    Some researchers say this sort of cooperation may not be limited to malaria parasites. “These are very interesting results, which suggest that the interdependency of the two variants shows that they are gaining a mutual advantage. I would expect to find this phenomenon in other situations,” says Howard. “Antagonism has been shown for HIV proteins in the lab, and you'd expect it might also occur in the body.”

    The results also have sobering implications for vaccine development. “On the face of it, it's bad news,” says Hill. A vaccine containing a protein fragment from one strain of malaria parasite could suppress the immune response to a related strain. “Some malaria vaccine programs have been looking at inducing responses to the liver stage of the parasite, but these studies suggest that use of some variant proteins from this stage may be counterproductive,” he says. “The work shows just how difficult it may be to manipulate immune responses to our advantage,” says Howard.
