News this Week

Science  03 Jan 1997:
Vol. 275, Issue 5296, pp. 27


  1. Cell Biology

    The Many Faces of WAS Protein

    1. Carol Featherstone
    1. Carol Featherstone is a freelance writer in Cambridge, U.K.

    The protein made by the gene defective in Wiskott-Aldrich syndrome may have multiple functions, interacting with both the cell's internal skeleton and its growth-control pathways

    How could a single errant gene cause everything from bleeding problems to eczema to cancer? That's the puzzle immunologists have faced in the rare hereditary disease known as Wiskott-Aldrich syndrome (WAS). Boys who inherit the most severe forms of this condition, which is transmitted by a gene carried on the X chromosome, have immune deficiencies that lead not only to eczema but also to frequent infections. They have few platelets—the tiny cells needed for normal blood clotting—and those few are abnormal. And WAS patients often develop immune-cell cancers—lymphomas and leukemias—that kill them by the age of 30. Now, a flurry of recent results from several labs has begun to connect these pieces of the puzzle.

    The work is not only starting to reveal how a single defective gene can wreak such diverse havoc, but is also pointing to some intriguing new connections between the internal communication pathways of normal cells. It suggests that WASp, the protein made by the gene that is defective in WAS patients, is a multifaceted molecule that can interact with both the cell's internal skeleton and the signaling systems that control a cell's responses to its environment. “It has been a really important year,” says Tomas Kirchhausen, whose group at Harvard Medical School is studying WASp. “For a long time, we've known that changes in the cytoskeleton are important aspects of the cell's response to its environment. Now we have a candidate molecule that seems to connect the cytoskeleton and signal transduction pathways.”

    On the one hand, WASp may help regulate the highly malleable internal network of protein filaments known as the cytoskeleton. Filaments made of the protein actin, for example, rapidly disassemble and re-form to help immune cells move in response to external signals, such as the inflammatory molecules that draw them to infection sites, and also during the cell-to-cell interactions needed to trigger antibody production and other immune responses.

    Exactly how receptor signals trigger this actin filament reorganization has been unclear. But the new work indicates that WASp may relay the signals by acting “like a bridge or a scaffold linking components on the plasma membrane with components on the cytoskeleton,” says Kirchhausen. And on the other hand, WASp may provide a hub that connects receptors to pathways that regulate immune-cell proliferation, which is essential for normal immune responses. If the functioning of the WASp protein is impaired by mutation, the result could then be failure of both immune responses and the cell's normal growth-control systems.

    Buzzing about WASp

    The current excitement about WASp had its origins in 1994 when Jonathan Derry and Uta Francke at Stanford University, in collaboration with Hans Ochs at the University of Washington, Seattle, cloned the gene that is mutated in WAS. Its sequence provided an early clue that the gene's protein product might have multiple functions. It showed, Ochs says, that “the gene is very complex, with well-defined regions that give the protein several faces.” The first one recognized, however, turned out to be deceptive: a region containing many residues of the amino acid proline—a feature often seen in transcription factors, proteins that activate gene expression. If that were WASp's function, the protein should reside in the nucleus, where the genes are. But the work this year has instead helped pin down the protein to a location just beneath the cell surface.

    Division of labor.

    The colored bars identify the different WASp domains that may contribute to its functions, with the numbers indicating the amino acid positions.

    That would put WASp in the right position to relay incoming signals to the actin cytoskeleton, and there were already reasons to think that it might be doing just that: several features of blood cells from WAS patients. “Ages ago, [Harvard's] Fred Rosen thought that actin was not bundling properly [in T cells],” recalls Ochs. In work done more than 10 years ago, Rosen and his colleagues had noticed that WAS lymphocytes look strange, having too few of the cell-surface projections called microvilli that have actin at their core. The small, malformed appearance of the platelets in WAS patients also implied a problem with their cytoarchitecture. These malformed platelets are quickly destroyed by the spleen, leading to the patients' bleeding problems.

    But researchers didn't begin considering the possibility that WASp might have direct effects on the cytoskeleton until the beginning of this year. The clue came when three independent teams, which were led by Alan Hall of University College London, Arie Abo of Onyx Pharmaceuticals in Richmond, California, and Harvard's Kirchhausen and Rosen, found that WASp binds to a protein called Cdc42, a member of a superfamily of small GTP-hydrolyzing proteins (GTPases) that act as on-off switches for many cellular activities. In Cdc42's case, the activities it controls involve actin remodeling, such as the formation of filopodia, fingerlike projections from the cell surface that help cells move in response to external stimuli, and the orientation of T cells toward antigen-presenting cells in the early stages of an immune reaction. Because WASp mutations were known to disrupt actin-containing structures, the finding raised speculations, says Hall, that Cdc42 might be working in conjunction with the normal WASp protein to regulate actin filament formation.

    Since then, that idea has gathered support—and faced some contradictory evidence. The support came when the Hall and Abo groups found that Cdc42 binds WASp only when the GTPase is carrying GTP—the form that functions as the “on” switch for filopodia formation when Cdc42 is activated by an appropriate signal from a receptor. In addition, Abo's group provided evidence that WASp is localized in the cell with polymerized actin. When he and his colleagues genetically altered cells so that they expressed large amounts of WASp, they found that the protein clumped together with actin polymers in the cytoplasm in an interaction that is regulated by Cdc42. WASp apparently binds to the actin through a region on its carboxyl end that is near regions with some structural similarities to certain known actin-regulating proteins. Taken together, these findings suggest that when Cdc42 is activated by an appropriate signal, it binds WASp, which in turn binds actin, bringing about the remodeling needed for filopodia formation and other cytoskeletal effects.

    The latest results from Hall and his colleagues, published in the 1 November issue of Cell, indicate that this interpretation may not be correct, however. They found that fibroblasts, a kind of connective tissue cell that forms filopodia when injected with Cdc42, still produced them when they were injected with mutated Cdc42 proteins that could no longer bind WASp. “The simplest explanation,” says Hall, “is that the interaction [of WASp and Cdc42] is not required to generate filopodia.”

    Despite this apparent setback, researchers aren't giving up on the idea that the Cdc42-WASp interaction might be important. Cdc42 has other roles, such as transmitting stress signals to the genes in the nucleus, and WASp might be involved in that pathway in some way. And even if WASp doesn't cooperate with Cdc42 in regulating filopodia formation, there are other ways in which it could exert an effect on the actin cytoskeleton.

    In addition to the actin-binding region that Abo's team found on the protein's carboxyl end, WASp's amino end contains a sequence known as the pleckstrin homology (PH) domain because it resembles a sequence originally found in the protein pleckstrin. The PH domain of WASp, like that in pleckstrin itself, binds to a phospholipid in the cell membrane, produced when many different types of receptors are activated, that regulates actin filament growth. “The homology [to the PH domain] is very hard to detect by looking at the sequence,” says Kirchhausen, “but when you express [the domain from WASp] as a recombinant molecule, it binds tightly to [the phospholipid] PIP2.”

    Perhaps the best evidence implicating WASp in actin remodeling comes from yeast biologist Rong Li and her colleagues at Harvard University, who have cloned a yeast counterpart of the WASp gene. Inactivating the yeast gene, says Li, disrupts the normal patchy distribution of actin just beneath the yeast cell surface and at sites where new yeast cells bud off during that organism's asexual reproduction. In yeast cells lacking a working WASp gene, “the [subsurface] actin was not in patches but more like cables that permeate the cytoplasm,” Li says. The yeast WASp mutants are also defective in membrane growth and cell budding, both defects that could be explained by faulty actin organization. Abo's group, in collaboration with Mara Duncan and David Drubin at the University of California, Berkeley, has also observed an actin defect in yeast with an inactivated WASp gene.

    Growing connections

    Even as the case for WASp's actin connection builds, other work is linking the protein to the cell's growth-signaling pathways. In 1995, for example, Keith Robbins and colleagues at the National Institute of Dental Research in Bethesda, Maryland, noted that WASp binds to a protein called Nck, which is one of the first links in the chain of molecules that convey signals from certain growth-factor receptors into the cell. These receptors are called receptor tyrosine kinases because their intracellular portions are kinase enzymes that add phosphate groups to tyrosine residues in proteins, including the receptor proteins themselves, when the receptors are activated.

    To do its job, Nck interacts with the kinase receptor through one of its domains, called SH2, but only when the receptor is in its activated, phosphorylated form. Through another of its domains, called SH3, Nck binds to WASp. Nck might therefore be a conduit for conducting signals between growth-factor receptors and WASp to influence the actin cytoskeleton when the cell mobilizes itself to begin proliferating.

    And Nck might not be the only connection between WASp and the cell's growth pathways. In the last few months, a collaboration between several groups in London and another between Boston-based researchers reported that WASp, through its proline-rich domain, also binds to two non-receptor tyrosine kinases that are thought to be links in intracellular signaling pathways. One of these, called Fyn, is related to the so-called Src kinase, which has been implicated in regulating both the actin cytoskeleton and cell division. Fyn itself may be involved in transmitting the T cell receptor signals that tell the immune cells to proliferate in response to antigen activation, although the evidence is controversial.

    The other WASp-binding kinase, Itk, may help relay signals that trigger T cells to make interleukin 2, which sets in motion the T-cell proliferation and differentiation that are at the heart of a cellular immune response. “When the gene for Itk was inactivated in mice, T-cell development was defective,” explains Leslie Berg of Harvard University, who is studying the interaction between WASp and Itk. “The mice just didn't produce the [normal] numbers [of T cells].”

    WASp's connections may go even further. Berg's group has found in test-tube studies that WASp interacts through its proline-rich region with at least seven proteins that have SH3 domains. It is not yet certain which of these interactions might occur in the cell. But what is clear, Berg explains, is that “WASp is capable of binding several different [SH3-containing] proteins simultaneously.” This multiple binding, which could enable WASp to coordinate the interactions of several signaling pathways, has led, says Kirchhausen, to the emerging view that WASp is “an integrator allowing regulatory elements to talk together.”

    The functional significance of this linkage between growth-factor signaling and the state of the actin cytoskeleton is not yet clear. Because cells usually need to be attached to their substrate before they can proliferate, one idea is that actin organization is important for cell attachment and therefore also for proliferation.

    Researchers have a long way to go to work out just what WASp does and how its many roles might be related. But the answers may turn out to be important to the function of many cell types besides blood cells. In the October issue of the EMBO Journal, a group led by Tadaomi Takenawa at the University of Tokyo described a relative of the blood-cell WASp that is present in many tissues, reaching high concentrations in the brain. The functions of WASp in these other tissues are even more uncertain than they are in lymphocytes. But as Ochs says, “That we can see WASp in all kinds of cells is the most exciting thing. We still don't know all the answers, but one day soon undoubtedly someone will put the mosaic together.”

  2. Social Science

    Evolutionary Psychologists Look for Roots of Cognition

    1. Nigel Williams

    LONDON—Few researchers would dispute that our body's organs and the way they function are adaptations shaped by natural selection during evolution. But for decades, social scientists have regarded cognition and behavior as being exempt from this evolutionary shaping. No longer. Over the past 30 years, zoologists have been increasingly successful at explaining many specific animal behaviors, from mating to foraging strategies, as Darwinian adaptations and thus the products not of a learned skill but of natural selection. Buoyed by this success, a growing number of researchers are now putting human mental activity back under the evolutionary spotlight.

    This line of inquiry has led to a remarkable burst of collaboration, with biologists and psychologists trading methods and data, and 30 of them recently gathered at a workshop in London to compare notes. “There's an explosion of new work and a lot of excitement,” says behavioral ecologist Marc Hauser of Harvard University. “In many areas of psychological research, entirely new questions are being asked and new answers obtained based on adaptationist thinking,” says biologist Alex Kacelnik of the University of Oxford. Hauser and Kacelnik, for example, presented results at the meeting that imply deep evolutionary roots for the mental mechanisms behind numeracy—the ability to assess bulk or amount—and the tendency to discount future rewards in favor of present ones.

    As was evident at the meeting, the new approach still has major problems to overcome if it is to win over skeptical psychologists. “There's a tremendous amount of resistance to the idea that human behavior has a biological past,” says ecologist Anders Moller at the Pierre and Marie Curie University in Paris. The most important obstacle to overcoming that resistance: How can you show that a feature of human mental activity is an adaptation? One strategy is to compare how humans respond to ancient features of the environment with our response to newer features to which we would not have had time to evolve a response. Says psychologist Steven Pinker at the Massachusetts Institute of Technology, “We instinctively fear snakes, but we appear not to be afraid of fast cars, which are a real danger now. This suggests our emotions were shaped by our evolutionary environment, not the one we grew up in.”

    Another tack is to compare human and animal data to reconstruct how our cognitive features evolved. “We need to pull things out from the nonhuman animals to see which cognitive systems are old,” says Hauser. That is the strategy he and others are using to study numeracy, which is common to humans and some animals. Psychologists have found that infants aged less than a year display basic numeracy. For example, in work discussed at the meeting, University of Arizona psychologist Karen Wynn studied the response of infants aged 8 to 10 months when two similar dolls are placed behind a screen and one of the two dolls is sometimes missing when the screen is lifted. Wynn found that infants will look at the single doll for much longer, apparently puzzled by the disappearance of the second doll. This suggests infants can do simple arithmetic and keep track of objects before language competence develops.

    Hauser and his colleagues tested a colony of semi-free-ranging rhesus monkeys for the same skills with pairs of bright purple eggplants. They found that, like infants, these Old World monkeys looked longer at the impossible outcome—when two fruits were placed behind the screen but only one was present when the screen was removed. “From these results, adult rhesus monkeys and 8- to 10-month-old human infants appear to have comparable abilities for simple arithmetical computations,” says Hauser, implying that this skill is hardwired in the brain.

    Go figure.

    New research suggests that Rhesus monkeys can do simple arithmetic.


    The evolutionary programming may have taken place long ago, because Hauser observed similar responses in New World monkeys—a captive colony of cottontop tamarins. If the observed behavior is a common adaptation, it dates back to before the divergence of these primate groups, and hence long predates the emergence of humans. “Comparative studies using similar methods are vital to help study which cognitive abilities are evolutionary adaptations,” says Hauser.

    Oxford's Kacelnik is finding an even deeper evolutionary root for the human tendency to “discount” future events—trading off the value of opportunities in the future for rewards now. He and his colleagues carried out a study of discounting in captive starlings, using a test system that varied the size and delay of food rewards. They found a distinctive pattern of behavior in which the perceived value of future rewards diminished rapidly with time, on a hyperbolic curve that gave high value to short-term gains and maximized the rate of reward rather than its overall value.

    Kacelnik compared these results with a number of studies of human discounting and found the pattern of response was remarkably similar to that in the birds, suggesting that the tendency to maximize short-term rewards may have an evolutionary root and that “play today” has been a successful strategy in the past. Although similarity alone does not prove an evolutionary link—the same kind of behavior could have evolved independently in humans and starlings—Kacelnik believes such results will lead to further tests of animal and human behavior. “A key challenge for adaptationist thinking is to produce precise predictions about psychological mechanisms,” he says.
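    The discounting pattern Kacelnik describes can be illustrated with a toy calculation. The sketch below is an illustration only, not the study's model: it uses the standard hyperbolic form V = A/(1 + kD), the value V of an amount A delayed by time D with an assumed rate constant k, alongside an exponential curve for contrast, to show the preference reversal that is the signature of hyperbolic discounting.

    ```python
    import math

    # Toy illustration of hyperbolic vs. exponential discounting.
    # The functional forms are standard; the amounts, delays, and the
    # rate constant k are assumptions chosen for the example, not
    # values from the starling study.

    def hyperbolic(amount, delay, k=1.0):
        # Value drops steeply at short delays, then flattens out.
        return amount / (1.0 + k * delay)

    def exponential(amount, delay, k=1.0):
        # Constant proportional decay per unit delay, for contrast.
        return amount * math.exp(-k * delay)

    # A small reward soon versus a larger reward later.
    small_soon = hyperbolic(2.0, delay=1.0)   # 2 / (1 + 1) = 1.000
    large_late = hyperbolic(5.0, delay=6.0)   # 5 / (1 + 6) ~ 0.714
    assert small_soon > large_late            # "play today" wins up close

    # Push both options 10 time units into the future: the hyperbolic
    # curve now favors the larger, later reward, a preference reversal
    # that an exponential discounter with a fixed k never shows.
    assert hyperbolic(2.0, 11.0) < hyperbolic(5.0, 16.0)
    assert exponential(2.0, 1.0) > exponential(5.0, 6.0)
    assert exponential(2.0, 11.0) > exponential(5.0, 16.0)
    ```

    The steep early drop is what gives short-term gains their outsized value in both the starling data and the human studies Kacelnik compared them with.
    
    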

    But already, advocates of a Darwinian approach to psychology are emboldened by their successes. They are carrying the search for built-in psychological adaptations to questions ranging from the psychological differences between males and females and conflicts of interest between parents and offspring to morality and political behavior. Says psychologist John Tooby of the University of California, Santa Barbara, “People come factory-equipped. There's stuff built into brains.”

  3. Social Science

    Selling Darwinism in a Citadel of Social Science

    1. Nigel Williams

    LONDON—A small office among the clutter of buildings that comprise one of Britain's most prestigious institutes for the social sciences, the London School of Economics (LSE), has a surprising occupant: evolutionary biologist Helena Cronin. As co-director of the LSE's Centre for Philosophy of Natural and Social Sciences, Cronin could have quietly got on with her studies in the philosophy of biology, but given her unusual surroundings she decided it would be a shame not to spread the word.

    Getting a grip on the mind.

    Evolutionary biologist Helena Cronin of the London School of Economics.

    Credit: LSE

    Cronin, who is a champion of Darwinism as a way of understanding both biology and human affairs and author of an authoritative book on evolution, decided that the LSE might benefit from more exposure to Darwinian thinking. The social sciences, she figured, could gain a far better insight into how people behave and how societies work by supplementing their exploration of the cultural factors in human behavior with a look at how evolution shaped our minds and bodies.

    Three years ago, Cronin launched this campaign on her unsuspecting colleagues by organizing at the LSE an ambitious conference entitled “Darwin and the Human Sciences.” The conference was packed and attracted a distinguished group of speakers from scattered disciplines. And surprisingly, there has been no outright opposition from her colleagues, she says.

    Following this success, Cronin began a series of Darwin Seminars at the LSE last year. The series kicked off with psychologist Leda Cosmides of the University of California, Santa Barbara, talking about adapted minds, and evolutionary biologist John Maynard Smith of the University of Sussex speaking on the evolution of language. “We had no idea how much interest there would be and booked a room for 50 people,” she says, but found “people were sitting on the stairs queuing to get in. We hastily got a bigger room.”

    With 14 seminars now under her belt, covering topics as diverse as sexual attractiveness, medicine, and war, “we've never looked back,” she says. The latest seminar, held last month by psychologist Susan Blackmore of the University of the West of England, looked at how evolution may have shaped our psychology. This attracted some national radio publicity, and Cronin had to change venue rapidly and book the biggest room at the LSE, holding 350 people.

    Social scientists are taking note, says LSE psychologist Andy Wells. “Darwinian models are of great importance, but it is too early to judge how significant their impact will be,” he says. Sociologist Christopher Badcock of the LSE, who founded a course focused on a Darwinian perspective on the subject 10 years ago, says it was a bitter struggle to set up the course because of hostility from colleagues. But with the decline in popularity of Marxist approaches and other political developments, he says, “things are changing gradually, and there's huge interest now among the students.”

  4. Physics

    Improbable Particles—or Artifacts?

    1. Andrew Watson
    1. Andrew Watson is a science writer in Norwich, U.K.

    “When you have eliminated the impossible, whatever remains, however improbable, must be the truth,” wrote Sir Arthur Conan Doyle. Eliminating the impossible is just what researchers at CERN, the European Center for Particle Physics, are now trying to do. Science has learned that over the past year, one of the four huge detectors on the Large Electron-Positron Collider (LEP) has picked up 18 unusual events that don't fit into any known physics, yet are so tantalizing that, so far, physicists can't write them off.

    In each case, the ALEPH detector recorded four jets of mesons and similar particles spraying from high-energy collisions of electrons and antimatter positrons. The total mass of the daughter particles always added up to 106 billion electron volts (GeV), but the two pairs of jets made unequal contributions. This pattern could imply that a short-lived pair of dissimilar particles lived briefly after each collision before decaying to produce the jets. But existing physics has no candidates. “This large peak that ALEPH is seeing is completely unexpected from the Standard Model point of view,” says CERN theorist Carlos Wagner. Nor do the events fit neatly into popular extensions of the existing theory, such as a scheme called supersymmetry.

    Odder still, the other three detectors on LEP have seen nothing comparable. “It's somewhat bizarre that, if there is anything there, they are seeing nothing,” says ALEPH's spokesperson-elect Peter Dornan, an experimentalist based at Imperial College London. Now, with the help of colleagues from other detector groups, the ALEPH researchers are trying to determine which is more improbable: that they have made a mistake, or that the so-far-inexplicable events are real.

    ALEPH researchers noticed the first events in data taken in 1995, when LEP was colliding electrons and positrons at energies of about 130 GeV. Following a standard analytic procedure, the researchers scanned the debris of the collisions for distinct particle jets—the signatures of decaying massive particles. Certain four-jet events are a special prize, because they might signify the production and decay of pairs of hypothetical particles predicted by extensions of the Standard Model. But some of the four-jet events the researchers did find didn't fit any predicted pattern (Science, 26 April 1996, p. 474).

    True colors?

    In this view down the LEP beamline, four jets of particles (colors) spray from a collision in the ALEPH detector. The odd pattern of jet masses could point to new physics—or to an experimental artifact.


    At first, many physicists were inclined to dismiss these first few events as a fluke that would vanish with more data when LEP restarted late in 1996. But now that these new LEP runs, at energies of 161 GeV and 172 GeV, have been completed, “a few more events have come along. When it's all added together, this effect looks more significant, so this is what's now creating the excitement,” says ALEPH researcher John Thompson of the Rutherford Appleton Laboratory near Oxford in the U.K., who in mid-December presented the ALEPH events to a meeting of about 100 theorists there. Thompson and his colleagues think it's unlikely that they've made a mistake, but theorists say it's equally hard to accept that ALEPH is seeing new physics where none is expected.

    The Holy Grails of particle physics are the Higgs particle—responsible for the way particles acquire mass—and evidence for supersymmetry, a hypothetical higher symmetry in nature in which known particles would have massive partners. “If it were to have a Higgs-like interpretation, we would expect these four jets to have [a different] character,” says Thompson. Supersymmetry also appears to be a long shot. In the ALEPH events, the visible collision products seem to account for all the collision energy, “which flies in the face of supersymmetry,” says Thompson. “Except for some of the more obscure supersymmetry models,” he explains, supersymmetry predicts an energy shortfall.

    Wagner and his colleagues at CERN have already submitted a paper that explores one possibility: that the events signal a variant of supersymmetry in which particles with left- and right-handed “spin” can interact differently. They speculate that two supersymmetric electrons, having masses of 48 GeV and 58 GeV, could have briefly materialized in the collisions before decaying into the four jets. But Wagner too advises caution. “Right now, we cannot be sure that this is new physics,” he says.

    The LEP experimental groups have joined forces in an attempt to resolve the matter, looking for features of the detectors that might explain why ALEPH sees something while the others do not. They are also examining ALEPH's data processing, including the algorithms that pick out the jets. So far, says Dornan, “we don't have an algorithm that will kill it.”

    What may finally settle the issue is more data, which will come when LEP begins its new runs in May 1997. In the meantime, physicists face a frustrating waiting game. As Frank Close of the Appleton Laboratory puts it, “You can't tell yet whether this is the emergence of a signal, like the tip of an iceberg, or whether it's a small piece of ice that's going to melt away.”

  5. Materials Science

    Researchers Construct Cell Look-Alikes

    1. Robert F. Service

    From cells to shells, biological systems are masters of organization, assembling molecules into structures of ever larger sizes. Scientists looking to imitate this talent have had little trouble getting molecules to arrange themselves into the simplest components—for instance, coaxing layers of fat molecules, or lipids, to curl into tiny spheres, called liposomes. Yet, when it comes to assembling complex structures, biology leaves the imitators behind. But now, scientists from the University of California, Santa Barbara (UCSB), have displayed more than a little organizational prowess, assembling groups of lipid molecules into structures resembling cells, with an outer membrane encasing a series of vesicles.

    The new work, which was presented last month at the Materials Research Society meeting in Boston, “is a very nice approach to making hierarchical materials,” says David Grainger, a chemist at Colorado State University in Fort Collins. The cell look-alikes, dubbed vesosomes, may also boost efforts to use lipid spheres for delivering drugs to tumors and other tissues, says Theresa Allen, a drug-delivery specialist at the University of Alberta in Edmonton. The spheres deliver the drugs as they leak through lipid membranes. Packaging the drugs inside two membranes could slow the release of the drugs, lengthening the time between injections for patients.

    To create the vesosomes, the UCSB researchers—materials scientist Joseph Zasadzinski and graduate students Scott Walker and Michael Kennedy—took a two-stage approach. First, they built and grouped together small lipid spheres, then shrink-wrapped the groups in an outer lipid membrane. The first part was easy. Researchers have been making liposomes for years by adding lipids to water and then blasting the solution with sound waves, among other techniques, to induce the fat molecules to assemble into spheres.

    Tethering together a cluster of the minispheres was trickier. Liposomes don't usually group together; like charges on their surfaces, for example, often push them apart. So the researchers engineered special, two-part chemical linkers into the outer surface of the liposomes. First, they took some lipid molecules and attached one half of the chemical linker—a small, organic molecule known as biotin. Next, they mixed these with undoctored lipids. When the lipids then assembled into spheres, each had biotin molecules poking out of its surface. Then the scientists spiked the mix with the second half of the chemical linker—streptavidin. Each streptavidin can bind four biotins. This multiple binding drew free vesicles together into big aggregates. They were so large, in fact, that to package them, the team had to cut them down to size by forcing them through an ultrafine filter. The result: tethered groups of liposomes measuring 0.3 to 1 micrometer across.

    Tiny bubbles.

    Vesicles packaged inside a fatty membrane.


    To shrink-wrap the groups, Zasadzinski and his colleagues again used a two-stage process, first linking the liposome groups to the shrink-wrapping material and then causing it, through chemical sleight of hand, to wrap around the liposome groups. For their wrapping material, the researchers used lipids organized into a different form: sheets rolled up into tiny cylinders. Researchers have been coaxing lipids into cylinders as well as liposomes for years, but those made by the UCSB researchers differed in one key way: They engineered biotin and streptavidin linkers into the cylinders' surfaces. So when the researchers stirred up a soup of cylinders and liposome groups, the linkers again drew the structures together.

    The final challenge was getting the cylinders to unfurl so the carpetlike sheets could form large sacs around the groups of smaller spheres. To pull this off, the team loosened some of the calcium bonds holding the cylinders together by adding to the mix a calcium-grabbing compound—ethylenediaminetetraacetic acid. As the cylinders unroll, about 15% naturally wrap themselves around neighboring aggregates, reports Zasadzinski.

    Currently, the researchers are trying to improve the efficiency of the shrink-wrapping process. They also plan to see whether their two-membrane vesosomes do, in fact, release encapsulated drugs more slowly than do single-membrane liposomes. Allen notes, however, that as drug deliverers, vesosomes have a few drawbacks. For one, streptavidin is a bacterial protein that could trigger an immune response if injected into a person's bloodstream, she says. Also, she adds, at about a micrometer across, today's vesosomes are big enough that they would be cleared quickly from the bloodstream by filtering mechanisms in the liver and spleen.

    Zasadzinski says, however, that it should be fairly easy both to replace the streptavidin with nonimmunogenic compounds and to produce vesosomes tiny enough to remain in circulation. If he's successful, drug delivery experts may soon attempt their own bit of advanced biomimicry.

  6. Neuroscience

    New Knockout Mice Point to Molecular Basis of Memory

    1. Wade Roush

    Shakespeare's Macbeth, seeing his mad, guilt-plagued wife suffer, wondered why her doctor could not “Pluck from the memory a rooted sorrow, / Raze out the written troubles of the brain?” Sorrows can't be surgically excised, but recently biologists have accomplished something close: They've erased the ability to remember from the brains of laboratory mice, by plucking out or adding key proteins in particular clusters of brain cells. These feats at last offer direct confirmation of the reigning theory of how we remember and show how molecular changes affect the patterns of electrical activity of which memories are made.

    The work also showcases new methods for probing the workings of the mind. One set of studies, described in three related papers in the 27 December issue of the journal Cell by a team at the Massachusetts Institute of Technology (MIT) led by Nobel Prize-winning biologist Susumu Tonegawa and neuroscientist Matthew Wilson, uses an exotic new gene-splicing technique to produce a new kind of “knockout” mouse that lacks a certain gene. Knockouts are standard in biology these days, but these mice are different: The gene in question, which codes for a receptor for a key neurotransmitter—glutamate—was deleted in only one small group of cells in a brain region called the hippocampus, rather than in every cell in the mouse's body. A fourth paper in Cell, and one in the 6 December issue of Science (p. 1678) from an independent team, describe a complementary approach that selectively enhanced, rather than deleted, a gene for a particular enzyme (called α-calcium-calmodulin-dependent kinase II, or CaMKII). This study, led by Eric Kandel and Mark Mayford of Columbia University and Robert Muller of the State University of New York (SUNY) Downstate Medical Center in Brooklyn, had a less precise effect on the brain than did the Tonegawa group's method, however.

    Both manipulations disrupted patterns of neuronal firing in the hippocampus and impaired the animals' ability to learn their way around mazes. Not only do these results lend substantial support to the prevailing theory linking molecular events in the hippocampus to spatial learning; they are also the first to probe memory at all levels in a single set of experiments, from molecular changes through altered patterns of neuronal firing to impaired learning. “It's a dream of neurobiologists to understand some interesting cognitive phenomenon like a memory from the molecular level right up through behavior,” says neurobiologist Charles Stevens of the Salk Institute in La Jolla, California. “The articles in Cell are a big step in that direction.” The selective knockout technique also promises to generate a new wave of progress, as other researchers create their own favorite mouse strains, including those linked to common neurological diseases. Says neuroscientist Michael Stryker of the University of California, San Francisco: “It points the way forward to what the whole field will be doing in the future.”

    Researchers have long thought that the mammalian brain stores new memories by long-term strengthening of the electrochemical signaling between neurons. And many studies had implicated CaMKII and a particular receptor for the neurotransmitter glutamate, the N-methyl-D-aspartate (NMDA) receptor, as crucial players in this process. When glutamate released by a transmitting neuron binds to NMDA receptors on an adjacent, receiving neuron, the receptors open channels in the cell membrane and allow calcium ions to flood in. These ions convert CaMKII into its active form and unleash a biochemical cascade that heightens the receiving neuron's sensitivity to subsequent signals and increases the amount of current it sends on to other neurons.

    A key site for this memory-building process, known as long-term potentiation (LTP), is thought to be the hippocampus, because injuries there produce severe amnesia. This idea is supported by earlier studies that used drugs to block NMDA receptors, and also by work from the Tonegawa and Kandel labs. The two labs are friendly rivals, and a few years ago they independently created the first knockout mice lacking CaMKII or other key neuronal molecules (Science, 10 July 1992, p. 162).

    These strategies for blocking the molecular events in LTP created learning impairments. But both were clumsy—the research equivalent of treating a hangnail by amputating one's hand—because they altered the entire brain throughout the life of the animal, including during embryonic development. So researchers couldn't be sure that the memory deficiencies in the altered mice weren't the product of developmental defects or of changes outside the hippocampus.

    In the new experiments, the MIT team sidestepped that problem with a clever new method, relying on the natural variability in the expression of the CaMKII gene to delete the NMDA receptor gene from discrete parts of the mouse brain. Tonegawa lab member Joe Tsien inserted the gene encoding a DNA-splicing enzyme called Cre in such a way that it was under the control of a “promoter” or on-off switch for the CaMKII gene. This created a collection of mouse strains that expressed Cre in different combinations of tissues, mimicking the pattern of CaMKII expression. “Very fortunately,” as Tonegawa puts it, three strains expressed Cre only in certain cells (pyramidal cells) in a particular region of the hippocampus—and only about 3 weeks after birth, after all the normal hippocampal synaptic connections were built.

    To affect the NMDA receptors, the group then mated mice from one of these strains to other specially engineered mice that carried twin DNA sequences just before and after the gene encoding the NMDA receptor. In some of the progeny from this match, the Cre enzyme recognized the twin sequences and recombined them, in the process snipping out the NMDA receptor gene in between and creating a mouse in which only the pyramidal cells of the hippocampus lacked the receptors (see photo).

    Pranks for the memories.

    A clever new method deleted key receptors in only one small part of the brain (purple).


    This two-step method is likely to spread like wildfire as neuroscientists adapt it to study the molecular defects underlying such disorders as Alzheimer's and Parkinson's diseases. “This is a major step along the road from global gene knockouts in the early embryo to spatial and temporal control over gene expression,” says neurobiologist Steven Hyman, director of the National Institute of Mental Health. And it produced striking results in the NMDA knockouts.

    To see how the absence of the NMDA receptors might affect memory, the MIT team analyzed brain slices from their new knockouts, using equipment that artificially shocks neurons and records their responses. After multiple shocks, the pyramidal cells exhibited no increase in the amount of electrical charge they transmitted, showing that intact NMDA receptors are key to LTP. Even more convincing was the animals' behavior: The team compared adult knockout mice with their normal littermates in a 120-cm-diameter swimming pool. The knockouts were much slower to find a submerged platform on one side of the pool and less able to remember the platform's position later. Concludes Tonegawa, “These mice were basically incapable of acquiring spatial memory.”

    Underlying this handicap, the team found, was an altered pattern of neuronal firing. Researchers already knew that spatial learning in rodents involves “neural maps” of firing in the hippocampus. A given cluster of cells, for example, will fire only when a mouse is in a specific spot, say, the southwest corner of a box. That spot is called the cells' “place field.” Other cells fire when the mouse is in other locations. Together, overlapping place fields are thought to create a kind of internal map, and researchers had suspected—but never proven—that LTP is what sustains place fields over time.

    Now, the MIT team has confirmed this suspicion by inserting electrodes into the brains of living mice and watching pyramidal cells fire as the animals explored variously shaped cages. Place fields in the transgenic mice were less compact and focused than in normal mice, likely accounting for the difficulty the mice had in navigating a new environment.

    Meanwhile, working in parallel, the Columbia-SUNY team has come up with a similarly dramatic link between molecular changes in the hippocampus, place fields, and memory. They disrupted hippocampal LTP in a different way, using a combination of promoters that was less tissue specific. And they enhanced, rather than eliminated, expression of the gene encoding CaMKII in certain brain cells, including the pyramidal cells. Constant activation of CaMKII, they theorized, would disrupt learning. Indeed, their mice also showed spatial memory deficits and less focused place fields. Kandel's group also found that the fields were less stable over time.

    These findings, linking molecular, neuronal, and behavioral abnormalities, are moving neurobiologists closer than ever to an understanding of the molecular basis of memory, researchers say. “We're just at the beginning of making broad links between lower, molecular levels of analysis and higher levels of cognition and behavior, but these are certainly important steps,” says Daniel Schacter, a cognitive neuroscientist at Harvard Medical School and author of the new book Searching for Memory. By offering data at every level, the new studies are likely to prove memorable themselves.

  7. Computer Science

    Hedging Bets on Hard Problems

    1. Charles Seife
    1. Charles Seife is a science writer in Scarsdale, NY.

    It's a dilemma familiar to every Internet addict. Your Web browser insists that it is connected to a site, but as it happily chugs away, nothing happens. Do you wait, hoping that the data will arrive in a few more seconds, or do you give up and try again later? It's a gamble either way, because traffic on the Internet fluctuates from second to second. On page 51, computer scientists at the Xerox Palo Alto Research Center have adopted the practices of Wall Street in an approach to solving this and many other problems in computer science that entail the same kind of uncertainty.

    Just as investors try to improve their returns by splitting their money among a number of investments, the Xerox team improves a computer's performance on problems ranging from factoring a large number to minimizing a complicated error function by splitting its attention among a number of programs. “Using a ‘portfolio’ is a really neat way to improve [an algorithm's] performance,” says Bernardo Huberman, one of the researchers. Agrees Hal Varian, an economist and the dean of Information Management at the University of California, Berkeley, “It's very clever.”

    The algorithms that Huberman and his colleagues are trying to speed up rely on a gamble to solve extremely hard problems. These “NP-complete” problems take an enormous amount of effort to work through, the effort rising exponentially with the size of the problem. But the solutions are crucial in many areas of science: Finding the global minimum of a complicated function—an NP-complete task—is essential for training a neural net or predicting the shape of a folding protein.

    These problems are so difficult that the best a computer can do is to wander around methodically in search of an answer—a so-called Las Vegas algorithm. These algorithms often begin their calculations from random “seeds.” If this starting position is reasonably close to the correct solution, the algorithm breezes through the problem. But a bad seed can send the algorithm up the wrong path, and the computer may grind away fruitlessly for hours.

    In hopes of boosting a computer-science version of “return”—the speed with which a computer solves a hard problem—Huberman and his colleagues decided to diversify a computer's algorithm portfolio. They tested the approach on a well-known NP-complete problem: coloring a graph. The computer is given a set of circles connected by lines and a certain number of colors; its task is to shade each circle so that no two circles connected by a line share the same color.

    Harder than it looks.

    Solving a graph-coloring problem takes exponentially longer as the problem grows.
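    The seed-dependent search the article describes can be sketched in a few lines of Python. Everything here is illustrative rather than the Xerox team's actual code: the test graph (a "crown" graph, which is 2-colorable but can trip up a greedy colorer given an unlucky vertex order), the restart loop, and all function names are assumptions made for the sketch.

    ```python
    import random

    def crown_graph(n):
        """Edges of a crown graph: vertices a_i = 2i and b_j = 2j + 1,
        with an edge a_i-b_j whenever i != j. Two colors always suffice,
        but a greedy colorer with a bad vertex order may need many more."""
        return {(2 * i, 2 * j + 1) for i in range(n) for j in range(n) if i != j}

    def greedy_color(edges, n_vertices, k, rng):
        """One randomized greedy pass: visit vertices in a random order and
        give each a random color not used by an already-colored neighbor.
        Returns a coloring dict, or None if some vertex runs out of colors."""
        colors = {}
        order = list(range(n_vertices))
        rng.shuffle(order)
        for v in order:
            used = {colors[u] for u in colors
                    if (u, v) in edges or (v, u) in edges}
            free = [c for c in range(k) if c not in used]
            if not free:
                return None  # a bad seed sent the search up the wrong path
            colors[v] = rng.choice(free)
        return colors

    def las_vegas_color(edges, n_vertices, k, seed):
        """Restart the greedy pass until it succeeds. The answer is always
        correct; only the number of tries (the run time) depends on the seed."""
        rng = random.Random(seed)
        tries = 0
        while True:
            tries += 1
            coloring = greedy_color(edges, n_vertices, k, rng)
            if coloring is not None:
                return coloring, tries

    edges = crown_graph(4)
    coloring, tries = las_vegas_color(edges, 8, k=3, seed=1)
    assert all(coloring[u] != coloring[v] for u, v in edges)
    print(f"valid 3-coloring found after {tries} attempt(s)")
    ```

    A lucky seed lets the greedy pass succeed on its first attempt, while an unlucky one forces several restarts; that run-time variability is exactly what makes these algorithms a gamble.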

    The Xerox team made a portfolio of two copies of a graph-coloring algorithm. The computer then split its attention between the two “investments.” In the same way that a gambler might put $98 on a favored horse and $2 on a long shot, one copy of the algorithm received most of the computer's attention, while the other, with a different random seed, got only a little. Once in a while, the less favored copy got lucky and solved the problem quickly. By adjusting the computer's stake in each investment, Huberman and his colleagues found a balance where the benefit from the occasional long-shot jackpot more than compensated for the loss of attention to the favored algorithm. Not only was the two-copy portfolio 22% faster, on average, than a single copy of the algorithm, but the risk of a long wait for an answer dropped by 10%.
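    The portfolio idea itself can be captured in a toy simulation. This is a model, not the Xerox experiment: a heavy-tailed (Pareto) random draw stands in for a real solver's seed-dependent run time, and the 90/10 attention split, the seed scheme, and the names are all illustrative assumptions.

    ```python
    import random
    import statistics

    def run_length(seed, alpha=1.5):
        """Run time of one Las Vegas run under a given seed, modeled as a
        heavy-tailed Pareto draw: most runs are quick, a few drag on."""
        return random.Random(seed).paretovariate(alpha)

    def single_copy(seed):
        """Baseline: one copy of the algorithm gets all the CPU."""
        return run_length(seed)

    def portfolio(seed_a, seed_b, share_a=0.9):
        """Two copies with different seeds time-share the CPU (share_a vs.
        1 - share_a); the portfolio finishes as soon as either copy does."""
        t_a = run_length(seed_a) / share_a        # favored copy, slightly slowed
        t_b = run_length(seed_b) / (1 - share_a)  # long shot, mostly starved
        return min(t_a, t_b)

    trials = 50_000
    singles = [single_copy(s) for s in range(trials)]
    ports = [portfolio(trials + 2 * s, trials + 2 * s + 1) for s in range(trials)]

    # The long shot rarely wins, but when it does it rescues the worst runs,
    # so the tail of the run-time distribution (the risk of a very long wait)
    # shrinks even though the favored copy runs a bit slower.
    q99_single = statistics.quantiles(singles, n=100)[98]
    q99_port = statistics.quantiles(ports, n=100)[98]
    print(f"99th-percentile wait, single copy: {q99_single:.1f}")
    print(f"99th-percentile wait, portfolio:   {q99_port:.1f}")
    ```

    In this toy model the portfolio's 99th-percentile wait comes out well below the single copy's, mirroring the reduced risk the article reports; the exact 22% and 10% figures depend on the real algorithms' run-time distributions, which the sketch does not attempt to reproduce.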

    The Xerox researchers predict that they can extract a more dramatic speedup by taking another page from Wall Street. Just as an investor can mix stocks and bonds, the computer portfolio approach can combine several different algorithms for solving the same problem. If those algorithms are complementary—one does very well in exactly the cases where a second fails, and vice versa—the portfolio would have stunning performance.

    Huberman and the Xerox team are now experimenting with changing the amount of time the computer spends on each investment on the fly, much as an investor rebalances the proportion of investments in his portfolio, based on the various time scales over which each algorithm has the best chance of getting an answer. “We get a five- to 10-times speed improvement,” claims Huberman.

    They are also working on bringing the approach to the aid of frustrated Web surfers. Once the team unravels the causes and the distribution of Internet delays, they hope to develop a portfolio strategy that will lay the groundwork for a multiprocess browser that will speed up access to the Web. Huberman and his colleagues aren't predicting which of these research directions will pay off, but they know the benefits of diversifying. Says Berkeley's Varian, “The research mimics the algorithm.”
