News this Week

Science  12 Dec 1997:
Vol. 278, Issue 5345, pp. 1879
  1. PHYSICS

    High-Temperature Mystery Heats Up

    1. Robert F. Service

    Take one of the new high-temperature superconductors, heat it up a bit, and the physics gets downright strange. The eventual theory of these materials will have to explain this strangeness.

    From the moment they were discovered 11 years ago, high-temperature superconductors have puzzled physicists by their weird behavior. But the more researchers learn about these strange materials, the odder they seem to get. These complex and brittle ceramics can conduct electrons without the least bit of electrical friction if they are chilled to temperatures about 175 degrees Celsius below freezing—positively balmy compared to the frigid conditions that metals require to perform this feat. But heat these copper-oxide-based materials, known as cuprates, up a bit to the point where they no longer superconduct, and their behavior becomes even more inexplicable. Their electrical and magnetic properties no longer resemble those of metals, other ceramics, or any other known materials. High-temperature superconductors enter a realm of physics all their own.

    Persistent gap. The gap signature of superconductivity hangs around in underdoped cuprates as a pseudogap above Tc. SOURCE: B. BATLOGG/BELL LABS

    Today, much of the focus of recent experiments on cuprates has shifted to these higher temperatures—a regime that physicists refer to, apparently with no irony intended, as the “normal” state. “The normal state is one of the most bizarre animals we have come across in a long time,” says Gregory Boebinger, an experimental physicist at Lucent Technologies Bell Laboratories in Murray Hill, New Jersey. In this state, cuprates behave in some ways like good metallic conductors and in others like nonconducting insulators. “What's not clear is what gives rise to the strange behavior,” he says.

    Initial reports of these weird phenomena started trickling in years ago. But only recently have researchers amassed enough experimental results to identify behavior patterns that cut across the dozens of related species of cuprate superconductors. The hope is that these new behavior patterns will eventually help researchers understand why cuprates are superconductors at relatively high temperatures. They just don't seem to obey the theories that explain the properties of low-temperature metallic superconductors, and most physicists believe an entirely new mechanism must be at work. But theorists have yet to come up with a hypothesis that can make rigorous, testable predictions. “It's still heavy mystery time” for high-temperature superconductors, as Boebinger puts it.

    Any theory that attempts to explain high-temperature superconductivity must also explain the many oddities of the normal state, says Bertram Batlogg, a physicist at Bell Labs. Adds Herb Mook, an experimental physicist at Oak Ridge National Laboratory in Tennessee: “I don't believe we can understand the mechanism of high-temperature superconductivity before we understand the normal state.”

    Early warnings

    It didn't take long for physicists to realize that the cuprates were strange beasts. According to the standard superconductivity model, known as BCS theory, these materials shouldn't superconduct at all. American theorists figured out in 1957 that the key requirement for superconductors to conduct electricity without resistance is for electrons to overcome their natural repulsion for one another and surf through the material in pairs. Traditional metallic superconductors achieve this at very low temperatures, because as one electron moves through the metal, it creates what amounts to a vibrational wake that draws another electron along in its path. Raise the temperature, however, and the extra kinetic energy of atoms in the material kicks the second electron out of the wake of the first, and superconductivity is destroyed. Researchers soon discovered that electrons in the superconducting ceramics also travel in pairs, but the much higher temperatures mean that something other than a vibrational wake must be holding them together.

    Soon after this realization, researchers around the world began seeing odd behaviors in the materials above their superconducting temperature. In all superconductors there is a critical temperature, or Tc, which marks the transition between their superconducting state and the normal state, where electrons glide about solo rather than in pairs. As the temperature of a standard metallic superconductor is raised above its Tc, it displays a characteristic pattern of conductivity changes: Its efficiency at conducting electrons steadily drops before finally leveling out as the added heat jostles the electrons, causing them to give up more and more energy to electrical resistance. Yet early experiments on the high-temperature superconducting (HTS) materials showed that as they are warmed above the Tc, their resistance first shoots up dramatically and then settles into a steady rise. Ceramics, it turns out, become very bad conductors in a hurry.

    Even stranger, their conductivity is worse in some directions than others; a normal metal conductor, in contrast, shuttles charge equally well in all directions. The explanation seems to lie in cuprates' crystal structure. They are complex crystalline compounds composed of four or more elements arrayed in repeating unit cells. Each cell is like a layer cake with copper-oxygen planes interleaved with planes of other atoms. In the late 1980s, a number of groups found that the cuprates conduct electrical charges up to 10,000 times more readily along the copper-oxygen planes than between them. “Within the plane, it conducts like a metal, but perpendicular [to it], it looks like an insulator,” says Patrick Lee, a theoretical physicist at the Massachusetts Institute of Technology (MIT) in Cambridge. “It was very weird.”

    Odd magnetic behaviors also showed up above the Tc. For instance, when researchers measured how the motion of electric charges in the material is influenced by a magnetic field—a property known as the Hall effect—they saw the magnitude of this effect steadily change as the temperature climbed above the Tc. Normal metals, by contrast, show no such change with temperature.
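
    A textbook benchmark (our addition, not the article's) shows why this is surprising. In a simple metal with a single kind of charge carrier, the Hall coefficient measures nothing but the carrier density n, which is fixed by the material's chemistry and so cannot drift with temperature:

        R_H = \frac{1}{n e}

    where e is the charge of the carriers. A Hall coefficient that changes steadily with temperature therefore hints that the effective number of charge carriers in a cuprate is itself changing as the material warms.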

    New experiments continue to expand the list of cuprate quirks. Last month, Mook, along with colleagues in Denmark, Canada, France, and the United Kingdom, reported that the degree of tiny magnetic fluctuations in cuprates also changes drastically as the materials are warmed above the Tc (Science, 21 November, p. 1432). In all of the cuprates, copper atoms act like tiny bar magnets that can point up or down. In the superconducting state, these magnets tend to align themselves alternately up, down, up, down. Above the Tc, this pattern breaks down. But when Mook and his colleagues slowly cooled a cuprate made from lanthanum, strontium, copper, and oxygen, they found signs that the amount of magnetic ordering in the material increased drastically just before it reached the Tc.

    That's “very interesting,” says University of California, Los Angeles, physicist Sudip Chakravarty. Researchers had suspected for some time that fluctuations in the magnetic orientation of copper atoms may be involved in pairing electrons when the material superconducts. If so, these fluctuations should be present above the Tc, but so weak that they are overwhelmed by thermal agitation. But previous studies had failed to find evidence of such magnetic fluctuations above the Tc. “[This] experiment is surprising because it does find a strong signal of magnetic excitation” above the Tc, says Chakravarty, although all involved acknowledge that this by no means settles the question about what causes electrons in the cuprates to pair up.

    Into the gap

    Perhaps the most striking oddity of all in the cuprate repertoire shows up when the materials are subjected to a little doctoring. All pure cuprates are electrical insulators, because their electrons are tied to atoms and are unable to roam about. To transform the cuprates into conductors, researchers add tiny quantities of substitutes—known as dopants—such as barium or strontium in place of a few lanthanum atoms. The overall crystalline structure of the material stays the same, but its electronic properties change dramatically. Barium donates one fewer electron than lanthanum to the electron-hungry oxygen atoms. The result is that oxygen atoms end up with electron vacancies, known as holes. These holes act like positive charges that can hop from one oxygen to the next, and so conduct electrical charge. The level of doping is crucial: Replacing about 15% of the lanthanum atoms—or their equivalents in other recipes—results in the highest possible Tc for virtually all types of cuprates. Stuff in another 5%, and they do not superconduct at all. Go the other way and reduce the doping to 10%, and the physics gets very peculiar.
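
    A quick charge count shows why a few percent of dopant matters so much (standard bookkeeping, added here for illustration; the article itself gives only the percentages). In the lanthanum compound, written La_{2-x}Sr_xCuO_4, every Sr^{2+} ion sitting on a La^{3+} site supplies one fewer electron, leaving behind one hole. The number of holes per copper atom, p, therefore tracks the doping level directly:

        p \approx x, \qquad T_c \ \text{peaks near} \ p \approx 0.15

    so shifting x by a few hundredths, a seemingly tiny chemical change, sweeps the material across its entire superconducting range.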

    These “underdoped” cuprates are less effective as superconductors, yet they continue to display a unique electronic signature of superconductivity even when they are warmed above their Tc. Quantum mechanics dictates that electrons can exist only at certain well-defined energy levels. In a solid, these levels are bunched together in bands separated by gaps of “forbidden” energies. When conventional metallic superconductors are cooled below the Tc, a gap of forbidden energies opens up: A pair of electrons can only absorb an amount of energy larger than the binding energy of the pair—anything less is not absorbed. This gap is a signature that electrons are paired and are surfing through the material together. As the material is warmed back up toward the Tc, this gap begins to shrink, and at the Tc and above, it disappears altogether. At that point, the material stops superconducting and behaves like a normal metal.
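
    In symbols (standard BCS bookkeeping, supplied for reference rather than drawn from the article): if \Delta(T) denotes the gap, breaking a pair costs at least 2\Delta, and in the weak-coupling BCS limit the zero-temperature gap is tied directly to the transition temperature:

        E_{\text{absorbed}} \ge 2\Delta(T), \qquad 2\Delta(0) \approx 3.5\,k_B T_c, \qquad \Delta(T) \to 0 \ \text{as} \ T \to T_c

    The pseudogap puzzle described next is precisely that in underdoped cuprates the gap refuses to vanish at the Tc.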

    But underdoped cuprates, it turns out, obey their own laws. These materials cease superconducting when warmed above their Tc, just like conventional BCS superconductors, but the gap that's the signature of superconductivity and paired electrons in metals remains. There's an odd quirk, however: Some electrons can now reside in the previously forbidden zone. Like ledges on a cliff face, a few energy levels develop in the gap, allowing some electrons to sit there. Still, the number of electron energy states in this region—the ledges—remains well below that found in metallic superconductors warmed above the Tc. The ledge density—or in this case the density of electronic states—is all wrong.

    Over the past few years, a number of experimental techniques have probed this partial gap, or “pseudogap,” as it has come to be known. But these experiments had largely left open the question of whether the presence of the pseudogap above the Tc is directly related to the formation of the superconducting gap below it. Experiments over the past year, however, have begun to provide an answer.

    In 1996, a team of Stanford University researchers led by physicists Anthony Loeser and Zhi-Xun Shen first used a technique known as angle-resolved photoemission to show that the size of the energy gap in both the superconducting state and the normal state varied according to the direction in which charges are flowing within the copper-oxide planes (Science, 19 July 1996, p. 325). The gap was large in some directions and small in others; when mapped out, the variations resembled a cloverleaf, a pattern characteristic of what physicists call a d-wave superconductor. Numerous experiments have shown this cloverleaf pattern for the superconducting gap (Science, 19 January 1996, p. 288), but the Stanford team was the first to show that it applies to the pseudogap in the normal state as well. That was a “strong indication” that the physics underlying the pseudogap and the superconducting gap is the same, says Bell Labs' Batlogg.

    In an upcoming issue of Physical Review Letters, a Swiss team reports an equally strong link. In this case the team, led by University of Geneva physicists Øystein Fischer and Christophe Renner, used a different technique known as electron tunneling to show that the superconducting gap and the pseudogap also have the same magnitude—the same amount of energy must be added to the materials to overcome the superconducting gap and the pseudogap alike. “That means that the pseudogap is intimately related to the superconducting gap,” says Fischer.

    Start making sense

    If the two gaps are related, and electron pairing causes the superconducting gap, what causes the pseudogap? “It's still not clear,” says Tom Timusk, an experimental physicist at McMaster University in Hamilton, Ontario, Canada. But another recent experiment may offer some insight. This experiment, carried out by physicist Girsh Blumberg of the University of Illinois, Urbana-Champaign, and colleagues in the United States and Japan, uses a technique called electronic Raman scattering to show the characteristic pseudogap in the normal state. But that's not all they see. Raman scattering is a technique whereby researchers fire photons at a material and watch how the light scatters off it. The difference in energy between the incident and scattered photons reveals the energy levels of the electrons in the material.

    When Blumberg and his colleagues fired photons at a cuprate made from bismuth, strontium, calcium, copper, and oxygen above the Tc, instead of producing the sharp peaks of discrete energy levels, the scattered photons showed a flat, smeared-out spectrum from low to high energies. This, says Blumberg, suggests that electrons in the cuprates interact with each other strongly, so that when incoming photons scatter from the material they excite not just one electron, but a whole group of electrons collectively.

    The researchers found a different picture when the energy difference between incident and scattered photons reached 75 milli-electron volts: A large single scattering peak now appeared. “There are different interpretations of what this means,” says Blumberg. “But my favorite is that superconducting [electron pairs] get established well above Tc, and their binding energy is 75 milli-electron volts.” Hence, photons that give up this much energy to the material do so by splitting pairs apart.
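
    A rough scale check (our arithmetic, not the researchers'): converting that binding energy into a temperature via E \approx k_B T gives

        T \approx \frac{0.075\ \text{eV}}{8.62 \times 10^{-5}\ \text{eV/K}} \approx 870\ \text{K}

    Room temperature corresponds to only about 25 milli-electron volts, so a 75 milli-electron-volt binding energy is at least consistent with pairs surviving far above any cuprate's Tc.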

    If so, this suggests that somehow electrons in the cuprates manage to pair up at temperatures well above the superconducting temperature, maybe even as high as room temperature. Pairing is essential, but not enough, for electrons to superconduct. For this they also need “coherence,” whereby they all travel together in step. But the extra heat present in the normal state probably causes the pairs to break apart almost as soon as they form. Thus, the pairs never get a chance to travel long distances, and superconductivity never arises.

    That explanation agrees well with theoretical ideas advanced by Victor Emery of Brookhaven National Laboratory in Upton, New York, and Steven Kivelson of the University of California, Los Angeles, who argue that the pseudogap in HTS ceramics arises because the holes in doped cuprates are not distributed uniformly through the material. Neutron scattering experiments suggest cuprates have alternating striped regions with and without holes. And Emery and Kivelson suggest that interactions between these regions cause electrons to pair up. But at temperatures above the Tc, energy from heat prevents these pairs from becoming coherent. And although the notion is still by no means proven, “it is the simplest idea out there,” says Timusk.

    It's certainly not the only one. Another theoretical camp, led by Philip Anderson of Princeton University and MIT's Lee, argues that the architecture of the cuprates forces the two fundamental properties of electrons—their charge and spin—to separate. Spins on neighboring electrons then pair up, even as charges go their own way. The pseudogap, says Lee, is essentially a signature of this spin pairing, which takes energy to split apart.

    Others propose that the behavior of the cuprates is tied to atomic-level magnetic fluctuations in the materials or a mixing of electronic excitations between oxygen and copper atoms. “The problem is not that there's no theory of high-temperature superconductivity,” says Boebinger. “There's too damn many of them.” And at this point, none of the theories is sophisticated enough to predict falsifiable properties of the materials. But Lee, Mook, and others believe that the host of new experiments on the normal-state properties of the cuprates is already beginning to put welcome constraints on theorists. “The challenge is to fit all those results together,” says Mook. “We're not there yet. But I think we'll make it eventually.”

  2. QUANTUM MECHANICS

    Teleportation Beams Up a Photon's State

    1. Andrew Watson
    1. Andrew Watson is a science writer in Norwich, U.K.

    For “trekkies,” being teleported from the bridge of the Starship Enterprise onto the surface of an alien world is still a dream. But at least in the quirky world of quantum mechanics, teleporting is now a reality. Anton Zeilinger and his team at the University of Innsbruck in Austria have shown that part of the spin orientation of a photon of light can be transferred to another photon, irrespective of the distance between them.

    Zeilinger says his team's work is “the first experimental demonstration of quantum teleportation”—the transfer of a quantum state from one particle to another, first proposed by IBM's Charles Bennett and his collaborators in 1993 (Science, 25 October 1996, p. 504). “The real interest of this is [that] it represents a new kind of information transfer,” says Bennett, of the Thomas J. Watson Research Center in Yorktown Heights, New York. Quantum teleportation could have applications in quantum computing, says Tony Sudbery of Britain's University of York: “constructing stable memories, protecting delicate quantum states, [and] communicating between quantum computers.”

    In today's computers, it is a simple matter to read off the digital 1s and 0s that are the two possible states of a computer circuit. In the quantum world, however, where 1 and 0 are labels for, say, the two spin states of a photon, taking the reading is a more interventionist act: It forces the system to adopt one of the two quantum states. Until then, it is some mixture of the two. If Alice—in quantum-speak, the person or circuit at one end of a quantum communication channel—wants to tell Bob (at the other end) about her photon, she has to take a measurement of it. The measurement forces it to be either a 1 or a 0, when what she really wants to tell Bob is about the mixed quantum state. But if computers based on quantum principles are ever to become a reality, they must be capable of moving quantum information around without ruining it, either by reading it before sending it or by sending photons through noisy, disruptive circuits.
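
    In the standard notation (a textbook aside, not part of the original article), Alice's unread photon carries a superposition

        |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

    and a measurement returns 0 with probability |\alpha|^2 or 1 with probability |\beta|^2, destroying in the process the amplitudes \alpha and \beta that Alice actually wanted to deliver.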

    Enter quantum teleportation. The key to the Austrian experiment, reported in this week's issue of Nature, is to create a pair of photon twins that are intimately related to each other. When photons from a laser are fired into certain crystals, a single photon can split into two identical twins. The quantum state of the parent photon must be conserved, so the states of the two offspring photons must together make up that original quantum state. In the language of modern quantum mechanics, the pair are “entangled,” linked through some invisible quantum web. If a measurement on one indicates, say, that its spin is up, the entangled twin is forced into the opposite state—spin down.

    Bennett and his collaborators realized that an entangled pair can serve as a vehicle for teleporting the state of a third photon, the “message” photon. In their scheme, Alice makes a combined measurement on one member of an entangled pair together with her message photon. This measurement does not tell her the state of her message photon, but it does entangle her two photons in such a way that they end up with opposite states. As a result of her measurement, Bob's photon becomes instantly primed so that when Alice tells Bob what to do to his photon, he will find that it is identical to the original message photon.
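
    The bookkeeping behind the trick can be made explicit (standard textbook algebra, using the notation above). Write the four joint “Bell states” of two photons as |\Phi^\pm\rangle = (|00\rangle \pm |11\rangle)/\sqrt{2} and |\Psi^\pm\rangle = (|01\rangle \pm |10\rangle)/\sqrt{2}. If photon 1 carries the message state |\psi\rangle = \alpha|0\rangle + \beta|1\rangle while photons 2 (Alice's) and 3 (Bob's) share the entangled state |\Psi^-\rangle, the three-photon state regroups exactly over the Bell states of Alice's pair:

        |\psi\rangle_1 |\Psi^-\rangle_{23} = \tfrac{1}{2}\big[\,
              |\Psi^-\rangle_{12}(-\alpha|0\rangle - \beta|1\rangle)_3
            + |\Psi^+\rangle_{12}(-\alpha|0\rangle + \beta|1\rangle)_3
            + |\Phi^-\rangle_{12}(\alpha|1\rangle + \beta|0\rangle)_3
            + |\Phi^+\rangle_{12}(\alpha|1\rangle - \beta|0\rangle)_3 \,\big]

    Whichever of the four outcomes Alice's joint measurement yields, Bob's photon lands in the message state up to one of four fixed, known corrections (for the |\Psi^-\rangle outcome it is already there, up to an overall sign). That is why a couple of ordinary classical bits from Alice complete the transfer, and why neither party ever learns \alpha or \beta.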

    Although the process destroys the original state of the message photon, that state lives on in Bob's photon, the second entangled photon, irrespective of his distance from Alice. There is no transport of actual photons from Alice to Bob—only a flow of quantum information. “It is more like faxing than teleportation,” says Sudbery.

    Although the setup sounds simple, the team's apparatus is a bewildering tangle of lasers, mirrors, and optical instruments, and the work “required a great deal of experimental finesse,” says Bennett. One key was the development over the past 2 years of pure and bright sources of entangled photons, explains Zeilinger. Another, he says, was learning how to take measurements like Alice's, which register information about the joint properties of two photons that lose their individual identities in the process.

    A separate group, headed by Francesco De Martini at Italy's National Institute of Nuclear Physics in Rome, has performed a similar experiment, which the researchers will report in Physical Review Letters. The Rome group's experiment departs from Bennett's original vision but is optically simpler than the Innsbruck work. It relies on a single pair of entangled photons, with one twin forced to play the role of Alice's message photon in addition to its teleporting role.

    “I think one of the main uses to which teleportation will be put is moving data around inside a quantum computer,” says Bennett. In quantum computation, “there's the very serious problem that quantum information is very delicate. If it leaks out of the computer at all in the course of the calculation, the result will be spoiled.” Teleportation can help beat this kind of problem. “This is a way of sending quantum information reliably through a noisy channel,” he adds.

    Sadly for Star Trek fans, however, quantum teleportation cannot be scaled up to move Captain Kirk from place to place. It is more akin to teleporting “states” of Captain Kirk around the universe. One could imagine the captain's amorous mood being teleported to a far-off clone, a prospect sure to strike fear into the hearts of all female aliens. One consolation: The original amorous state of the captain would be destroyed in the process.

  3. DEVELOPMENTAL BIOLOGY

    Possible New Roles for HOX Genes

    1. Steven Dickman
    1. Steven Dickman is a writer in Cambridge, Massachusetts. E-mail: sdickman@worldnet.att.net

    Madrid, Spain—The best discoveries in the trendy field of “evo-devo” shed light on two processes simultaneously: how genes shape the bodies of today's organisms during embryonic development, and how those same genes may have guided the organisms' evolution (Science, 4 July, p. 34). New results from Peter Holland's team at the University of Reading in the United Kingdom seem to have achieved both objectives.

    Division of labor. The descendants of a proto-HOX gene cluster (left) may help shape the Amphioxus ectoderm (above) and the gut (below). ILLUSTRATION: K. SUTLIFF

    At a workshop on development and evolution held here last month by the Madrid-based Juan March Foundation, Holland reported that he and his colleagues found a putative second cluster of HOX-like genes in Amphioxus, a fishlike marine invertebrate that is seen as a crucial evolutionary link to vertebrates. A great deal of evidence has shown that the HOX genes—so-called because they carry a DNA sequence known as the homeobox—play important roles in laying down the head-to-tail patterns of embryos of organisms ranging from worms to flies to humans.

    In addition to providing a better understanding of the genes controlling Amphioxus development, the Holland team's result is intriguing because it may also help explain a key development in the rise of complex organisms like vertebrates during evolution: the creation of multiple “germ layers”—the primitive embryonic tissues that give rise to all of a creature's tissues and organs. HOX genes have long been known to be active in ectoderm, the outermost germ layer, but Holland found that the new “sister cluster” of HOX-like genes is expressed in the innermost layer, the endoderm. The results suggest, he said, that the appearance of the new cluster—presumably resulting from duplication of a primordial HOX gene cluster—is related to the creation of multiple germ layers in early evolution.

    Although other researchers want more evidence before accepting that suggestion, it helped make Holland's presentation the most talked-about at the workshop. “These were the newest and best results at the meeting,” says Andre Adoutte, an evolutionary biologist specializing in molecular phylogeny at the University of South Paris.

    Holland made this discovery while studying a set of HOX genes that has long puzzled developmental biologists. Almost all HOX genes are arranged in clusters of roughly nine genes each. The expression patterns of these genes along the head-to-tail axis of the embryo typically follow their arrangement in the clusters, with those at one end tending to be expressed more anteriorly, while those at the other end are active farther back. But researchers have also identified three or four types of HOX-like genes that don't seem to fit this neat pattern.

    While they carry a typical homeobox sequence, for example, they have never been shown to be part of a cluster. That presented a conundrum: If these “dispersed” or “orphan” HOX genes weren't in clusters, they could not have arisen the way true HOX genes supposedly did—by duplication of successive genes along a single chromosome. But their close similarity to true HOX genes makes it unlikely they arose by chance.

    At the meeting, Holland described new data from his laboratory showing that three of these orphan HOX genes are in fact clustered in Amphioxus the way true HOX genes are. Using probes made from vertebrate HOX genes, he found that two dispersed Amphioxus genes were located in adjacent regions of a single chromosome. He then used the technique of “chromosome walking” to track down a third orphan HOX gene located close to the other two. Finding the genes close together means, Holland said, that they couldn't have just “hopped out” of the single original HOX cluster one by one. Instead, he concluded, “the two gene clusters originated suddenly, by duplication of a primordial cluster.”

    What's more, the team found that the genes' order of appearance on the chromosome corresponds to the head-to-tail order of their activity in the body. However, they differ from other HOX genes in one key respect: They are expressed only in the endoderm, the innermost of the three embryonic germ layers, which gives rise to the gut and other organs of the viscera. Other HOX genes—including the first HOX gene cluster of Amphioxus identified by the Holland team in 1992—are active only in the embryonic ectoderm, the outermost germ layer, which produces tissues including the skin, nervous system, and sense organs. “What we've got here are two gene clusters, each obeying spatial colinearity but in different … layers,” Holland said.

    To back his contention that the genes of the new cluster were key to the development of the inner germ layer, Holland used data from other laboratories to show that analogous genes in mice and frogs are expressed in the gut. One of the Amphioxus HOX-like genes, for instance, is analogous to the IPF gene in mammals, which, gene knockout experiments show, is crucial for development of the pancreas in mice. Putting this all together, Holland proposed that doubling of the primordial HOX cluster occurred before an ancestor of Amphioxus acquired an endoderm and was, in fact, instrumental in its creation.

    Other meeting participants view this suggestion cautiously. Michael Akam of the University Museum of Zoology in Cambridge, U.K., says that while Holland's data linking the three genes “look ironclad,” his hypothesis that one HOX cluster took over the patterning of the inner body layer and the other the outer one is “almost too clean a model to be probable.” Diethard Tautz of the University of Munich in Germany agrees. Amphioxus, he says, “is only one data point. You need to have more than one organism to form a complete picture.”

    The additional data points may come soon, as the search for HOX genes and their relatives in primitive organisms such as acorn worms is heating up. The implications are so important, says participant Axel Meyer, an evolutionary biologist at the University of Konstanz in Germany, that the race to find these genes “is a gold rush.” But the “gold” in this case will be a clear idea of how evolution really happened.

  4. CELL BIOLOGY

    Pinning Down Cell Division

    1. Gretchen Vogel

    The events in the cell just before it divides are some of the most dramatic in biology. The chromosomes condense, the nuclear membrane disappears, and the cell starts to build its mitotic spindle—a set of fibers that will eventually pull the chromosomes to the opposite poles of the dividing cell. How the cell choreographs these complex changes is unclear, but on page 1957 molecular biologist Kun Ping Lu of Beth Israel Deaconess Medical Center in Boston and Harvard University and his colleagues report evidence for a new mechanism that may play a key role.

    Twister. The Pin1 protein may help regulate mitosis by binding to phosphorylated proteins. The bright green region at left is the phosphate binding site. LU ET AL.

    Cell biologists have long known that the cell's progress toward division is controlled by a group of kinases, enzymes that add phosphate groups to a variety of cell proteins. For the most part, though, they've had few clues to what those phosphate additions actually do. That's where the Lu team's work comes in. It suggests that the phosphates serve as a sort of tag for attracting an enzyme called Pin1, which may cause the phosphorylated proteins to change their shapes. Researchers don't yet know exactly what this accomplishes, although they point to several possibilities, such as turning off an active enzyme, directing a protein to a new place in the cell, or targeting a protein for degradation. Whatever the precise result, however, the work provides “a new function for phosphorylation,” says molecular biologist Tony Hunter of the Salk Institute in La Jolla, California.

    Lu and Hunter first discovered Pin1 two years ago as a protein that interacts with and inhibits another critical cell regulator, called NIMA, which helps turn on mitosis. Pin1 itself is an isomerase enzyme that changes the configuration of the peptide bond preceding proline, an amino acid that is an important determinant of protein structure because it can put kinks into a protein chain. Previous studies also showed that Pin1 is crucial for both yeast and human cells to divide properly. Without it, for example, cells can't complete mitosis. But its precise role in the cell remained a mystery.

    Researchers got a clue earlier this year, however, when Joseph Noel of the Salk Institute solved Pin1's three-dimensional structure. It showed that the enzyme has a pocket for binding phosphate next to the site where it binds its proline target, says molecular biologist Lewis Cantley of Harvard, a co-author on the Science paper. That suggested Pin1 might bind phosphorylated proteins.

    To confirm that hunch, the team searched through a library of protein fragments for peptides that bind to the enzyme. Sure enough, says Cantley, Pin1 preferentially picked out peptides that have a phosphate attached to an amino acid adjacent to a proline. With some sequences, in fact, the phosphorylated version bound thousands of times better than the unphosphorylated peptide.
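
    As a cartoon of the motif that emerged from the screen (a toy sketch; the code and the peptide sequences are ours, invented for illustration, and not the team's actual analysis tools), a few lines of Python can flag peptides carrying a phosphorylated serine or threonine immediately followed by a proline:

        import re

        # Convention for this sketch: lowercase "s" or "t" marks a
        # phosphorylated serine or threonine; uppercase letters are the
        # usual one-letter amino acid codes. All sequences are invented.
        peptides = [
            "AKVsPRT",   # phospho-Ser before Pro: candidate Pin1 binder
            "AKVSPRT",   # same sequence unphosphorylated: weak binder
            "GLMtPKE",   # phospho-Thr before Pro: candidate Pin1 binder
            "GLMTAKE",   # no Ser/Thr-Pro pairing at all
        ]

        # The motif suggested by the structure and the library screen:
        # a phosphorylated Ser/Thr immediately followed by a proline.
        motif = re.compile(r"[st]P")

        for pep in peptides:
            hits = [m.start() for m in motif.finditer(pep)]
            if hits:
                print(f"{pep}: phospho-Ser/Thr-Pro motif at position(s) {hits}")
            else:
                print(f"{pep}: no phospho-Ser/Thr-Pro motif")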

    Other unpublished work suggests that Pin1 might help orchestrate cell division by interacting with other proteins involved in mitosis. When members of Lu's team went “fishing” through the contents of ruptured cells for proteins that bind to Pin1, they landed at least a dozen that are also targeted by an antibody, called MPM-2, that binds to proteins involved in mitosis in a wide range of cells. These proteins, too, contain a proline and an adjacent phosphate.

    Taken together, say Lu and his colleagues, the experiments suggest that Pin1 helps regulate a two-step process that governs cell division. Adding phosphates to proteins involved in mitosis creates binding sites for Pin1, which can then latch onto them and twist the peptide bond next to the prolines it contacts. That might, in turn, change the shape of the whole protein, perhaps altering its ability to interact with still other proteins, its location in the cell, or its life-span.

    Whatever the binding does, Cantley suggests that it might enable Pin1 to serve as a sort of checkpoint on the way to cell division. He notes that while cells lacking the protein can't divide—indeed, they die instead—manipulations that increase Pin1 production delay the onset of mitosis. Based on that, he proposes that by binding to phosphorylated proteins, Pin1 may slow down the activity of any proteins that are getting ahead of the rest of the cell. Hunter agrees. The properties of the protein suggest it might work as “some sort of threshold device,” he says, preventing premature functioning of certain proteins. If so, cells lacking the protein may die, because events get so out of order that they go into mitotic arrest.

    Other researchers aren't convinced that the story is that straightforward, however. Cancer pharmacologists Sally Kornbluth and Tony Means of Duke University have evidence that Pin1 can bind to NIMA without the help of phosphate, and that it binds to other proteins that do not bind MPM-2. “The mechanisms that govern the effects of Pin1 in the cell … have yet to be defined,” Means says.

    Indeed, Cantley cautions that no one has yet pinned down exactly what the protein does when it binds: “We have no proof that isomerization is what's required for physiological function.” It is possible, he says, that simply binding to a protein is enough to slow it down. Because researchers can now identify Pin1's partners, they hope they will soon be able to sort out its role.

    But even before that happens, the protein is attracting drug companies' interest. Because blocking the enzyme kills cells as they attempt to divide, drugs that inhibit the enzyme should target fast-dividing cancer cells without affecting the majority of cells in the body that divide only occasionally. “At least three or four companies are interested in looking for inhibitors,” Lu says.

  5. CANCER THERAPY

    Heavy Ions Pack Powerful Punch

    1. Dennis Normile

    Chiba, Japan—A high-stakes experiment in cancer therapy appears to be paying significant short-term dividends for dozens of patients. A team of researchers presented preliminary results at a recent conference here* indicating that expensive treatments with beams of carbon ions showed promise against a variety of tumors that had been considered untreatable or had resisted previous treatments. But questions remain about the long-term efficacy of the approach, as well as about its value for money.

    Ion power. Chiba's accelerator (top) supplies carbon ions to treat cancer patients (above). NATIONAL INSTITUTE OF RADIOLOGICAL SCIENCE

    The data were the most complete results yet from Japan's Heavy-Ion Medical Accelerator in Chiba (HIMAC), which began operating in 1995 at the National Institute of Radiological Sciences (NIRS) (Science, 12 May 1995, p. 797). HIMAC was built on the premise that because beams of heavy ions can be focused more precisely than x-rays, the larger mass and charge of the particles will result in greater damage to the tumor and less injury to surrounding healthy tissue. “And that's what they're seeing,” says Inder Daftari, a radiation oncology physicist at the University of California, San Francisco, and a member of a team that pioneered the technique at Lawrence Berkeley National Laboratory in California until the lab's aging accelerator was shut down in 1993. “The results are very encouraging,” adds John Munzenrider, a radiation oncologist at Massachusetts General Hospital and Harvard Medical School, both in Boston.

    The new results come from clinical studies to evaluate both toxicity and effectiveness in patients with head and neck, lung, liver, and cervical cancers. Treatments typically consisted of periods of irradiation with carbon-12 ions, two to four times a week for 4 to 6 weeks. In all cases, alternative treatments had failed or been ruled out.

    Twenty-four of 34 patients with advanced head and neck cancers had complete or substantial regression of the tumors after 6 months. And seven of nine patients treated more than 2 years ago are still alive. Just over half (23 out of 44) of the patients with non-small cell lung carcinomas showed complete or partial response 6 months after treatment, and eight of 14 patients survived the 2-year mark. For hepatocellular carcinoma with liver cirrhosis, 18 of 25 tumors showed complete or partial regression after 6 months, and three of four patients survived beyond 2 years of treatment. There were also preliminary indications of a good response for cervical cancer. “It's very impressive, especially for lung cancers,” says William Chu, a radiation physicist at the Lawrence Berkeley lab.

    Those results were achieved with dosages that started at around 2 grays and were raised to as much as three times that level in subsequent trials following an evaluation one to several months after treatment. (One gray equals 1 joule of radiation energy deposited in 1 kilogram of tissue.) But some researchers think that such high per-session dosages could cause harmful, long-term side effects. Yasuyuki Akine, director of the University of Tsukuba's Proton Medical Research Center, worries that possible damage to surrounding tissue may not surface for several years. “Before escalating [the dosage], you need to evaluate late injury 3 to 4 years after treatment,” he says. Hirohiko Tsujii, head of NIRS's department of medicine, admits that the NIRS team debated the timing but decided it would be impractical to wait 3 to 4 years before trying higher doses.
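
    For a feel for the unit (a worked example with assumed round numbers, not data from the trials): a single 2-gray fraction delivered to 1 kilogram of tumor tissue deposits

        E = 2\ \text{Gy} \times 1\ \text{kg} = 2\ \text{J}

    which would warm that tissue by only about 0.0005 degrees Celsius (2 joules divided by roughly 4186 joules per kilogram per degree, water's heat capacity). The cell killing comes from ionization along the particle tracks, not from bulk heating.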

    These results come, however, with a hefty price tag. HIMAC, the world's only heavy-ion accelerator dedicated to medical use, cost $326 million to build and takes $58 million a year to operate. U.S. scientists don't foresee restarting heavy-ion experiments, says Munzenrider, “because of the expense.” Munzenrider and others hope, however, that many of the benefits of heavy-ion therapy can be achieved with beams of protons, which require less powerful—and much cheaper—accelerators than those needed to generate heavy-ion beams. But Daftari believes that each of the different therapies will eventually find its own niche in treating different cancers. “If effective, [heavy-ion therapy] would justify the cost,” he says.

    A group in Germany is waiting for government approval for a 5-year trial of up to 350 patients. It will be the first human trials for the team, which is based at GSI, the German national lab for heavy-ion research in Darmstadt. “We were very pleased to hear the [HIMAC] results,” says Gerhard Kraft, a physicist who heads the group. And Japan's Hyogo Prefecture is already building a $230 million, medical-use accelerator capable of delivering both protons and heavy ions. Even so, therapy is likely to be restricted to protons until HIMAC produces recommended protocols for heavy ions. Researchers say such protocols could be ready in 4 to 5 years.

    • * Proton Therapy Coordinating Group semiannual meeting PTCOG27, 17 to 19 November in Chiba, Japan.
