News this Week

Science  27 Nov 1998:
Vol. 282, Issue 5394, pp. 1616

    DNA Suggests Cultural Traits Affect Whales' Evolution

    1. Gretchen Vogel

    Students of animal behavior seeking something akin to human culture could do worse than to look at whales. These social creatures have the biggest brains of any animal on Earth, long lives, and a complex repertory of calls, sung in distinct dialects. Now on page 1708, marine biologist Hal Whitehead of Dalhousie University in Halifax, Nova Scotia, suggests that in sperm whales and some other species, cultural traits—learned behaviors passed on to family members—are affecting the course of genetic evolution, a situation thus far documented only in humans.

    Whitehead has found a pattern of genetic markers in sperm whales implying, he says, that some whale matriarchs teach their groups as-yet-unidentified behaviors that give them a substantial survival advantage. Other marine and evolutionary biologists are greeting the proposal with great interest—and some caution. “It's a provocative idea, a really neat idea,” says marine biologist Bernd Würsig of Texas A&M University, Galveston. But it's hard to make a strong case for such a radical notion because so little is known about whale behavior and genetics, whale experts say. “The idea is intriguing but speculative,” says marine biologist Sarah Mesnick of the National Marine Fisheries Service in San Diego.

    Whitehead admits that a cultural influence on genetic evolution in whales “certainly isn't proven” but says his explanation “fits the data better than any other explanation at the moment.” The idea formed, he says, during a sabbatical spent sailing around the South Pacific with his wife, marine biologist Linda Weilgart, and two young children. Seeking a geographical pattern in order to understand the effects of locally intense hunting of sperm whales, Whitehead and Weilgart collected data on whales' vocalizations as well as tail scars, which may indicate how well an animal fends off predators such as killer whales and sharks. They also collected sloughed-off skin samples for genetic testing.

    The researchers found no clear geographical pattern, but they did find a genetic one: The whales' mitochondrial DNA (mtDNA), which is inherited only from the mother, indicated that groups with similar calls and markings were related. “The only mechanism that made much sense was that the vocalizations were being passed down through the mother's line like mitochondrial DNA,” Whitehead says.

    He concludes that whales learn these and presumably other behaviors from their maternal relatives and that the behaviors affect survival patterns. When he studied published genetic analyses of other whales, he found that species such as sperm, pilot, and killer whales—all of which have matrilineal societies in which offspring spend their lives with maternal relatives—have very low mtDNA diversity, less than one-fifth that of other whales such as humpbacks or bottlenose dolphins. Whitehead proposes that the diversity must have narrowed in the course of whale evolution as mtDNA “hitchhiked” on the success of behaviors passed from older females to calves, such as feeding techniques, methods for fending off predators, and babysitting. In a computer model, he shows that a cultural behavior that gives a 10% reproductive advantage and is passed on to 95% of daughters will reduce mtDNA diversity to almost zero in 300 generations.
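    The hitchhiking argument can be illustrated with a minimal Wright-Fisher-style simulation in Python — a sketch, not Whitehead's actual model. The 10% reproductive advantage, 95% transmission rate, and 300 generations come from the article; the population size, haplotype count, and starting conditions are illustrative assumptions.

```python
import random

def mtdna_diversity(generations=300, n=1000, n_haplotypes=10,
                    advantage=1.10, transmission=0.95,
                    cultural=True, seed=1):
    """Wright-Fisher-style sketch of matrilineal cultural hitchhiking.

    Each female carries an mtDNA haplotype (always inherited from her
    mother) and possibly a cultural trait (inherited from a trait-bearing
    mother with probability `transmission`).  Trait carriers leave
    `advantage` times as many daughters.  Returns haplotype diversity,
    1 - sum(p_i**2), after `generations` generations.
    """
    rng = random.Random(seed)
    # Start with equally common haplotypes; the cultural trait is
    # initially confined to the carriers of haplotype 0.
    females = [(i % n_haplotypes, cultural and i % n_haplotypes == 0)
               for i in range(n)]
    for _ in range(generations):
        weights = [advantage if has_trait else 1.0 for _, has_trait in females]
        mothers = rng.choices(females, weights=weights, k=n)
        females = [(hap, trait and rng.random() < transmission)
                   for hap, trait in mothers]
    counts = {}
    for hap, _ in females:
        counts[hap] = counts.get(hap, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

hitchhiking = mtdna_diversity(cultural=True)   # trait spreads, dragging haplotype 0
drift_only = mtdna_diversity(cultural=False)   # neutral drift for comparison
print(f"diversity with cultural hitchhiking: {hitchhiking:.3f}")
print(f"diversity under drift alone:         {drift_only:.3f}")
```

    Under these assumed settings, the trait's net growth rate is about 1.10 × 0.95 ≈ 1.045 per generation, so it spreads until it equilibrates at roughly 45% of females; because it is transmitted only down the maternal line, haplotype 0 rides along, and diversity collapses toward zero while the drift-only run remains comparatively diverse.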

    Because these whales live as long as humans and travel in stable groups, it makes sense that their social behavior could affect evolution, Mesnick says. But whale genetic data are so sketchy that it's too early to be confident that the reduced mtDNA diversity is real in all species, she says. And even if the data hold up, it's hard to be sure that cultural transmission is responsible, she and others say. A dramatic, temporary drop in population could reduce genetic diversity as well—although Whitehead argues that because killer whales and sperm whales are global species, they are less likely to have suffered a long-term bottleneck than whales with more restricted habitats, such as humpbacks.

    Researchers also question two assumptions Whitehead makes about cultural transmission. In his model, “lateral transmission”—in which an unrelated female learns the behavior and passes it to her relatives—has to be below 0.5%. Otherwise it would dilute the effect of transmission from a mother to her own family, and mtDNA diversity would not be reduced. Whitehead's number is too low to be realistic, say several marine biologists, especially as matrilineal species like sperm whales often have unrelated females in their group. “It's difficult to imagine a mother secretly clicking to her daughter, ‘Feed on squid,’” Mesnick says, while not sharing the information with a nearby unrelated female. Whitehead counters that, as in humans, whales might tend to join groups with similar cultural behaviors, so lateral transmission would not matter.

    Researchers also question whether learning a specific behavior could boost a female's reproductive success by as much as Whitehead assumes. The 10% figure is “optimistic,” says evolutionary biologist Marcus Feldman of Stanford University, even though human cultural practices such as the domestication of animals can confer considerable advantage. Despite all the caveats, whale biologists are fascinated by the proposal. They will now be testing it by studying learned behaviors and their effects on whale survival and genetics, hoping to learn whether whales, like people, are creatures of culture as well as biology.


    Panel Tightens Rules on Mental Disorders

    1. Eliot Marshall

    A presidential panel's call for stronger protection of mental patients who take part in research is drawing fire from clinical psychiatrists and some advocacy groups. The clinicians say that the report, a final draft of which was approved on 12 November by the National Bioethics Advisory Commission (NBAC), would impose too many constraints on research and would further stigmatize an already vulnerable population by singling out people with mental disorders for competency tests. Some patient advocates, on the other hand, complain that the new rules would still permit some research to go ahead without a patient's approval.

    Decision maze. NBAC would add steps to the selection of subjects for psychiatric research. A surrogate could give consent in some cases, but the subject can always drop out.


    The commission, which took up the issue 15 months ago on its own initiative, proposes 20 measures to protect people with mental disorders from exposure to risk in clinical trials. The panel—composed of ethicists, biologists, physicians, and patient advocates (but no psychiatrists)—says in its report that it isn't responding to a crisis. But with clinical studies of brain disorders such as schizophrenia and depression increasing, NBAC sought to “clarify the ethical framework” for such research. “I do not believe this will have any adverse effect on the research agenda,” says commission chair Harold Shapiro, president of Princeton University. He predicts that the “public will be much more supportive” of research knowing that stronger safeguards are in place.

    Clinical researchers, however, are alarmed at some of NBAC's recommendations, especially one that would require an “independent, qualified professional” to evaluate the competence of a subject in any study posing greater than minimal risk (see diagram). In general, says NBAC, only people judged capable of making a decision to enroll in such studies should be allowed to do so. Current guidelines, NBAC says, are murky and “inadequate.” NBAC would also like to see the government create a standing committee to set guidelines and review protocols for “exceptionally important” research, for which the consent requirements might be loosened.

    But this and many of NBAC's other proposals, if implemented, would constitute “a tragedy” for mental health research, says psychiatrist Roger Meyer, former vice president for medical affairs at George Washington University in Washington, D.C. Meyer, a consultant at the Association of American Medical Colleges, says the additional procedures would be a roadblock to recruiting subjects. Even noninvasive brain studies using PET or MRI scans would become “untenable” in many cases, he says, because under NBAC's scheme, these probably would be classified as posing more than minimal risk, making them off limits to many patients. Meyer also finds it “very scary” that a presidential commission has singled out this area of biomedicine for controls. He thinks NBAC members seemed “overtly hostile” to psychiatrists who testified publicly about the harm that might come from additional restrictions.

    The report “sets us back 20 years,” says Herbert Pardes, medical dean of Columbia University's College of Physicians and Surgeons. Pardes and federal research administrators met last month with NBAC members to argue that the focus not be restricted to people with mental disorders. They also suggested that the commission define a moderate-risk category of research that would permit brain scans and other routine procedures without a full competency review of each subject, an approach also suggested by the National Institutes of Health (Science, 30 October, p. 857). Both were rejected.

    Constance Lieber, president of a nonprofit advocacy group, National Alliance for Research on Schizophrenia and Depression, also thinks NBAC's plan would hinder research without benefiting patients. She says the new procedures may “drive up costs” and “discourage young investigators.” However, Lieber is pleased with NBAC's recommendations that local institutional review boards involve patients and their advocates more directly in vetting research protocols.

    NBAC is getting some criticism from the opposite flank, too. Vera Hassner Sharav, head of the New York City group Citizens for Responsible Care & Research, calls NBAC's recommendations “outrageous” because they “legitimize nonconsensual, nontherapeutic research.” NBAC's proposal to allow exceptional research to go forward—even when patients are not competent to give consent—violates the basic ethical principles medical researchers have followed since World War II, she says.

    NBAC's report now goes to the interagency National Science and Technology Council and then on to the 17 federal departments that could be called upon to implement the changes.


    Hairy Mice Offer Hope for Baldness Remedy

    1. Elizabeth Pennisi

    Hairbrained as it may sound, a better understanding of cancer could lead to a cure for baldness. Recently, researchers have linked overactivity in one of the cell's major biochemical routes for relaying developmental messages to the nucleus, the Wnt signaling pathway, to colon and other cancers (Science, 4 September, pp. 1438 and 1509). Now, researchers at the University of Chicago have shown that a key player in that pathway, a protein called β-catenin, can stimulate the growth of new hair follicles in mice.

    In work reported in the 25 November issue of Cell, Elaine Fuchs and her team have found that mice engineered so that their skin makes extra β-catenin grow more hair than their normal counterparts. “It's really a striking result,” says Matthew Scott, a developmental biologist at Stanford University School of Medicine in Palo Alto, California.

    Fuchs and her university, which has applied for a patent on the work, see the finding as a possible first step toward harnessing β-catenin or the Wnt pathway to help some 30 million balding men in the United States grow new hair. That's not a sure thing, however, especially because the researchers will have to be very careful that such tinkering doesn't trigger tumors—as happened with the Fuchs team's hirsute mice.

    Fuchs has long been fascinated by hair because it grows out of a structure, the follicle, that forms and regresses periodically, creating cycles of hair growth and loss. “It's one of the most complex forms of differentiation,” she says. She suspected that β-catenin might play a role because of an observation her group made about another protein, Lef1/Tcf, in the skin of early mouse embryos. The researchers found that the protein appears in a dot pattern reminiscent of that displayed by the progenitor cells that produce hair follicles. And because Lef1/Tcf is thought to link with β-catenin to control gene expression, the finding suggested that these two molecules, and the Wnt pathway, might help regulate hair follicle development. That idea was buttressed by what Rudolf Grosschedl's group at the University of California, San Francisco, found when they knocked out the Lef1 gene in mice: The animals had far fewer hair follicles than the controls.

    To test the idea that β-catenin is also involved in hair follicle development, Uri Gat in Fuchs's lab created a new strain of mice carrying extra copies of the β-catenin gene. Before introducing the gene into the animals, Gat had linked it to a regulatory sequence that would cause it to be expressed only in skin cells. He also removed part of the gene so that β-catenin protein could not link up with proteins that would cause it to break down.

    Animals carrying this gene not only were hairy critters, but they also got new hair follicles even as adults. Typically, an individual's full complement of hair follicles is set during embryonic development, but in these mice, new ones began to appear within a month after birth. They filled in the spaces between existing hair follicles, but did not form on areas, such as the foot pads, where no hair existed before. Apparently, only the cells in haired skin still had “properties that would allow them to be primed for new hair follicle [growth],” says Fuchs; these properties remain a mystery. The new hairs stuck out in many different directions, however.

    The additional β-catenin had darker effects as well. As adults, the mice had hind paws three times the normal size and thickened skin, as well as ridges around the ears, eyelids, and nose. And the mice tended to develop benign tumors in the hair follicles. Humans can develop very similar tumors. Their genetic basis is not known, but the mouse results suggest that β-catenin might be involved; Fuchs is looking for signs of excess β-catenin in the human tumors.

    Other goals would be to find the genes that β-catenin turns on to trigger hair follicle development in hopes that they could be activated specifically without causing tumors. Fuchs also wants to determine how β-catenin activation differs in embryonic versus tumor cells. The question is, “can we separate tumorigenesis from hair follicle morphogenesis,” she says. If they can, then perhaps her ideas about manipulating the Wnt-β-catenin pathway to cure baldness won't be so hair-raising after all.


    Improving Gene Transfer Into Livestock

    1. Anne Simon Moffat

    About 10 years before they startled the world by cloning Dolly the sheep, scientists at The Roslin Institute south of Edinburgh had rocked the scientific community by producing the first healthy sheep carrying a human gene. Since then, a few research groups have used similar gene transfer techniques to build herds of sheep, cattle, goats, and pigs that make human proteins, often with the goal of milking them for valuable drugs. Now, a new method developed by a team of researchers in Wisconsin and California promises to make production of such transgenic livestock much easier than it is today.

    Current gene transfer procedures for large animals are time-consuming and expensive, mainly because their efficiency is low: Only 1% to 10% of the animals that develop from eggs inoculated with a foreign gene carry it, and many of those that do can't transmit it to their progeny because they are mosaics that have the gene in some cells but not in others, including those of the germ line. Each transgenic livestock animal has cost about $500,000 to produce.

    Those low success rates prompted the Roslin researchers and others to turn to cloning experiments in which they replicate individuals from somatic cells to increase the numbers of transformants. But in the 24 November issue of the Proceedings of the National Academy of Sciences, Robert Bremel, formerly of the University of Wisconsin, Madison, and now managing director of Gala Design, a biotech firm in Sauk City, Wisconsin, his former Wisconsin student Anthony Chan, and their colleagues report that they have achieved a transformation efficiency approaching 100%. They did this by introducing a foreign gene into cow eggs before they were fertilized rather than shortly after, as is currently done.

    The increased efficiency should be welcome news to researchers who want to introduce genes into livestock, either to improve the strains or to use the animals to produce medically valuable proteins, such as monoclonal antibodies or vaccine proteins. “This is good work,” says geneticist Robert Wall of the U.S. Department of Agriculture's Agricultural Research Service in Beltsville, Maryland. He notes that because the gene transfer work is very costly, only about a dozen labs do it now. Improved efficiency might draw others in, however.

    In the older techniques, researchers introduce a gene in a carrier, such as the DNA of a retrovirus that can insert itself into the host cell DNA, into an egg that's already been fertilized. But if the DNA doesn't insert until after the egg starts dividing, as is often the case, it ends up only in the descendants of the cell where it integrated, which might be a small minority of the total in the embryo.

    To counter this problem, Bremel and his colleagues decided to use unfertilized bovine oocytes isolated in metaphase arrest, when the membrane that normally surrounds the nucleus is absent. The researchers reasoned that this would make it easier to get the foreign gene to the chromosomes so that it could integrate. In addition, putting the gene in the chromosomes of the egg itself would ensure that the gene would end up in all the cells, including the germ cells, of the animal produced when the egg was subsequently fertilized. In cows, “the DNA is incorporated better when inserted earlier,” says Bremel.

    And that's what the team found. First, they introduced the gene coding for the hepatitis B surface antigen into a retroviral carrier. They chose this gene, Bremel says, partly because the antigen makes an easily detected marker and partly because any transgenic animals could produce the antigen, which is used in hepatitis B vaccines. The researchers then injected the retrovirus into the oocytes, allowed them to mature, and fertilized them.

    Of the 836 eggs injected, 174 developed into embryonic blastocysts, and 10 of these were implanted into five foster mothers. This yielded three pregnancies and four healthy calves, two males and two females, now about 2 years old. Tests on skin and blood cells revealed that all four animals carry the hepatitis B gene. In addition, the females, Cressy and Buttons, secrete the antigen in their milk. And mating one of the bulls, Gremlin, with a nontransgenic female produced twin offspring, both transgenic. The researchers say that this technique should work in other species, including primates, where immature egg cells can be manipulated during metaphase. Indeed, if the technique proves as efficient as it now appears, it might even make livestock cloning obsolete.


    AFMs Wield Parts for Nanoconstruction

    1. Robert F. Service

    Nanotechnologists dream of creating useful machines the size of a virus, but for the time being they are in the position of a tinkerer who has a pile of parts but no workbench for assembling them. They have created a handful of molecular blocks, spheres, and rods but don't have a means of manipulating and joining the tiny components. But at the Sixth Foresight Institute Conference on Molecular Nanotechnology held in Santa Clara, California, last week, a team from Washington University in St. Louis and Zyvex, an instrument company based in Richardson, Texas, showed off their technique for wielding individual carbon nanotubes with a trio of robotic arms.

    The researchers welded opposite ends of tiny, straw-shaped tubes of carbon atoms to two tiny silicon probes normally used to map surface contours by “feel” in an atomic force microscope (AFM). They then moved the probes to twist, tug, and bend the tubes and even brought in a third arm to give the nanotube a nudge in the middle. All of this was accomplished inside a scanning electron microscope (SEM), allowing them to view and control the nanomanipulation as it happened. “This is very, very beautiful work,” says Anupam Madhukar, a materials scientist at the University of Southern California in Los Angeles. Cees Dekker, a nanotube researcher at Delft University of Technology in the Netherlands, says the most exciting demonstration was one showing two probe tips tugging on a nanotube until it broke. “That means they can now determine exactly how strong the nanotubes are,” which has long been a goal of the community, he says.

    Nanotubes, essentially just rolled-up sheets of graphite with diameters as small as 1 nanometer but lengths up to 100 micrometers, have been proposed for a wealth of jobs in the nanotoolkit (Science, 14 August, p. 940). Research teams have already hooked nanotubes to the apex of an AFM tip, a tiny silicon pyramid suspended from a cantilever arm, in an effort to improve the microscope's resolution, but the acrylic adhesive used to bond tube to tip has only moderate strength. For the new work, the Washington-Zyvex team had to figure out how to fasten the tube securely to two AFM tips while imaging it at the same time.

    First the team developed a kind of nanoscale jig, incorporating three independently movable AFM tips in different orientations. To attach nanotubes to these tips, the team used its SEM imager as a welding gun. SEMs image an object by scanning an electron beam over its surface and detecting the reflected electrons. Previous teams had shown that an SEM's focused beam can break up stray organic molecules floating in the vacuum chamber. This creates negatively charged hydrocarbon fragments that pile up on positively charged surfaces. So graduate student MinFeng Yu decided to see if those fragments could “weld” the nanotube to their AFM tip.

    Yu first dipped an AFM tip into a tiny pile of nanotubes, relying on static electricity to hold one end of a nanotube to the tip. He then focused the SEM beam so that it directed organic fragments in the vacuum toward the junction of the tube and the tip, which was given a positive charge. The result was a tiny carbon-based mound at the base of the nanotube. Then he steered a second AFM tip up next to the other end of the nanotube and repeated the nanowelding procedure. When he then moved the tips relative to one another, the nanotube curved and stretched, showing that the welds were secure—so secure, in fact, that when he pulled the tips apart with enough force, the nanotube broke. “That was a surprise, because we expected that the welds would be weaker than the nanotubes,” which are theorized to be stronger than steel, says physicist and team leader Rodney Ruoff of Washington University.

    Ruoff says it's too early to tell how much force it takes to break a single nanotube because the resolution of their SEM is insufficient to judge whether they are looking at a single nanotube or several. The team is hoping to answer that question using a more sensitive SEM or a higher resolution transmission electron microscope. After that, Ruoff says they plan to try welding nanotubes to each other, and then the real construction will begin.


    Looking South to the Early Universe

    1. Donald Goldsmith*
    1. Donald Goldsmith's most recent publication is The Ultimate Planets Book (Quality Paperback Book Club/Byron Preiss, 1998).

    A flash of news from the Hubble Space Telescope: The distant universe looks about the same in two opposite directions. When the Hubble was aimed at a small patch of northern sky for 10 days in 1995, astronomers believed that their time exposure had captured a typical sliver of the distant universe. But it never hurts to check. At the beginning of October, they followed up on the original Hubble Deep Field (HDF) with a 10-day exposure of a nondescript patch of sky near the south pole—and found similar swarms of faint galaxies, some of them among the most distant and earliest ever seen.

    That outcome may sound prosaic, but it's very welcome news to astronomers. “It was crucial to check on our assumption that the HDF is typical of the universe” with a second line of sight, says Alex Filippenko, a galaxy expert at the University of California, Berkeley. And the new view is more than just a reprise of the first: Instruments installed on the orbiting telescope since 1995 have enabled it to harvest far more detail this time around.

    “We should call the new results not the Deep Field South but the Southern Fields,” says Robert Williams, former director of the Space Telescope Science Institute in Baltimore and now a staff astronomer there, who devoted much of his “director's discretionary time” to the northern and southern deep fields. “This time we obtained three separate images, and comparisons among them will yield significant new results” about how galaxies formed and evolved.

    One of the southern images was made with the same camera system used in 1995. Equipped with color filters, it recorded the galaxies' colors, which hold clues to their distances. The reddest galaxies, their light “redshifted” to longer wavelengths by the expansion of the universe, are likely to be the most distant. A second field, slightly offset from the first because it was made with a different instrument, the NICMOS infrared camera, may have captured even more distant galaxies, their light stretched all the way into the infrared. And a third field, recorded with the Space Telescope Imaging Spectrograph (STIS), broke light from the early universe into spectra that may yield new details about galaxy formation.

    The HDF and the Southern Fields both record cosmic history, because they offer not a snapshot but a palimpsest of cosmic epochs, seen one behind another out to the most distant galaxy. Already, astronomers studying the HDF have traced how galaxy shapes and numbers change over time. “Look at the [most distant] galaxies: There's not a normal-looking one among them” in comparison with nearby galaxies, seen after 12 billion to 14 billion years of cosmic evolution, says Harry Ferguson, an associate astronomer at the Space Telescope Science Institute.

    The STIS image in the new Southern Fields could flesh out this picture by showing how clouds of intergalactic gas fed galaxy formation long ago. Williams and his associates hit on the idea of studying gas clouds by centering the STIS image on a quasar—a young galaxy with a brilliant beacon at its center—about 10 billion light-years from Earth, located on the sky about 0.1 degree from the basic deep-field image. As it observed the quasar's spectrum, STIS recorded dips in the amount of light produced by absorbing clouds of gas that lie along the line of sight. The redshifts of these absorption lines enable astronomers to map the distribution of intergalactic gas all the way out to the quasar. Although the lines of sight to the quasar and the southern deep field are not identical, they are close enough for astronomers to assume that the distribution of intergalactic matter is similar.

    Astronomers have long sought to explain how galaxies formed from such clouds of gas when the universe was only a few billion years old. Once observers measure the exact redshifts of the Southern Fields galaxies from ground-based telescopes in Chile, “we'll be looking for correlations between the [galaxies' and clouds'] redshifts,” says Williams. “This is going to provide an extremely important way to test our ideas of how the intergalactic medium turned into galaxies.”


    High Court to Review Standard for Appeal

    1. David Malakoff

    How expert is the patent office? In a surprising move, the U.S. Supreme Court has agreed to rule on a tug-of-war over patent law that is being watched closely by computer and biomedical inventors and investors. Its decision, expected sometime next year, could limit the ability of inventors to appeal if the government rejects their patent application.


    The case, Lehman v. Zurko, pits the U.S. Patent and Trademark Office (PTO) against a special federal court that hears appeals from inventors who have had their applications denied. PTO officials believe that judges for the U.S. Court of Appeals for the Federal Circuit, which hears cases ranging from patent challenges to government contract and employment disputes, have too much leeway to second-guess the government's rejections, which are often based on highly technical grounds. They would like the judges to show more respect for decisions reached by the PTO's patent examiners, many of whom hold advanced science and engineering degrees. “It's ironic that the court does not grant deference to an agency that has 400 Ph.D. scientists,” says PTO Commissioner Bruce Lehman.

    Lehman wants the appeals court to tell his agency to reconsider a patent rejection only if it finds the PTO acted in an “arbitrary and capricious” manner. Currently, the appellate judges can order a reconsideration if the agency was, in the court's opinion, “clearly in error” in interpreting the facts in the case.

    The patent office argues that it deserves the less intrusive standard under a 1946 law, the Administrative Procedure Act (APA), which was designed to impose uniform judicial review standards on all federal agencies. But the 11-judge appeals panel, which includes several members with scientific training, has rebuffed the patent office's efforts to rein in its oversight powers. Its position is backed by many patent attorneys and business executives, who say that changing the rules could disrupt the patent appeals process and discourage research investments. The PTO hasn't “presented a compelling reason for turning a consistent system of appeals on its head,” charges the Biotechnology Industry Organization in Washington, D.C., which represents about 750 companies and research institutions and has lined up with the appeals court.

    The controversy stems from a 1990 patent application for a software program from computer scientist Mary Ellen Zurko, now with Iris Associates in Westford, Massachusetts, and eight colleagues then working for the Digital Equipment Corporation (DEC). The software is intended to protect transactions between secured and unsecured computer networks. In 1994 one of the agency's 2500 examiners decided that the code was too “obviously” a variation on earlier inventions to merit legal protection. In 1995 the PTO's internal Board of Appeals upheld the denial, which DEC then took to federal court.

    Two years later a three-judge panel found that the factual basis for the denial was “clearly in error” and ordered the agency to reconsider the application. In a footnote to its decision, however, the court invited the PTO to request a rehearing of the case before the full appellate panel in hope of settling the standard-of-review conflict. Last May the full 11-member panel unanimously upheld the initial ruling, finding that Congress never intended the APA to limit the court's oversight authority. “Courts do not set aside long-standing practices absent a substantial reason,” it concluded, noting that adopting a more deferential standard might make the PTO's patent denials “virtually unreviewable.”

    Such a unanimous decision normally dooms an appeal to the Supreme Court. But earlier this month the justices accepted the PTO's plea for one more hearing on the matter. The petition complained that the appeals court had “aggrandized” its role in the patent process. It also implied that the judges don't have the technical savvy to review many patent decisions. “There was not a single judge on the [panel] who had technical expertise in the field involved” in the Zurko case, notes Nancy Linck, until recently the PTO's top attorney and now an executive at Guilford Pharmaceuticals in Baltimore, Maryland.

    Such arguments are “interesting but irrelevant,” says Ernest Gellhorn, who will present oral arguments this winter for Zurko and Compaq, the Houston-based company that recently purchased DEC. The key issue, says Gellhorn, a law professor at George Mason University in Fairfax, Virginia, is whether the APA allows the judges to go beyond the law's “arbitrary and capricious” standard in reviewing patent decisions. In his view, it does. Attorneys familiar with the case expect Antonin Scalia and Stephen Breyer, who have written extensively on the APA, to be influential in the decision.

    Any ruling that changes the appeals process is likely to affect just a handful of cases directly. Although patent examiners reject over half of the more than 200,000 patent applications submitted each year, fewer than 100 denials end up in the appeals court. Still, patent attorneys say, those few cases can have a disproportionate influence on patent law. That's why, says the biotechnology association, inventors and investors have taken “a special interest in this issue.”


    Program Luring Foreign Talent Gets a Boost

    1. Richard Stone

    Nizhny Novgorod, Russia. For half a century this city was shrouded in mystery, closed to foreigners for much of the Cold War. But in a sign of how life has changed since the Soviet Union broke up, Nizhny Novgorod is now spearheading an effort to lure foreign scientists to Russia for short stints of teaching or research. The idea is that distinguished colleagues from abroad will fire up students—and, perhaps, preserve high-level science in the fraying former superpower. The program “helps students feel they aren't doing provincial science,” says Alexander Litvak, dean of the University of Nizhny Novgorod's (UNN's) Advanced School for General and Applied Physics.

    Centers of gravity. INCAS pulls in top guns.


    In a tribute to how highly regarded the 4-year-old program—called the International Center for Advanced Studies (INCAS)—has become, it is now being replicated in two other regions. Earlier this month, the Open Society Institute (OSI), a foundation set up by financier George Soros to support reform in Eastern Europe, approved plans to fund INCAS centers in Saratov and Moscow, in partnership with local authorities. “INCAS is one of the little success stories that flies in the face of conventional wisdom that all is terrible in Russia now,” says Neal Abraham, vice president for academic affairs at DePauw University in Greencastle, Indiana.

    The idea for INCAS was born in late 1994, when Mikhail Rabinovich, a physicist at the Institute of Applied Physics (IAP) in Nizhny Novgorod and the University of California, San Diego, concluded that budding Russian researchers needed exposure to “aggressive, active scientists.” Rabinovich first successfully pitched the idea to his former student, Boris Nemtsov, then the governor of Nizhny Novgorod region. Next he wooed Valery Soyfer, director of a Soros program that gives stipends to Russian educators and students. Soyfer praised INCAS to Soros, who told OSI to take a look. INCAS's merits and its shoestring budget—about $100,000 a year—convinced OSI to share costs with the regional government, now headed by Ivan Sklyarov, and Russia's science ministry.

    Since then Rabinovich, IAP Director Andrei Gaponov-Grekhov, and dedicated volunteers have run annual grant competitions open to all institutes and universities in Nizhny Novgorod region. INCAS so far has awarded some 60 grants for local labs to host foreign scientists, who do everything from studying the optical properties of fullerenes to giving lectures on herbivorous insects. The typical research grant averages $4500, lasts less than 6 months, and pays expenses of visiting scientists, as well as stipends for a few young Russian students and researchers.

    Foreign researchers say the visits produce results. Patrick Weidman, a physicist at the University of Colorado, Boulder, says his 3-month stint in 1996 for research on soap films yielded two peer-reviewed papers. But he also found that experimenting in Nizhny Novgorod can be a challenge: Supplies are scarce, and he had a hard time making devices. “The machine shop had no schedule,” he says. “It just depended on whether the poorly paid machinists wanted to show up for work or not.” Nevertheless, Weidman gives the program high marks and plans to return next summer.

    Some visits have forged lasting ties. For instance, Princeton University and UNN have set up a joint graduate program in plasma physics. That way Princeton “won't just take away the best and brightest young people, but will use those young people as bridges” for future collaborations, says Princeton's Nathaniel Fisch. INCAS has also set up exchanges with the Catalan Society of Chemistry in Spain and the University of Bremen in Germany.

    Building on its success, the program last year began seeking partners in other regions. It selected Saratov and Moscow, which along with Nizhny Novgorod will receive INCAS support for 2 years. INCAS-S, as it's called, will be run out of Saratov State University and be slanted toward the region's world-class research on nonlinear dynamics. INCAS-M will be headquartered at the Kurchatov Institute, the well-known nuclear physics center. These two regions beat out others interested in hosting INCAS centers because they anted up matching funds. Establishing a center in Moscow seemed to run counter to INCAS's philosophy of supporting science in the struggling provinces, but Kurchatov director Evgeny Velikhov persuaded his friend Rabinovich that Moscow was having as hard a time as anywhere else. “Usually we are trying to find partners in the provinces,” admits Vyacheslav Bakhmin, director of OSI-Russia's culture division. “But sometimes exceptions take place.”

    INCAS officials say it's too soon to tell whether the program is convincing young scientists to make a go of it in Russia. But their effort is being applauded. “Without input and close interactions involving scientists from other countries, the once-powerful scientific activities within Russia may come to ruin,” says John Eaton of Baylor College of Medicine in Houston. “It is precisely programs such as those at INCAS which hold promise for reversing the downward spiral in the quality and quantity of Russian science.”


    Outsmarting HIV Drug Resistance

    1. Michael Balter

    For many HIV-infected people, a cocktail of antiviral drugs is all that stands between them and the immune system collapse that characterizes full-blown AIDS. And sooner or later, this defense falters. Many of the drugs act by interfering with a key HIV enzyme called reverse transcriptase (RT), and eventually the replicating virus mutates into strains whose RT is resistant to the drugs, forcing patients to move on to new drugs or drug combinations. Now a team from Harvard University has obtained an atomic portrait of the enzyme that should give new clues to how the virus foils existing drugs, along with targets for new drugs that might be harder to thwart.

    On page 1669, chemical biologists Gregory Verdine and Huifang Huang and structural biologists Stephen Harrison and Rajiv Chopra present the x-ray crystal structure of RT in a complex with the molecules with which it normally interacts in the HIV life cycle. They include one that binds to the same site as many of the existing anti-HIV drugs that work by inhibiting RT, including AZT and 3TC. As a result, says virologist Douglas Richman of the University of California, San Diego, “we can now visualize how the enzyme interacts with [RT inhibitors], which provides new insights into the mechanisms of resistance.” Adds virologist Jaap Goudsmit of the University of Amsterdam, “This is a good paper, and it's very helpful.”

    Successfully blocking RT is critical to antiviral strategies because the enzyme catalyzes a vital early step in HIV infection: the copying of the virus's RNA genome into DNA, which is then integrated into the host cell's chromosomes. To do this, RT first copies its RNA strand into a DNA strand and then, using the DNA strand as a template, recopies it to make a DNA-DNA double helix. Although other groups have made x-ray structures of RT, no one had ever captured it as it acts on its natural substrates.

    The Harvard team began by trying to crystallize a three-part complex made up of the RT protein; a DNA template, to which a short additional piece of DNA “primer” was bound; and a deoxynucleoside triphosphate (dNTP), a precursor building-block molecule that is repeatedly added onto the end of the primer to make the second DNA strand. Many RT inhibitors work by taking the place of dNTP and acting as DNA “chain terminators,” gumming up RT function by bonding to the end of the growing DNA chain and barring the addition of new dNTPs.

    Several groups, including Harrison's, had tried to crystallize this ungainly molecular complex for many years without success, apparently because the DNA primer-template combination associates only loosely with the RT protein. Harrison then asked Verdine's lab to help out, and Huang, after engaging in what Verdine calls “chemical biology heroics,” succeeded in tethering the primer-template to RT with a disulfide chemical bond. The resulting complex was stable and uniform enough to form crystals, which the team took to two U.S.-based synchrotron sources to determine the structure of the enzyme, with all its substrates in place, to the detailed resolution of 3.2 angstroms. “This puts them all together and adds a critical piece to the puzzle [of resistance to RT inhibitors],” says biochemist Bradley Preston of the University of Utah, Salt Lake City.

    This three-dimensional view of RT in action, combined with earlier studies of the location of drug-resistant mutations along its polypeptide chain, is already yielding new information about how RT foils the inhibitors. For example, those mutations already known to confer resistance directly to the drugs are all clustered around the dNTP site, which the inhibitor occupies when it terminates DNA chain growth. The authors propose that these mutations interfere with the drug's ability to attach to the DNA, either by making it harder for the inhibitor to get into the right position or by reducing its stability or reactivity once it is bound.

    According to the Harvard team, the new structure also points to at least one possible target for new RT inhibitors: a small “pocket” in the enzyme near one portion of the dNTP site. Researchers say that drug companies are unlikely to wait long before following up these and other hints provided by the RT structure. “This opens a path to structure-based drug design,” says Preston. “It really wasn't feasible before.”


    Experiment Stopped After Safety Concerns

    1. David Malakoff

    Nuclear physicists will have to wait a bit longer for long-sought data on the structure of the neutron. In a decision that has stunned members of an international research team, the United States' flagship nuclear science center, the Thomas Jefferson National Accelerator Facility in Newport News, Virginia, has pulled the plug on a major experiment to chart the distribution of the charged particles—quarks—that make up the neutron. The cancellation, announced 12 November, came after an accident last month that heightened tensions between visiting researchers and managers of the 2-year-old facility. The decision, which reflects a growing attention to safety, “has left grown men crying,” says team spokesperson Donal Day, a physicist at the University of Virginia, Charlottesville.

    Day is one of several dozen researchers working on the $2 million GEN experiment, seen as a key to proving decades-old theories about how the neutron—the neutrally charged particle in an atom's nucleus—is put together. It involves smashing a beam of electrons accelerated through a kilometer-long circular tunnel against a barrel-shaped target containing supercooled ammonia. By monitoring the collisions, researchers hoped to tease apart the configuration of the neutron's quarks. Indeed, conducting the experiment was one of the main reasons the Department of Energy (DOE) built the $600 million, state-of-the-art lab. “It wasn't the only experiment scheduled to address the question, but it was extremely important,” says Don Geesaman, a physicist at DOE's Argonne National Lab in Illinois and head of the Jefferson lab's user committee.

    However, that experiment had been plagued by delays since it began earlier this year. And its undoing came on the morning of 7 October, after a surveying team entered the experimental hall to make sure that the electron beam was correctly aligned before the next run. A powerful magnet that is part of the particle-scattering target had been accidentally left on, and its force pulled a surveyor's metal tripod through a thin aluminum window into the target. The collision caused an explosive release of the supercooled helium and damaged the sensitive machine. Although nobody was hurt, and the target was repaired, the mishap prompted an investigation into safety practices.

    That investigation—and the researchers' reaction to the findings—prompted Jefferson Lab Associate Director Lawrence Cardman to call off the experiment. In a 12 November memo widely distributed to lab users, Cardman wrote that he had uncovered a potentially dangerous design flaw in the target's helium release valve, as well as numerous violations of safety procedures. But he was most troubled by signs that the visiting scientists and Jefferson staff “had not developed a good working relationship.” In particular, he cited reports that a senior scientist had “ridiculed” a recent safety memo and that the researchers had been slow to submit a safety plan.

    By the time the plan was done in early November, he says, it was “too late. Experiments like this require cooperation, and they weren't taking their share of the responsibility,” although he also faulted his own staff. Another factor in his decision, he says, is that DOE has taken “a lot harder line” on safety in recent years.

    Cardman and Day say that such friction between visiting researchers and lab management is not uncommon at a large user facility and can usually be worked out given enough time. In this case, however, the researchers were up against a tight deadline: The target is scheduled to be shipped to the Stanford Linear Accelerator Center in Palo Alto, California, at the end of the year for experiments there aimed at understanding how quarks behave. “We were running out of time,” says Day, who notes that his team had safely operated the target in the past.

    It is not clear when Day's team will get a chance to complete its work. The target is scheduled to return to Virginia late next year, but the lab's experimental schedule is full. On the bright side, Cardman believes Day's group should be able to demonstrate that it can operate safely and says he doesn't hold a grudge: “We haven't banned them for life.”


    Cuba's Billion-Dollar Biotech Gamble

    1. Jocelyn Kaiser

    Fidel Castro has staked the lion's share of his country's science resources on biomedicine; surprisingly, Cuba's foray into this high-risk capitalistic arena appears to be paying off

    Havana, Cuba. On a balmy fall afternoon in this city of crumbling Spanish piazzas, a power outage has sent office workers out the door early, to climb on bicycles or thumb rides on streets tangled with beaten-up Soviet Ladas and 1950s Chevrolets. Hunkered down inside the Center for Genetic Engineering and Biotechnology (CIGB), however, researchers work with the buzz of a beehive, their incubators and centrifuges humming along thanks to the center's emergency power plant. Many of the 700 researchers at this sprawling complex on Havana's western edge won't get home until late in the evening, wiped out from another long day.

    Although Cuba remains an impoverished nation hobbled by ill-conceived central planning and the U.S. economic embargo, the country has begun to attract attention for a surprising reason: its huge investment in biotechnology. President Fidel Castro has poured over $1 billion in the past 8 years into a palm tree-lined biomedical campus that's like a hybrid of the U.S. National Institutes of Health (NIH) and a generic drug company. He is staking much of his nation's science resources on a roller-coaster industry dominated in the United States, at least, by venture capitalists. The communist leader's improbable devotion to biotech has raised eyebrows among industry and academic experts. “It's a very risky investment for the Cuban government,” says Allan Bernstein, a molecular biologist at the Samuel Lunenfeld Research Institute in Toronto, Canada.

    But, perhaps equally improbably, this grand capitalistic experiment shows tantalizing hints of succeeding. Several thousand scientists at Havana's biomedical campus and at satellite centers have already developed a couple of dozen products, including monoclonal antibodies, streptokinase—a drug used to break up blood clots—and the world's only available vaccine against meningitis B (see table). Under development are cancer vaccines and other compounds that would be considered cutting-edge in U.S. labs.

    “It's a very impressive place, very vital,” says NIH director Harold Varmus, who visited CIGB in 1993. “They've got an excellent pool of scientists, the best in Latin America, no question,” adds James Larrick, president of the Palo Alto Institute for Molecular Medicine in California. And the clean rooms, fermenters, and purification lines in Cuba's drug factories are top-notch, Larrick says: “some of the best in the world outside of the United States and Britain.”


    But Cuba's fledgling industry faces major obstacles to competing with its rivals in developed countries. The 38-year-long U.S. embargo has isolated researchers from colleagues and pharmacy shelves in the United States (see sidebar), and Cuban biomedical institutes are only haltingly gaining the acumen needed to market products. Cuban biotech officials admit they have a long way to go to secure a place in the world market. “This is a new industry in Cuba,” says CIGB director Manuel Limonta. “In many places in the world, biotechnology companies don't even have revenues.”

    Cuban scientists don't blink when asked the inevitable question: How does the entrepreneurial spirit that drives biotech elsewhere survive—even flourish—in a communist dictatorship? The answer: They are driven by a desire to help their community, not to make money, they say. “It is the dream of any scientist” to develop a new drug or transgenic crop, says Cristina Mateo of the Center of Molecular Immunology (CIM). But the bottom line is still a consideration, Limonta says. “We have the idea of doing business the way it is done by any biotechnology company in the world.”

    Building a human machine

    Behind the biotech boom is Fidel Castro himself, who since seizing power in 1959 has made public health a priority—Cuba's infant mortality rate is the lowest in the developing world. “Fidel is like our principal scientist. He has encouraged and pushed all research in the biotechnology field,” says René Robaina of the Center for Immunoassays.

    What sparked Castro's interest was a visit to Cuba in 1980 by R. Lee Clark, former president of the M. D. Anderson Cancer Center in Houston, who talked up the wonders of interferon, then seen as a possible cure for cancer. Castro dispatched six scientists to the lab of Finnish virologist Kari Cantell, who had developed a method for making interferon from white blood cells. Back in Cuba, working in a house converted into a lab, “in less than 2 months we had interferon produced by our own hands,” says Limonta, who headed the group. A dengue epidemic hit the country in 1981, and Cuban doctors found that the homegrown interferon deterred a complication of the disease, internal bleeding. “The government could see that this kind of work did give some return to society very quickly,” says Pedro López-Saura, CIGB's clinical trials director.

    And that was just for starters. Limonta's group began making recombinant interferon and churning out antibodies to it. (The drug never panned out as a cancer cure-all, but it has found use against other diseases.) Despite losing a bid for a new United Nations biotech center, Cuba in 1986 invested $120 million to build CIGB and launched several institutes nearby. The labs began training a cadre of scientists, most of whom won the coveted privilege of working abroad for a year or two.

    The fledgling campus was under immense pressure from Castro to catch up with the rest of the world. CIGB staff grew accustomed to mandatory 14-hour days and a top-down agenda; one émigré who left CIGB in 1993 calls the conditions there at the time “slavery.” Even today, some young scientists eager to be chained to the bench wind up “frustrated” in quality control, says one U.S. scientist. López-Saura defends the system. “We are a small country and a poor country.” Young people, he says, “know they are not coming to science to win a Nobel Prize.”

    At first the campus focused on products for use in Cuba, cranking out preparations tested elsewhere. But after the Soviet Union collapsed in 1991, depriving the country of billions of dollars a year in subsidies from its patron, Cuba began peddling its wares abroad, especially in Latin America. These products now include everything from a hepatitis B vaccine to immunoassays that require one-tenth as much reagent as standard plates, putting blood tests for neural tube defects, AIDS, and other conditions within reach of dozens of developing countries. Cuba rakes in about $100 million a year from such products—a drop in the bucket, perhaps, to most any U.S. firm with a drug on the market. Nevertheless, says CIGB immunologist Jorge Gavilondo, the sales prove that “we have grown from a scientific institute to a biotech company.”

    Like any biotech company, CIGB and other Havana institutes pride themselves on their pipeline. Basic research is a growing part of CIGB's portfolio, says Limonta. CIGB and other institutes are working on vaccines against hepatitis C, dengue, and cholera, among other diseases. CIM, meanwhile, has pioneered cancer vaccines that trigger an autoimmune response to epidermal growth factor receptors, which are overexpressed in certain tumors, and to gangliosides found on tumor cell membranes. Nicholas Restifo of the U.S. National Cancer Institute calls their ideas “really fresh and interesting.”

    Not everything the Cubans have touched has turned to gold. A much-touted initiative to develop an AIDS vaccine has faced the same stumbling blocks that bedevil similar efforts in other countries. Two years ago, CIGB gave 24 volunteers a cocktail of GP120 HIV coat proteins, which did trigger an immune reaction. The institute plans to carry out further trials next year. But most groups outside Cuba are now combining GP120 with other strategies that prime the body's cell-mediated immune response. “There's more optimism” about this approach “than there is about just using GP120,” says Susan Zolla-Pazner, an AIDS researcher at New York University.

    Still, observers say, the advances clearly are outpacing the setbacks—an amazing feat considering that the country, aside from its booming tourism industry, endures rationed food and gas, a dearth of basic medicines like aspirin, and minuscule wages. (Scientists at the biotech centers get perks like subsidized meals and rides on company buses, but they earn only about $20 a month.) And, in spite of its privileged status, the biotech effort finds itself chronically short of funds in the wake of Cuba's economic crash a few years ago. Researchers scrounge for supplies and rely on foreign collaborators for access to pricey techniques such as x-ray crystallography. “They can't just experiment and waste reagents,” says Eva Harris, a molecular biologist at the University of California, Berkeley, who collaborates with researchers in Cuba.

    Despite these problems, Cuba's biotech researchers enjoy plum conditions compared to scientists in other fields. “The decision obviously has been made to pool resources into this one area to the neglect of other areas,” says U.S. National Science Foundation director Rita Colwell, who visited Cuba last year as part of a mission led by the AAAS, which publishes Science. “The emphasis is put on biotechnology because this is the most promising and able to bring back economic support to the whole system,” explains Ishmael Clark, president of the Cuban Academy of Sciences. Rank-and-file scientists seem to share this outlook. “It's a very strategic area. We don't complain,” says University of Havana physicist Ernesto Estévez.

    After the revolution?

    For scientists who believe in Cuba's biotech dream and are determined to remain in the country, the elusive goal is breaking into markets in developed nations. But Cuba faces many obstacles, including the high costs of getting approval to sell products in such countries. Cuba is “weak” in quality-control standards and marketing skills and has only recently begun applying for patents, notes Mikael Jondal of the Karolinska Institute in Sweden, who 2 years ago served on a European fact-finding mission to Cuba. Cuban leaders respond that their labs now adhere to international standards for quality control and clinical trials.

    A deal that Cuba inked with a Canadian company in 1994 offers reason for optimism but also shows the challenges the country faces. The firm, York Medical Inc., is acting as a partner to get Cuban products through Canada's regulatory hoops and then license them to drug companies. But of the five products it tapped as most promising, one—streptokinase—has lost major ground to another drug, TPA, says CEO David Allan; and a method for selecting the best antibiotic for a patient lost its allure when a U.S. company upgraded its system. Allan's firm is pinning its hopes on four cancer antibody products, now in clinical trials in Canada, as well as a combined antifungal and antibiotic.

    The central concern of many observers is how long Cuba can maintain the seeming paradox of engaging in a high-risk, profit-driven industry in a state-controlled economy. “I don't think they're going to be able to do really cutting-edge biotech in a top-down world,” Restifo says. “There would be hundreds of small Cuban biotech companies if the country was friendly to entrepreneurial endeavors,” adds Larrick.

    But Cuban researchers are optimistic. “We will succeed to sell products in the First World,” predicts López-Saura. And many remain staunchly loyal to the system that made this biotech gamble. “I owe my career, my son's and daughter's careers, my master's degree, my Ph.D. degree in Sweden to the government,” says CIM scientist-turned-marketing executive María Pascual López. Adds her institute's director, Agustín Lage: “These are moral values and sometimes this is difficult to explain.”


    Embargo Impedes Scientific Headway

    1. Jocelyn Kaiser

    They may not always see eye to eye on politics, but scientists in Cuba and in the United States agree that a tightening of the U.S. embargo in 1992 and restrictions on travel between the countries have hindered science—making Cuba's biotech exploits all the more impressive (see main text). Says the University of Havana's Ernesto Estévez, “In science, you feel [the embargo] every day.”

    The embargo is unusual in that it even bans trade in food and restricts sales of medicines. It was tightened in 1992 by the Cuban Democracy Act, which bars business in Cuba by non-U.S. subsidiaries of U.S. companies. As a result, Cuban scientists buy most of their supplies from Europe. This adds delays and can triple costs, especially for items such as restriction enzymes or spare parts that are made only in the United States and must be purchased from a middleman.

    U.S. policy also cramps intellectual growth: Curbs on travel between the countries isolate Cuban scientists from colleagues and conferences in the United States. “It's a blockade of our knowledge,” says Pedro López-Saura, clinical trials director at the Center for Genetic Engineering and Biotechnology in Havana. “The United States is the country with the highest scientific development in the world where most meetings take place, where you get the best training.” He and his colleagues wishing to attend meetings don't always receive a visa from the U.S. State Department in time, or at all. And U.S. scientists who want to do research or attend meetings in Cuba must apply for a license to spend money there under rules imposed 4 years ago (Science, 23 September 1994, p. 1803). “There's tremendous potential, a lot of enthusiasm, and to have them marginalized from the mainstream scientific community is an error and a shame,” says National Institutes of Health director Harold Varmus.

    The embargo has also impeded at least one initiative to improve public health in the United States, so far blocking attempts by the British drug giant SmithKline Beecham to license Cuba's meningitis B vaccine. Although questions remain about whether the preparation protects young children, it is the only vaccine on the market against group B meningococcal meningitis, a disease that accounts for nearly half the 300,000 cases and 35,000 deaths from meningitis each year worldwide.

    SmithKline wants to work with Cuba's Finlay Institute to improve the formulation at the company's vaccine center in Belgium, but the facility is owned by a U.S. subsidiary. The company has applied for an exemption from the U.S. Treasury Department, pledging to compensate Cuba in part with food and medicine. Fourteen members of Congress, including Republican senators Richard Lugar and John Warner, signed a 6 October letter backing the proposal; SmithKline is hoping for a decision by the end of this year.


    Clock Photoreceptor Shared By Plants and Animals

    1. Marcia Barinaga

    Cryptochrome, a light-absorbing molecule first discovered in plants, apparently helps light to set the daily clocks of Arabidopsis, fruit flies, and mice

    You arrive in a new time zone, go to a hotel, and wake up after a long, disorienting sleep. How do you tell if it's day or night? Light is a good cue, and not just for your mind. Deep in your brain, a molecular clock that oscillates with a 24-hour rhythm, pacing your physiology, also relies on light to keep it in synch with the day-night cycle. But even as researchers have discovered many of the components of the clock mechanisms that operate in organisms ranging from bacteria to humans, a major mystery has remained: the identity of the light-capturing molecules that transmit the light signal to the clock. Now, three research teams have fingered a suspect—a light-absorbing protein called cryptochrome that may play that role in organisms ranging from plants to mammals.

    Last week, a team led by Steve Kay of The Scripps Research Institute in La Jolla, California, reported in Science that cryptochrome is a circadian photoreceptor in plants, while another paper in that same issue from Aziz Sancar's team at the University of North Carolina, Chapel Hill, suggested that it might be one in mice as well (Science, 20 November, pp. 1488 and 1490). And this week, Jeff Hall and Michael Rosbash of Brandeis University in Waltham, Massachusetts, and their colleagues report in Cell that the protein plays a similar role in fruit flies.

    The results indicate that cryptochrome is not the only molecule that relays light signals to the circadian clocks in these species. And the data from mice are controversial: Some researchers say that, rather than proving cryptochrome is a light sensor, the results suggest it could be part of the mouse clock mechanism itself.

    But just finding that cryptochrome plays a role in clocks ranging cross-kingdom from plants to mammals is “an amazing development,” says clock researcher Gregory Cahill of the University of Houston, and “the most extreme example of people finding homologies in clock-related genes across species.” Besides filling a gap in our knowledge of circadian clocks, the work could also lead to new remedies for jet lag that might mimic or enhance the process by which light resets the clock mechanism.

    That mechanism, the biological equivalent of the gears and springs in a watch, is a set of proteins whose levels rise and fall in a daily cycle. The proteins regulate their own oscillations by turning their own genes on and off; light can shift or “entrain” a clock by raising or lowering the level of a key clock protein and so influencing that feedback process. For example, in fruit flies, the clock protein Timeless (TIM) reaches high levels at night and turns its own gene off. Light causes quick destruction of TIM, allowing the tim gene to turn on and jump-starting the next daily cycle. To do that, the light must be captured by photoreceptors, which could be either in the same cell as the clock or some distance away, such as in neurons of the eye.
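    The feedback loop described above can be caricatured in a few lines of code. What follows is a toy relaxation oscillator, not any published model: the 12-hour rise and fall, the synthesis and decay rates, and the function name are all invented for illustration. TIM accumulates while its gene is on, high TIM shuts the gene off, and a light pulse destroys TIM, restarting the cycle early.

```python
def simulate(hours, light_pulse_at=None, dt=0.1):
    """Return the times (in hours) at which TIM peaks, i.e., the moments
    when high TIM levels shut the tim gene off. All rates are invented."""
    tim, gene_on, t, peaks = 0.0, True, 0.0, []
    while t < hours:
        if light_pulse_at is not None and abs(t - light_pulse_at) < dt / 2:
            tim, gene_on = 0.0, True        # light rapidly destroys TIM
        if gene_on:
            tim += 1.0 * dt                 # synthesis while the gene is on
            if tim >= 12.0:
                gene_on = False             # high TIM represses its own gene
                peaks.append(t)
        else:
            tim = max(0.0, tim - 1.0 * dt)  # decay once synthesis stops
            if tim == 0.0:
                gene_on = True              # repression lifts; cycle restarts
        t += dt
    return peaks

free_run = simulate(80)                      # peaks roughly 24 hours apart
shifted = simulate(80, light_pulse_at=18.0)  # pulse during TIM decay moves every later peak
```

    Even this cartoon reproduces the qualitative point: destroying TIM with light jump-starts the next cycle, shifting the peaks that follow.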

    Clock researchers began to suspect that cryptochrome might be such a photoreceptor shortly after Anthony Cashmore and his colleagues at the University of Pennsylvania in Philadelphia discovered it in 1992, in the plant Arabidopsis thaliana. Cashmore's team showed that the protein, which is sensitive to blue light, is important for a variety of light-based growth responses in plants, such as bending toward light.

    Findings such as those prompted Kay and his postdoc David Somers to test whether cryptochrome transmits light signals to the Arabidopsis circadian clock. To follow the plants' rhythms, they took regulatory sequences from a gene that has a 24-hour activity cycle controlled by the clock and connected them to the gene for luciferase, an enzyme that makes a chemical that glows in the dark. They then introduced the hybrid, clock-responsive gene into Arabidopsis plants. By determining when the plants glowed throughout a 24-hour period, the researchers could follow how mutations in light-sensing proteins affect the clocks' responses to various light conditions.

    Because light under different conditions such as twilight, midday, or deep shade is enriched in different wavelengths, Kay suspected that plants may need several photopigments to cover the spectrum and ensure that their clocks run properly in all light conditions. So he and Somers studied plants with mutations in two types of photopigments: cryptochromes, which specialize in blue light, and the phytochromes, which prefer red. They found that phytochrome mutants could not entrain their clocks to red light, while cryptochrome mutants failed to respond to blue, showing that cryptochromes and phytochromes both have roles in setting the clock.

    Of course, Kay points out, in natural conditions plants would never receive pure blue or pure red light, so both photopigments probably contribute to clock setting normally. But by analyzing the mutants at different wavelengths, they were able to show that both types of proteins are involved. As a result, says Houston's Cahill, “we now know more about circadian photoreception in plants than we do about most things.”

    Researchers haven't reached that level of understanding of clock-setting photoreceptors in animals, but in fruit flies, as in plants, cryptochrome seems to be one of at least two. That discovery came out of work begun by Ralf Stanewsky, a postdoc with Hall at Brandeis. He was searching for new mutations that affect the cycling of the clock protein PER. To do this, he adapted the luciferase assay developed in Kay's lab, in this case linking the luciferase gene to DNA sequences that control per gene expression. He found that one of the mutations that caused PER levels to stop cycling was in a gene that turned out to encode a fly version of cryptochrome. (Cryptochrome had been found in animals, although its function wasn't known.) Hall dubbed the gene crybaby (cryb), after a favorite Janis Joplin song.

    Because cryptochrome is a light-responsive protein in plants, Stanewsky and Hall thought the clock in the fly mutants might be unable to respond to light. That idea fit with an observation made in flies by Rosbash's team at Brandeis, that light-triggered TIM degradation is most sensitive to light in the blue range—cryptochrome's specialty. So Stanewsky looked at TIM levels in the cryb mutant to see if they dropped in response to light. They didn't. “TIM was constantly high in light-dark cycles,” Hall says. “It was not responding to light.”

    But cryptochrome couldn't be the fly's only circadian photoreceptor. Although the clocks were not cycling properly in most cells of the mutant flies, the flies' behavior followed a normal rhythm, and its timing could be reset by new light-dark cycles. It turned out that the clock was still cycling normally in the brain neurons that control behavioral rhythms.

    That meant that the neuronal clock must be getting light signals from elsewhere, and one candidate was the photopigments in the flies' eyes, which send neuronal signals to the brain. To test this, Stanewsky crossed the cryb mutants with a mutant that lacks all signaling from the visual photopigments. The double mutant turned out to be “circadian blind,” says Hall—the flies' behavioral cycles were unable to reset to light. That suggests that visual photopigments, as well as cryptochrome, can entrain the flies' behavioral clocks.

    Cryptochrome, however, may be more than just a photoreceptor. In normal flies, the clock runs fine in the dark, with the TIM levels oscillating on their usual 24-hour schedule. If the cryb mutation only blocked the light response, Hall says, TIM should still follow its natural circadian cycle as though it were in the dark. But in most cells of the flies' bodies, the cryb mutation stops TIM from cycling at all. That means, Hall says, that cryb “has to be doubly defective. … We have a strong suspicion that the CRY protein is touching the clock works, that it is interacting with the clock factors.”

    Sancar's data in mice suggest cryptochrome might be playing a double role in mammals, too. He and postdoc Yasuhide Miyamoto found cryptochrome in mice in two telling places: the suprachiasmatic nucleus, the brain area that is the seat of the clock in mammals, and in a layer of cells in the retina that is necessary for circadian light responses.

    To see what the protein might be doing there, Randy Thresher in Sancar's lab made mutant mice that lack cryptochrome 2, one of the two cryptochromes in mice, and tested the effect on the animals' circadian clocks. In mice, light activates the production of the clock protein PER. In the mutant mice, that activation was blunted but not eliminated. That suggests, Sancar says, that cryptochrome 2 is partially, but not wholly, responsible for transmitting light signals to the clock.

    Other data support that view, he says. In behavioral tests done in Joseph Takahashi's lab at Northwestern University, the activity cycles of the mutant mice—in which they run in their wheels all night and sleep by day—at first seemed normal; they could still be changed by a shift in the light-dark cycle. But on closer examination, the timing of the activity shift was much more variable in the mutants than in normal mice, again suggesting a partial role for cryptochrome in the light-induced shifts.

    What's more, mutant mice kept in the dark overreacted to light flashes. Such flashes shift the activity cycle of normal animals by about 2 hours, but in the mutant mice the cycles shifted by 8 to 12 hours. Sancar says this is consistent with what happens in some organisms when one of two light inputs to the clock is knocked out. He concludes that cryptochrome 2 is one of at least two circadian photopigments in mice, the other of which might be cryptochrome 1. His team is now making double knockout mice to check that hypothesis.

    But other clock researchers, including Russell Foster of the Imperial College of Science and Technology in London, question Sancar's conclusion. Part of their concern is due to a finding in rodents that the so-called “action spectrum,” a plot of the response of the clock to different wavelengths of light, more closely resembles the absorption spectrum of a family of photopigments called opsins than that of cryptochromes. Opsins include the photopigments the eye uses for vision, but it is clear that visual opsins alone aren't responsible for clock setting. Foster's group has created mice that are totally missing all their rods and cones, the cells in the retina responsible for vision. The animals' clocks still entrain to light, says Foster, causing him to conclude that “there has to be something else” transmitting the light signals. It could be cryptochrome, he says, but adds that Sancar's data stop short of proving that.

    To further test the cryptochrome hypothesis, Robert Lucas, a postdoc with Foster, suggests taking a cue from the fly experiments and crossing the rod- and coneless mice with cryptochrome mutants to see if knocking out both light-reception pathways blocks clock entrainment. Another test, says Houston's Cahill, would be to see whether the cryptochrome mutation alters the animals' action spectra for light entrainment of their clocks. If it did, he says, that would be good evidence that cryptochrome is a circadian photoreceptor.

    Without such conclusive results in mice, many researchers won't accept cryptochrome as a mammalian circadian photoreceptor. Indeed, says clock researcher Carla Green of the University of Virginia in Charlottesville, Sancar's results suggest more convincingly that cryptochrome “could be part of the clock itself.” That, she and others say, is the simplest explanation for overresponse to a flash of light. It could also explain the finding that the mice have altered behavioral rhythms in constant darkness, and the presence of cryptochrome in the suprachiasmatic nucleus, which governs rhythms but does not directly respond to light.

    What's more, Sancar has localized cryptochrome to the cell's nucleus, where other key clock components such as PER and TIM go to regulate their own genes, a function that is at the heart of the clock's oscillating mechanism. “If I had got this data set,” says Lucas, “I would be excited that maybe it has something to do with the machinery of the clock.”

    Indeed, what most researchers in the field find most intriguing about the new results is the suggestion that cryptochrome may have begun as a pure photoreceptor, a role it seems to maintain in plants, but during the evolution of animals may have insinuated itself into the mechanism of the clock. That would add cryptochrome to a growing list of clock proteins that evolved from photoreceptors, including a set of key clock components that are evolutionarily related to bacterial photoreceptor molecules.

    One thing is for sure, says clock researcher Michael Menaker of the University of Virginia: “All of these data suggest that cryptochrome is very important. Whether it is important only as a photoreceptor, only as part of the circadian oscillator, or both, are secondary questions.” For a protein discovered as a photoreceptor in plants to wind up involved in the mammalian circadian clock is quite an evolutionary leap, says Cahill: “We don't have that many evolutionary connections between plants and the mammalian nervous system.”


    HIV's Early Home and Inner Life

    1. Michael Balter

    Lausanne, Switzerland. Until recently, Europe could boast only one major AIDS meeting: the Cent Gardes Colloquium, held every 2 years near Paris. This autumn, HIV researchers based in Switzerland inaugurated a second series to alternate with the Cent Gardes. The first meeting,* held here in the opulent Beau-Rivage hotel, attracted 230 researchers to discuss the latest in AIDS research, from basic science to vaccine development.

    The Core of the Matter

    HIV's life cycle begins when it attacks target cells and ends when progeny viruses burst out to infect new cells. Current antiviral drugs target two enzymes involved in this cycle: reverse transcriptase, which copies HIV's RNA genome into DNA; and HIV protease, which snips viral proteins into the right sizes for assembly into mature virus particles. But some patients are resistant to these drugs or suffer side effects, and researchers are always looking for new targets to attack. A talk by biochemist Wesley Sundquist of the University of Utah, Salt Lake City, suggests that HIV's poorly understood inner core could present just such a target.

    HIV's basic structure includes an outer coat, which attaches to the membrane of a target cell, and an inner cone-shaped core, which enters the cell. This core is made up of two proteins—a large molecule called capsid and a smaller one called nucleocapsid—along with reverse transcriptase and the virus's RNA genome. The protein core appears to be a vehicle that helps transport the enzyme and the genome into the host cell. In recent years, Sundquist and his colleagues, along with other workers including Hans-Georg Krausslich's group at the Heinrich-Pette Institute in Hamburg, Germany, have shown that, under lab conditions, purified capsid spontaneously self-assembles into long, hollow tubes whose diameters roughly correspond to the varying width of HIV's conelike core.

    In new work presented in Lausanne, Sundquist reported that his team was able, for the first time, to replicate cone-shaped structures similar to HIV's core by adding nucleocapsid and RNA to the capsid proteins in just the right combinations under physiological conditions similar to those in living cells. Indeed, Sundquist showed electron micrographs demonstrating that these artificial cones bear a striking resemblance to those found in actual HIV particles. “These proteins just know how to assemble in vitro,” remarks retrovirologist Mario Stevenson of the University of Massachusetts Medical School in Worcester. And Didier Trono, a molecular virologist at the University of Geneva, comments that the work sheds new light on the mechanism of viral assembly, which is “really the black box of retroviral replication.”

    Sundquist proposed a model for how the cones might be formed from protein subunits. He showed high-resolution electron micrographs of cross sections of the hollow tubes made of pure capsid, which indicated that the tube walls are a honeycomb of hexagonal rings consisting of capsid molecules. He suggested that the addition of nucleocapsid molecules and RNA could tilt the rows of hexagons into a spiral, forcing the entire structure to narrow toward one end. As support for this model, Sundquist cited recent work by physicists Maohui Ge and Klaus Sattler of the University of Hawaii, Honolulu, who showed that fullerenes, which have a similar honeycomb structure of carbon atoms, can also be coaxed into forming conelike structures.

    Stevenson and Trono think that Sundquist's experiments could lead to an in vitro assay system to test drugs rapidly for their ability to disrupt cone formation, and Stevenson suggests that the experiments might even suggest new vaccine strategies. Although previous attempts to stimulate an anti-HIV immune response using capsid proteins have largely failed, Stevenson says that a vaccine based on the artificial cones, which resemble actual viral structures, might be more successful. At the very least, the new work opens up these kinds of possibilities. Says Trono: “Any single event in HIV's life cycle is a valid target for therapy.”

    Finding HIV's First Home

    Like most scientific fields, AIDS research has its share of dogmas. One of these concerns the kinds of immune cells in which HIV can replicate. Researchers have long assumed that T lymphocytes—the virus's primary target—must be in an active state to produce progeny HIV; that is, they must be immunologically stimulated to divide and proliferate. But because T cells are not activated against HIV in the earliest stages of the infection, many researchers have suggested that other immune cells, such as macrophages or dendritic cells—which can be infected and produce virus even when they are not dividing—are the main producers of HIV early on. T lymphocytes, according to this widely held view, become primary targets only after the immune system has begun trying to beat the virus down.

    But in one of the most debated talks in Lausanne, retrovirologist Ashley Haase of the University of Minnesota Medical School in Minneapolis presented evidence that T lymphocytes may in fact be the most important target of early infection. Even more surprising, Haase reported that unactivated T lymphocytes can produce virus, a finding that flies in the face of much current wisdom. If correct, these new results might have important implications for how HIV gains a foothold in infected people, as well as for therapeutic strategies.

    Haase and his co-workers, including research associate Zhi-Qiang Zhang, inoculated rhesus macaques with a strain of SIV, the simian version of HIV, that is capable of infecting both T lymphocytes and macrophages, and then analyzed a wide variety of tissues to see which cells were producing virus. Using molecular probes for SIV RNA, the team found that T lymphocytes made up almost all of the virus-producing cells, even in the earliest days after infection. Moreover, most of these infected cells did not show signs of activation or cell division, usually signaled by the appearance of cell surface proteins such as HLA-DR, Ki67, and cyclin A. Haase and his co-workers then went back and looked at lymphoid tissue from HIV-positive patients, where most T lymphocytes in the body are found, and discovered that they, too, harbored large numbers of unactivated but virus-producing T lymphocytes.

    The macaque results, in particular, show that “T lymphocytes and not macrophages or dendritic cells are the main targets at the very beginning of infection,” says pathologist Paul Racz of the Bernhard Nocht Institute for Tropical Medicine in Hamburg, Germany. Haase told the meeting that these quiescent cells, which produce progeny HIV at a low rate and may be more resistant to anti-HIV therapies than activated cells, could be key vectors for spreading the virus to other unactivated lymphocytes during transmission of HIV and early infection. Moreover, these cells seem to differ from previously identified “reservoirs” of HIV infection: T lymphocytes that harbor latent viral DNA in their chromosomes but produce no virus until activated (Science, 14 November 1997, p. 1227).

    If so, some researchers say, current experimental attempts to “burn out” the latently infected reservoir cells by activating them so they will be destroyed when virus progeny burst out could backfire, because the virus might infect new populations of drug-resistant quiescent cells. “This may be telling us that instead of activating, we should be trying to shut down residual replication in these cells,” says immunologist Giuseppe Pantaleo of the Vaudois Hospital Center in Lausanne.

    As intriguing as these findings are, many researchers are treating them with caution. Brigitte Autran, an immunologist at the Pitié-Salpêtrière Hospital in Paris, told Science she was not yet convinced that Haase's HIV-producing cells are fully quiescent. Autran says that some of the markers Haase used to determine their activation state, such as the appearance of HLA-DR, can lag many hours behind activation. Similar concerns are expressed by molecular virologist Didier Trono at the University of Geneva, who says that T lymphocytes may not fall into simple categories of “quiescent” and “activated” but that there might be a gradient between these two states.

    Although Haase's results need further confirmation, AIDS researchers will be following this story very closely. “This is really a major concern,” says Pantaleo, especially if “these [quiescent] cells are the ones that are not responding very well to antiviral therapy.”

    • * Colloquium of the Lémanique Center for AIDS Research, Lausanne, Switzerland, 26–28 October.


    From Solitaire, a Clue to the World of Prime Numbers

    1. Dana Mackenzie*
    1. Dana Mackenzie is a science and mathematics writer in Santa Cruz, California.

    The strange sort of randomness seen in a simple version of solitaire may hold a key to proving a hypothesis about how primes are distributed

    “I am convinced that God does not play dice,” wrote Albert Einstein in a 1926 letter to physicist Max Born. With this now-famous quote, Einstein expressed his reservations about the emerging theory of quantum mechanics, which has randomness at its very core. But recent mathematical results might suggest that Einstein simply forgot to finish his sentence: “God does not play dice—He plays solitaire.”

    Solitaire is a subtler game than dice. Although the probability of winning at various dice games can be computed easily, no one knows the theoretical odds of winning at solitaire. “One of the embarrassments of our field,” says Persi Diaconis, a probabilist at Stanford University, “is the fact that we cannot analyze the common game of solitaire.” But a simpler version of solitaire has now been cracked, Diaconis announced at an October workshop on mathematics and the media at the Mathematical Sciences Research Institute in Berkeley, California. In work that is still being refereed, Percy Deift, a mathematician at New York University, along with Jinho Baik, also of New York University, and Kurt Johansson of the Royal Institute of Technology in Stockholm, has proved that a deep similarity exists between a simple form of solitaire and a mathematical tool called random matrices, originally developed to understand the quantum behavior of large atoms.

    The implications could go well beyond card games to some of the most puzzling patterns in mathematics. Other recent work suggests that the same random matrix key might unlock the most important problem in number theory: proving the Riemann hypothesis, which describes how prime numbers are distributed among other integers.

    In the solitaire game that Deift and colleagues solved, the deck is shuffled and the player turns over the cards one at a time, placing each one on top of any higher-ranking card that is already exposed. Sometimes there is only one possibility; sometimes the player has to choose among several piles. If no higher card is showing, he places the card in a new pile. The object of the game is to make as few piles as possible, and the group tackled the puzzle of just how many piles a perfect player can expect to make—a number that will depend only on the random order of the cards in the deck.
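    This game is known to combinatorialists as patience sorting, and the greedy strategy of always playing each card on the leftmost pile whose exposed top outranks it is optimal. A minimal sketch (the function name and the integer encoding of card ranks are just for illustration):

```python
import bisect
import random

def count_piles(deck):
    """Play greedy patience solitaire and return the number of piles.
    The pile tops stay in increasing order left to right, so a binary
    search finds the leftmost pile whose exposed card outranks the deal."""
    tops = []
    for card in deck:
        i = bisect.bisect_left(tops, card)
        if i == len(tops):
            tops.append(card)   # no higher card showing: start a new pile
        else:
            tops[i] = card      # play the card on that pile
    return len(tops)

deck = list(range(52))
random.Random(0).shuffle(deck)
piles = count_piles(deck)       # for an n-card deck, typically near 2*sqrt(n)
```

    The final pile count under perfect play equals the length of the longest increasing subsequence of the shuffled deck, which is exactly the quantity whose distribution the group analyzed.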

    Spaced out. Sums of dice in many rolls form a bell-curve distribution (right); eigenvalues of random matrices collect into piles whose even spacing controls the number of piles in a solitaire game (left).


    Mathematicians answer this sort of question with a probability distribution—a function that represents the likelihood of each possible outcome. In dice, the frequency with which you will get particular sums of spots in a large number of rolls forms a Gaussian distribution, or bell curve. But Deift has proved that solitaire is not like dice. In fact, the solitaire game has a probability distribution that Diaconis says is “so esoteric that even mathematicians roll their eyes at it.” More precisely, it's the distribution of the largest eigenvalue of a certain class of random matrices, which are a mathematical tool familiar to quantum physicists.
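    The dice half of that contrast is easy to check numerically. A quick Monte Carlo sketch (the function name, seed, and sample sizes are arbitrary): the sum of n fair dice has mean 3.5n and variance 35n/12, and its histogram settles into the familiar bell curve.

```python
import random

def dice_sum_stats(n_dice, trials, seed=0):
    """Roll n_dice fair dice `trials` times; return the sample mean and
    variance of the sums, which approach 3.5*n and 35*n/12."""
    rng = random.Random(seed)
    sums = [sum(rng.randint(1, 6) for _ in range(n_dice))
            for _ in range(trials)]
    mean = sum(sums) / trials
    var = sum((s - mean) ** 2 for s in sums) / trials
    return mean, var

mean, var = dice_sum_stats(n_dice=10, trials=20000)
# theory: mean = 35.0, variance = 10 * 35 / 12, about 29.17
```

    The solitaire pile count converges to no such bell curve; its limit is the largest-eigenvalue distribution described above.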

    A matrix is nothing more than a square table of numbers. Each entry in the table might, for example, show the probability that a photon of wavelength i will emerge from an atom when it absorbs a photon at wavelength j. Often, matrices can be resolved into a “spectrum” of numbers, called characteristic values or eigenvalues—and indeed physicists calculate the spectra of simple atoms from matrices like these. In the physical example, the eigenvalues correspond, roughly speaking, to excited states that the atom “likes” to be in.

    For large atoms, such calculations are hopelessly difficult. But by choosing the matrix at random from a family that has certain symmetry properties, physicists can reproduce the distribution of spectral lines statistically, even if the lines do not exactly match those of the true atom. The approach, first proposed by the Nobel Prize-winning physicist Eugene Wigner, “was an immensely revolutionary thought,” says Deift. “It says there is no mechanism—or that the mechanism is irrelevant. The only thing that matters is the symmetry of the matrices and the probability distribution.”

    Random matrices, naturally enough, have random eigenvalues. But theirs is a very peculiar sort of randomness: The eigenvalues seem to push each other away, as if they were electrically charged atoms in a long tube. Thus they end up spaced at fairly regular intervals on a number line, in a curious limbo between complete regularity and complete randomness. Deift and colleagues have confirmed that the same kind of randomness governs the number of piles in the solitaire game.
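    That repulsion shows up already in the smallest possible case. For a 2x2 symmetric matrix [[a, b], [b, c]], the two eigenvalues differ by sqrt((a - c)^2 + 4b^2), which is small only when both (a - c) and b are near zero at once, so tiny gaps are doubly suppressed. A Monte Carlo sketch (function name, cutoff, and sample sizes all invented for illustration) comparing those eigenvalue gaps with gaps between independent random numbers:

```python
import math
import random

def small_gap_fraction(trials=50000, cutoff=0.1, seed=1):
    """Fraction of gaps smaller than `cutoff` times the mean gap, for
    (a) eigenvalue gaps of random 2x2 symmetric Gaussian matrices and
    (b) gaps between pairs of independent Gaussian numbers."""
    rng = random.Random(seed)
    matrix_gaps, indep_gaps = [], []
    for _ in range(trials):
        a, b, c = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        matrix_gaps.append(math.sqrt((a - c) ** 2 + 4 * b ** 2))
        indep_gaps.append(abs(rng.gauss(0, 1) - rng.gauss(0, 1)))

    def frac_small(gaps):
        mean_gap = sum(gaps) / len(gaps)
        return sum(g < cutoff * mean_gap for g in gaps) / len(gaps)

    return frac_small(matrix_gaps), frac_small(indep_gaps)

mat, ind = small_gap_fraction()
# eigenvalue gaps shun zero; independent gaps show no such aversion
```

    Run with these settings, far fewer than a tenth as many matrix gaps as independent gaps fall below the small-gap cutoff, the signature of the "pushing away" described above.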

    Number theorists, who ordinarily study prime numbers rather than card games, are excited by the solitaire work because the same spacing law seems inherent in their most famous unsolved problem—the Riemann ζ function. The ζ function is part of a remarkable formula, discovered by the German mathematician Bernhard Riemann in 1859, that precisely describes how prime numbers are distributed among the other integers. According to Riemann's formula, the density of primes decreases gradually, with a lot of small fluctuations, as the numbers get larger. The size and wavelength of the fluctuations are controlled by the “zeros of the ζ function”: in other words, by the numbers x and y that solve the equation ζ(x + y√-1) = 0. Riemann believed, but couldn't prove, that in every solution, x (which controls the size of the fluctuations) equals ½.
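    The gradual thinning that Riemann's formula describes so precisely can be glimpsed with a simple sieve: the count of primes up to x tracks x/ln(x), a much cruder approximation than Riemann's. A sketch (the function name and the limits are chosen arbitrarily):

```python
import math

def prime_count(limit):
    """Count the primes up to `limit` with the sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, limit + 1, p):
                sieve[multiple] = False
    return sum(sieve)

for x in (100, 10_000, 1_000_000):
    print(x, prime_count(x), round(x / math.log(x)))
# primes thin out: 25 of the first 100 integers are prime,
# but only 78,498 of the first million
```

    The zeros of the ζ function govern exactly how the true count fluctuates around such smooth approximations.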

    Investigators have used computers to crank out millions of solutions, and so far all of them have queued up on the critical line x = ½. But no one has been able to prove that all the unknown ones fall on that line as well, which would make it possible to predict the full distribution of primes. “The Riemann ζ function is a leftover from the last century,” says Peter Sarnak, a number theorist at Princeton University. “It is the last elementary function we don't understand.”

    Computer calculations by Andrew Odlyzko of AT&T Labs Research in Florham Park, New Jersey, have shown a suggestive pattern, however: The y values in Riemann's equation satisfy exactly the same spacing law that eigenvalues of random matrices do. This suggests that the combinations x + y√-1 are, in fact, eigenvalues of some random matrix. At the October workshop, Sarnak suggested a way to exploit this connection. According to Sarnak, the ζ function is only one of a “zoo” of related functions, called L-functions. He and his Princeton colleague Nicholas Katz were able to match one of the tamer sets of L-functions in this zoo with a family of random matrices, whose eigenvalues are known to lie on the critical line. (Their work is set to appear later this year as a book published by the American Mathematical Society.) If this process could be repeated for the set of L-functions that includes the Riemann ζ function—a big “if”—then the Riemann hypothesis would follow. Sarnak and other number theorists think the methods developed by Deift and colleagues might hold clues to how this could be done.

    These hints that random matrices may hold the key to proving the Riemann hypothesis are adding to what Sarnak describes as a sense of “euphoria” among number theorists these days, which began with Andrew Wiles's proof of Fermat's Last Theorem. “You have the feeling that, if he can do that, then we can do this problem!” Sarnak says.
