News this Week

Science  22 Aug 1997:
Vol. 277, Issue 5329, pp. 1028

    A Bitter Battle Over Insulin Gene

    Eliot Marshall


    The University of California (UC) is learning the hard way that it can be dangerous to charge into court to champion a faculty member's invention. A bruising 7-year patent battle between the university and Eli Lilly and Company resulted last month in a multimillion-dollar setback for the university. Along the way in this high-stakes legal brawl, the scientific integrity of prominent researchers who worked at UC, San Francisco (UCSF) in the 1970s took a battering from Lilly's lawyers, a Nobel Prize-winning scientist advising the company, and a federal judge.

    This vicious fight centers on a landmark discovery by UCSF biologists at the dawn of the biotechnology era: the first successful cloning of the rat insulin gene, reported in Science 20 years ago (17 June 1977, p. 1313). When Lilly—the nation's biggest insulin maker—refused to honor UC's patents on this and other insulin discoveries, the university sued in 1990. Had UC persuaded or forced Lilly to pay royalties, it might have tapped into an insulin business worth, by Lilly's reckoning, “hundreds of millions of dollars.” But last month, the U.S. Court of Appeals for the Federal Circuit ruled that the company had not violated UC's patents. The ruling, written by Judge Alan Lourie, upheld key parts of a decision by U.S. District Court Judge S. Hugh Dillin, who had ruled in Lilly's favor in December 1995. The bottom line for the university: Unless it files and wins another appeal, it has ended up with no royalties and millions of dollars worth of legal bills (see sidebar).

    For the former UC researchers in the middle of this case—especially team leaders William Rutter, now chair of Chiron Corp. of Emeryville, California, and Howard Goodman, now at Massachusetts General Hospital in Boston—the appeals court's decision did provide some solace. It set aside a key part of Dillin's finding: that UC had won its patents in part through “inequitable conduct.” Dillin had based that ruling on Lilly's contention that the UC scientists had gained an advantage by violating federal gene-splicing rules in force at the time, and that they had “misrepresented the origins” of their insulin data to the public, the National Institutes of Health (NIH), the Senate, and the U.S. Patent and Trademark Office. For this reason, and because Dillin felt that the university had not revealed other adverse information to the patent office, he ruled that UC's patents were “unenforceable.”

    Rutter and Goodman have consistently denied any wrongdoing, and the appeals court has now declared that this part of Dillin's ruling is not relevant to the central question of whether Lilly had violated UC's patents. As a result, the appeals court did not address the substance of Lilly's charges about what happened 20 years ago.

    The UCSF group did its pioneering work in the face of stiff competition from another team, led by Nobelist Walter Gilbert of Harvard, that also was racing to track down the insulin gene. This race unfolded against a turbulent backdrop. The public was just beginning to learn about recombinant DNA technology; some claimed that new organisms might escape from the lab (the risks actually were minuscule), and officials were proposing ill-defined rules to restrict gene splicing. Indeed, fear of engineered organisms was so intense in the 1970s that Cambridge, Massachusetts, banned recombinant DNA work within the city limits for a time, annoying local scientists. During this period, newly published NIH guidelines permitted federally funded researchers to run mammalian gene-cloning experiments only in “vectors”—viruses, DNA loops called plasmids, and other vehicles for replicating DNA—approved by its Recombinant DNA Advisory Committee (RAC) and certified by NIH.

    In January 1977, when they were in the early stages of their rat insulin work, the UCSF group used a modified plasmid called pBR 322 to reproduce the rat insulin gene in bacterial cells. While the hugely efficient pBR 322 had been provisionally approved by RAC, it had not been certified as safe by NIH. This breach of the NIH guidelines came to light later that year, when writer Nicholas Wade reported it in Science (30 September 1977, p. 1342). That fall, when NIH investigated Wade's report, UC scientists said they had been confused by the new rules, and that they had destroyed all the offending research material on 19 March 1977, a few weeks after realizing that pBR 322 had not been certified. They said they later switched to an approved vector (pMB9), which formed the basis of their published findings. The Senate also held hearings in November 1977; at these, Rutter, then UCSF biochemistry chair and a co-investigator on the insulin project, said that pBR 322 had not been used after March 1977.

    Lilly dredged up these events at the patent trial. The company charged that, although the UCSF biologists destroyed some pBR 322 material in March 1977, they retained the DNA for sequencing. The resulting data, Lilly charged, became the basis for the Science paper. The same data were also the basis for UC's patent claiming vertebrate genes for insulin, applied for on 27 May 1977. Lilly charged that the UC scientists simply labeled the pBR 322 data as coming from the approved pMB9 plasmid. By using pBR 322, Lilly alleged, the UCSF biologists had stolen a march on their competitors, winning an early patent date. (Nobody has charged that the UCSF team's use of pBR 322 endangered safety. Indeed, pBR 322 was certified by NIH on 7 July 1977, 2 months after UC filed its patent.)

    Judge Dillin accepted all these arguments when he ruled that the patent had been obtained by “inequitable conduct.” But the appeals court dismissed this reasoning, arguing that “a reasonable patent examiner would not have considered noncompliance with the NIH guidelines to be material to patentability.” The court added that Dillin had given way to “unfounded speculation” when he theorized that, had the university “complied with the [NIH] guidelines,” some other inventor might have beaten UC to the patent office. Within the context of patent law, the appeals court said, there had been no misconduct.

    Lawyers for UC argue that the ruling nullifies all the facts cited by the lower court. But other patent experts—including Rebecca Eisenberg of the University of Michigan, Ann Arbor—say the “facts” in Dillin's ruling should be taken for what they are: the findings of one well-briefed judge, which have now been ruled legally irrelevant.

    Science sought clarification last week from Rutter and Goodman about the origins of the rat insulin data they published in this journal. Attempts to obtain comment from the former postdocs who did the detailed rat DNA analysis were not successful.

    Both of the lead researchers dismiss Dillin's judgment as wrongheaded. Rutter calls it “outrageous,” adding that it “demeaned the basis of an important scientific discovery.” He complains that “it seems like Judge Dillin just copied Lilly's brief.” Goodman says the judge's reading of events is “utter nonsense.”

    Dillin wrote of what he called two “smoking-gun” letters delivered by certified return mail on 22 and 25 March 1977—identical in content, one from Rutter to Goodman, the other from Goodman to Rutter, bearing the names of both scientists. They describe in detail how the two had weighed their options for using or discarding pBR 322 data in 1977, concluding that they felt it best to “keep the cloned DNA since the experiments had already been performed,” and “since the hypothetical danger, if any, is not with the DNA itself.” The judge was troubled that this version of events appeared in letters postmarked after the date on which the clones were said to have been destroyed (19 March). Dillin interpreted this to mean that Rutter and Goodman had knowingly used pBR 322 sequence data in their publications. Furthermore, he wrote that the certified letters, which sat for years unopened in the two scientists' files, “could have had no purpose but to keep either of the writers from attributing the misuse [of pBR 322 data] to the other.”

    Rutter dismisses the letters as inconsequential. “They…reflected our thought processes at the time…. They were sent to each one as a record, for safekeeping,” he says. And Goodman explains: “We tried in that letter to document our thinking as best we could, in anticipation of talking to NIH and deciding what to do.” Rutter adds that “our plans changed” after he spoke privately in the spring of 1977 with NIH official DeWitt Stetten, who kept the violation of NIH rules to himself but urged Rutter to destroy the pBR 322 clones. The letters, Rutter says, were “processed and mailed noncontemporaneously.” Judge Dillin noted in his opinion, however, that Rutter's conversation with Stetten took place no later than 19 March 1977, several days before the letters were postmarked. He wrote he was “far from convinced” that Rutter and Goodman would revise their decision but not the damaging record they subsequently sent each other for safekeeping.

    In reaching his conclusions, Dillin also relied on a set of draft scientific manuscripts written by Goodman. All employ the same language and report essentially the same sequencing data from clones containing the rat insulin gene. But each describes the use of a different type of vector: The first describes the sequencing of pBR 322; the second, pCR1; and the third, pMB9. Testimony during the trial revealed that the UC team never succeeded in cloning the insulin gene into pCR1, which NIH had certified as safe early in 1977. But one manuscript includes a full description of data from a vector described in the underlying text as “pBR 322,” amended to “pCR1,” with corresponding changes in sequence to reflect different DNA-splicing details. In another draft, “pCR1” in the underlying text is revised to “pMB9.”

    Lilly also charged that there were anomalies in the genetic information in these manuscripts. Its arguments on this point were presented to the court by Lilly's star witness, Harvard biologist Walter Gilbert. Gilbert charged that all the draft manuscripts contained sequence data on fragments of the insulin gene that are identical to data obtained from pBR 322 clones, as described in Goodman's lab notes—right down to the identical number of nucleotide “A's” in the sequence “tail.” In addition, Gilbert pointed to data in the final Science manuscript that include typographical sequence errors that appeared in Goodman's lab notes on pBR 322. Citing this evidence, Judge Dillin concluded that “the manuscripts were based on work done with the uncertified vector pBR 322.”

    Rutter responds that he “firmly believes” that pBR 322 data did not end up in the Science paper. He notes that the rat DNA used in the lab's cloning work in 1977 came from a single preparation, and that this might explain why the pBR 322 data were identical to the pMB9 data. He says he cannot be held accountable for sequencing details or errors in “secretarial transcription,” which could have caused some confusion.

    How did the UC team prepare a manuscript with sequence data from pCR1 clones when pCR1 cloning had failed? Goodman says, “I was up in Seattle at the time, writing manuscripts” based on data supplied by the lab in San Francisco. “There was some mix-up in terms of what vectors were which at that point.” He says he wrote “several versions of manuscripts that…were anticipating which vector might work.” Eventually they succeeded with pMB9. (The UC researchers say that after pMB9 was certified as safe on 18 April 1977, they went into high gear, recloning the gene into the new vector, resequencing the DNA, and sending their manuscript to Science on 9 May 1977.)

    Rutter says that Goodman wrote all the manuscripts. He suggests the pCR1 draft may have been done in anticipation of getting data that were not obtained. “It is not uncommon for scientists to prepare manuscripts concurrently with doing experiments,” Rutter says, adding “I know one relatively famous scientist who wrote manuscripts before carrying out the experiments” to sharpen the focus.

    He argues that there is also one strong personal indication that Judge Dillin is wrong: The members of the original UC research team—even those who are no longer friends and have gone into competitive projects—remain “absolutely unanimous,” Rutter says, that the forbidden vector pBR 322 was not the source of the Science data. If anyone doubts that, he adds, the pMB9 clones were deposited “in the bank” at the American Type Culture Collection in Rockville, Maryland, and could be resequenced to see if they yield the data published in the Science article.


    Courts Take a Narrow View of UC's Claims

    Eliot Marshall

    When a team of biologists at the University of California, San Francisco (UCSF), reported 20 years ago that it had cloned the rat insulin gene, team members thought they had bagged the biggest prize in the new world of biotechnology. But last month, a federal appeals court in Washington, D.C., may have ended any hopes UC had of cashing in on this landmark discovery. It upheld parts of a lower court ruling that two of UC's key patents were flawed, so Eli Lilly and Company—the nation's biggest insulin maker—doesn't have to pay UC potentially tens of millions of dollars in royalties. UC prevailed on one point, though: It persuaded the appeals court to set aside allegations that its researchers and officials had committed “inequitable conduct” (see main story).

    Patent experts say the rulings may have implications that extend well beyond UC's balance sheet, making it more difficult for inventors to assert broad claims based on the discovery of a single gene. UC's loss also provides a cautionary tale for universities trying to uphold their intellectual property rights. Universities should do “a good deal of soul-searching” before entering a major patent battle, says UC's director of technology transfer, Terence Feuerborn.

    Former UCSF scientists, including William Rutter—a leader of the group that cloned insulin and now chair of the Chiron Corporation in Emeryville, California—are disappointed, too. Rutter says he's upset that a discovery whose technological value seemed clear 20 years ago has received such poor treatment in the patent system. Finding the rat insulin gene, Rutter suggests, opened the way to modern insulin production. This legal decision, he believes, has failed to protect the “truly innovative discovery…on which all the rest is based.”

    Scientists at UCSF under Rutter and co-investigator Howard Goodman, now at the Massachusetts General Hospital in Boston, focused their insulin studies in 1977 on rat DNA, in part because federal guidelines at the time prohibited the use of human DNA. After isolating and cloning a gene for rat insulin and its precursor molecules, they sought patents in May 1977. This was the first time the entire genetic sequence for an insulin gene had been spelled out—making it relatively easy later to “fish out” the human gene. It took two more years of concerted effort at several labs, however, to clone the human gene and coax bacteria to express it.

    A decade after applying for a patent on the rat genes, UC received U.S. Patent Number 4,652,525 in 1987, awarding it commercial rights to the use of plasmids containing insulin genes. As soon as federal rules permitted, the UC team zeroed in on human gene experiments, developed data, and applied for a new “methods” patent in 1979. Awarded in 1984, this one (Number 4,431,740) covers the DNA sequence for human insulin, its precursor molecules, and methods of tailoring the human DNA for expression by bacteria.

    UCSF scientists did not do all this work in isolation, however. For example, John Shine, the team's “wizard of sequencing,” as Rutter calls him, used methods developed in part by a competitor, Harvard's Walter Gilbert. And UC, in turn, had shared technology with Lilly, while Lilly had shared its decades-old expertise in insulin chemistry with the UC team and with a newly formed genetic engineering company in San Francisco, Genentech, Inc.

    Genentech played its own major role in insulin manufacturing. Staff scientists, together with Roberto Crea, Keiichi Itakura, and Art Riggs at the City of Hope National Medical Center in Duarte, California, disclosed in November 1977 a method of tailoring a human gene so that bacteria could efficiently express the protein somatostatin. Building on that work, Genentech researchers David Goeddel and Dennis Kleid in 1978 developed with City of Hope a method of independently expressing two elements of the human insulin precursor molecules (the “A” and “B” chains) and using them to build a synthetic form of insulin.

    After signing an agreement with Genentech, Lilly in 1982 began marketing synthetic human insulin made by the two-chain process. According to a Lilly legal brief, the company sold about $200 million worth of insulin made this way before switching in 1986 to a more efficient technique. The Itakura-Riggs method is used in this technique to express the entire insulin precursor molecule, which is converted to insulin itself in the body. Lilly claims Genentech developed the process in 1978–1979 in connection with work on human growth hormone. But UC claims that its own scientists were first to get bacteria to express the human insulin precursor gene, on which they filed a patent in 1979.

    When Lilly refused to pay royalties to UC, the university sued in 1990, claiming that Lilly was infringing on both its patents. To UC's dismay, the trial was shifted to Indianapolis, Lilly's hometown. There, Judge S. Hugh Dillin came down heavily in Lilly's favor in December 1995, rejecting both of UC's patents. He ruled that the rat gene patent was invalid because the gene's sequence differed from the human DNA sequence that Lilly used in manufacturing. And he declared that Lilly's process was different enough from the one UC patented that it did not infringe the patent. UC appealed early this year, and the U.S. Court of Appeals for the Federal Circuit ruled on 22 July that, while Judge Dillin had gone too far in some respects, Lilly would not have to pay royalties.

    Some patent experts think the decision could have a broad impact, compelling gene hunters to spell out the exact sequence of all the DNA they hope to claim, rather than just the function of the genes. For example, an attorney for one company says, “we're changing the descriptions in all our patent applications to emphasize the chemistry.” And Paul Clark, of Clark and Elbing in Boston, views the decision as “yet another illustration of the poor match between academic research and the patent system.” He thinks the ruling will put scientists working with animal models at a disadvantage in the competition for medical-use patents—or encourage them to delay publishing until they have human data.

    UC's Feuerborn says he's “leaning strongly in favor” of asking the appeals court for a rehearing. And UC could, in principle, ask for a U.S. Supreme Court review. But attorneys say the Supreme Court accepts few patent cases, and UC officials may not want to push their luck. After all, it could have been worse: The lower court had initially ordered the university to pay Lilly's legal bills, estimated at $18.5 million. That penalty was dropped when the appeals court set aside the “inequitable conduct” allegations. Now, UC is stuck only with its own legal costs: about $12 million.


    Project NExT Helps New Ph.D.s In the Classroom--and Beyond

    Dana Mackenzie
    Dana Mackenzie is a writer in Santa Cruz, California.

    The ceremony, held 2 hours before the opening banquet of the summer meeting of the Mathematical Association of America (MAA), had the festive air of a graduation. The honorees filed forward to receive certificates, returning to their seats with whoops, cheers, and high jinks. But these “graduates” already had doctorates in mathematics. The celebration marked the end of their year as fellows of Project NExT, an innovative program that represents a generational change in the way mathematicians enter the teaching profession.

    Project NExT, an acronym for “New Experiences in Teaching,” was born amid the ashes of a disastrous job market in the mid-1990s and nurtured by the ferment of pedagogical reform at the grade-school and college level. The program aims to prepare new Ph.D.s for the challenge of classroom teaching, to give them “a jump start into the profession,” says co-director James Leitzel, a number theorist and specialist in teacher preparation at the University of New Hampshire in Durham. But in its 3 years of existence, the program has turned into something more: a nationwide network of young faculty, and a leadership training program for the mathematics community. “It's a rare example of a program that everyone thinks is wonderful,” says Kenneth Ross of the University of Oregon in Eugene, a former MAA president.

    Leitzel and project co-director T. Christine Stevens of St. Louis University conceived of NExT in early 1993 while Stevens was serving as a Visiting Mathematician at MAA headquarters in Washington, D.C. They were motivated, Leitzel recalls, by the sense that mathematicians were not prepared by their graduate training to move into the classroom, especially at a time of fundamental changes in the way mathematics is taught. For example, many new faculty at that time had no experience with calculus reform, a movement to align calculus instruction more closely with theories of how students learn (Science, 23 April 1993, p. 484). In addition, Leitzel and Stevens saw an opportunity to build a sense of community among younger mathematicians, who were fighting over an unexpectedly scarce supply of job openings. “The time was right,” Leitzel says. “There was a disconnect between the preparation that graduate students were receiving and what was actually happening in the classroom.”

    To remedy that “disconnect,” Leitzel and Stevens devised a program in which each year, about 70 Project NExT fellows have their expenses paid to attend three national mathematics conferences. They choose the fellows on the basis of their potential impact on their department and the department chair's letter of support for the participant. For 2 days before the conference, the fellows attend workshops devoted to various aspects of the teaching profession: Topics range from the latest research on how students learn mathematics to self-organized panels on how teachers can keep their research alive at 2-year or 4-year colleges. That's necessary because relatively few NExT fellows—only 5% to 10%—have jobs at research universities. During the school year, participants keep in touch with each other and with designated mentors through an electronic list server.

    The Exxon Educational Foundation agreed to pick up the program's tab for 3 years, and has been sufficiently satisfied to renew its funding for 3 more years at more than $200,000 per year. “Project NExT, in our mind, is the standard for our programs,” says Robert Witte, senior program director of the foundation. “That doesn't mean it's perfect, but it's way ahead of second place.” Leitzel and Witte note that no formal evaluation has so far been conducted of how NExT participants fare, but that will be done during the second 3-year grant period.

    In an informal survey, project participants say they are more than satisfied. Jose Giraldo, a 1994 fellow, says that in his first year at Texas A & M University at Corpus Christi, his colleagues were not interested in the ideas for change he brought from graduate school—for example, using graphing calculators in the classroom and assigning students small projects throughout the term. After a year of struggling for permission to try these techniques, he recalls, “I was going to quit.” Giraldo says that what he learned at NExT workshops helped bring his colleagues around, however. “It gave me credibility and confidence. If I mentioned an idea for reform in teaching, I'd support it with references,” he says. Now, he adds, his department is “completely behind me.”

    Project NExT has also made it easier for young mathematicians to make the contacts they need to advance in their profession. In the sometimes intimidating atmosphere of a national conference, all they need to do to find a friendly face is look for the colored dots on name tags that signify project participants. “At a large conference, with few people around that you knew, you could just walk up to someone with a dot and start talking—instant companion,” says Heather Hulett of Miami University in Ohio, who attended the first NExT workshop in 1994.

    As a result of such contacts, numerous NExTers have gotten involved in regional activities or committees of the MAA; and the board of the Young Mathematicians Network, an independent Web-based organization, is composed entirely of NExTers. Moreover, their positive experiences keep them coming back to national professional meetings. “By bringing young mathematicians to meetings several years in a row, you show them the value of contacts,” says John Ewing, executive director of the American Mathematical Society. “Most young mathematicians learn that slowly, over many years, or never learn it at all. Project NExT fellows have it handed to them for free.”

    Could Project NExT serve as a model for similar programs in the other sciences? The physicists apparently think so, and have started a similar program (see sidebar). “If you look at the bare-bones structure, it's discipline-independent,” Leitzel says. “There are three principles: to connect new Ph.D.s with the broader community of their discipline, to acquaint them with the issues of teaching and learning, and to provide a support network for them.”


    New Physics Profs Get a Helping Hand, Too

    Dana Mackenzie

    Mathematicians aren't alone in focusing new attention on teacher training after the Ph.D. is earned. Concerned that “the lack of attention to good teaching at the research universities sends a subtle but significant message to graduate students,” Kenneth Krane, a nuclear physicist at Oregon State University in Corvallis, borrowed a page from the Project NExT notebook (see main story) and organized the first “Workshop for New Physics Faculty.” Last October, 50 physicists at the beginning of their teaching careers attended the intensive 3-day workshop, held at the national headquarters of the American Association of Physics Teachers (AAPT) in College Park, Maryland.

    The participants, who were nominated by their department chairs, were treated to demonstrations of teaching techniques such as group learning and “peer instruction,” and several talks by specialists in physics education. Unlike the “Project NExTers,” most of the AAPT workshop attendees are employed at research universities.

    Although it is too early to judge the program's success, the participants who reconvened at last week's AAPT meeting in Denver for a panel discussion of their experiences were enthusiastic. Of the innovations they learned about last fall, the clear favorite of the panelists was peer instruction, developed by Eric Mazur, an optical physicist at Harvard University. In this method, the teacher poses conceptual questions every few minutes during a lecture. Students vote for one of four possible answers, debate their answers with one another, then vote again. Most of the time, the percentage of correct answers increases dramatically, but if the teacher is not satisfied, he or she can cover the point again. Martin Gelfand, a condensed-matter theorist from Colorado State University in Fort Collins, likes the method because “it doesn't require a radical revision of the curriculum,” but gives the teacher a better idea of the students' difficulties.

    The National Science Foundation has funded the first 3 years of the workshop, at about $100,000 per year, through its Undergraduate Faculty Enhancement program, with the possibility that the funding will be extended. If it is, Krane points out that, with 50 new physicists passing through the program each year and only 175 Ph.D.-granting programs in physics in the country, “we'll have one faculty member in each department who's been through the workshop and can encourage the new ones.” He adds: “I look forward to seeing them advance in rank, to tenure and chairmanships. We can have a significant impact on the culture of physics teaching.”


    Snare for Supernova Neutrinos

    James Glanz

    How do you see into the heart of an exploding star? Simple, says an international team of physicists: Go almost a kilometer deep into the earth and wait. These researchers hope to convert deep salt deposits into a pair of underground observatories that would capture thousands of the elusive particles called neutrinos, which spray from the very core of a supernova—carrying clues to its workings.

    Led by researchers at four different institutions in the United States and the United Kingdom, the project, called Observatory for Multiflavor Neutrinos from Supernovae (OMNIS), has already settled on a general design and preferred locations; now it faces the struggle of raising tens of millions of dollars from funding agencies. If OMNIS gets up and running, the waiting will begin—years or decades—until a 10-second burst of neutrinos announces a supernova in our galaxy.

    Starry messenger. A supernova neutrino strikes a sodium nucleus in the wall of a salt deposit, dislodging a neutron that triggers a flash of scintillation.


    The project's roughly 20 collaborators say it promises more than just a view into exploding stars. The observatories, one in a salt deposit already excavated for a nuclear waste dump in Carlsbad, New Mexico, and the other in the Boulby Salt Mine in the United Kingdom, could also help settle the vexing question of whether the neutrino has mass.

    Unlike existing neutrino observatories, OMNIS could readily detect all three “flavors” of neutrinos, which would differ subtly in behavior if the particles do have mass—a finding that would open up new theories in physics and cosmology. “No question about it: I would love to see [OMNIS] go forward,” says Mark Vagins of the University of California (UC), Irvine, a collaborator on Japan's SuperKamiokande (Super-K), now the world's largest neutrino detector.

    Existing neutrino observatories, including an earlier version of Super-K, have already demonstrated the principle of OMNIS by detecting a handful of neutrinos—19 ± 1, to be exact—from a 1987 supernova. But Super-K and its peers specialize in drawing the maximum possible information from the low-energy electron neutrinos—one of the three flavors—that stream from the sun (see Science, 10 January, p. 159). They are unlikely candidates for a decades-long watch for the pulse of high-energy neutrinos from a type II supernova.

    Type IIs are thought to explode when a massive star's core runs out of fusion fuel, cools, and collapses. It then rebounds, generating a ferocious shock wave. The shock slows as it plows into the outer layers of the star, but theorists believe that a blast of neutrinos released from the core revives the shock, which blows the outer part of the star into space, says Adam Burrows, a theorist at the University of Arizona in Tucson.

    OMNIS, which germinated around 1990 in discussions among David Cline of UC, Los Angeles; George Fuller of UC, San Diego; and others, “should be a world-beater in many respects” for unraveling the details of this story, says Burrows. Super-K detects faint trails of light when electron neutrinos interact with a huge tank of water, but the high energies of supernova neutrinos would allow OMNIS to use a simpler scheme: detecting the neutrons thrown off when a tiny fraction of the neutrinos crashes into atomic nuclei deep underground. Muon and tau neutrinos—the two other neutrino flavors—would hit either sodium and chlorine nuclei in the salt walls of the mine or iron nuclei in slabs near the detectors. Electron neutrinos, which are expected to shed some of their energy reviving the supernova shock, wouldn't pack enough punch to break up those nuclei. Instead, slabs of lead, which has more fragile nuclei, might be added to generate some of those events.

    The neutrons released by the nuclei would rattle around the tunnel and strike scintillation detectors. “The whole flood of 2000 events lasts only 10 seconds, with 60% in the first 2 seconds,” says Peter F. Smith, a collaborator at the Rutherford Appleton Laboratory in the United Kingdom. Collaborators hope the OMNIS design will be cheap enough to run for the decades needed to catch several such bursts.

    The shape of the neutrino pulse would tell astrophysicists whether they really understand how such stars explode and would clear up such mysteries as whether a black hole, from whose gravity nothing can escape, sometimes forms when the star's core collapses. “A sudden cutoff of neutrinos would be strong evidence for black-hole formation,” says Super-K's Vagins. The dual detectors would help pinpoint a supernova's location and ensure that at least one detector is constantly working, says Richard Boyd, a collaborator at Ohio State University in Columbus.

    Meanwhile, OMNIS's ability to pick out tau and muon neutrinos could help show whether neutrinos have mass. For a fixed energy, a particle with even a minute mass will move more slowly and hence take longer to traverse thousands of light-years than a massless particle will. The relative timing and pulse shapes of a supernova's electron-neutrino signal—seen primarily at Super-K and other detectors—and the muon and tau signals at OMNIS should reveal even slight mass differences. The Sudbury Neutrino Observatory in Ontario, Canada, which should begin taking data on solar neutrinos next year, could beat out OMNIS, because it could see as many as a quarter of OMNIS's expected haul of muon and tau neutrinos. But it relies on a tank of heavy water, provided through a lease from the Canadian government that is due to expire in 2001.
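The timing argument can be made concrete with a back-of-the-envelope calculation. In this minimal sketch, a relativistic particle of mass m and energy E lags a massless one by roughly (L/c)(mc²/E)²/2; the distance and energy plugged in below are illustrative assumptions, not figures from the article:

```python
# Illustrative estimate of the arrival delay a small neutrino mass would cause.
# For a relativistic particle, v/c ≈ 1 - (m c^2)^2 / (2 E^2), so the extra
# travel time over a distance L is  Δt ≈ (L/c) * (m c^2)^2 / (2 E^2).

SECONDS_PER_YEAR = 3.156e7

def arrival_delay_s(distance_ly, mass_eV, energy_MeV):
    """Extra travel time (seconds) relative to a massless particle."""
    flight_time_s = distance_ly * SECONDS_PER_YEAR   # L/c, for L in light-years
    ratio = (mass_eV / (energy_MeV * 1e6)) ** 2      # (m c^2 / E)^2
    return flight_time_s * ratio / 2

# A hypothetical 1-eV neutrino of 15 MeV from a supernova 10,000 light-years away:
delay = arrival_delay_s(10_000, mass_eV=1.0, energy_MeV=15.0)
print(f"{delay * 1000:.2f} ms")   # → 0.70 ms
```

A sub-millisecond lag is tiny, but because the whole neutrino burst lasts only seconds and its pulse shape is sharp, comparing the electron-neutrino signal at Super-K with the muon and tau signals at OMNIS could in principle resolve it.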

    OMNIS collaborators are now putting together funding proposals for submission to the U.S. Department of Energy and the National Science Foundation. Estimates of construction costs range between $20 million and $40 million. One thing the project won't have to pay for is space underground. Wendell Weart of Sandia National Laboratory in Albuquerque, New Mexico, until recently technical project manager of the Carlsbad waste dump, says site managers would be happy to have the physicists as their guests. “It would be nice to think we're using it for some truly beneficial, scientific purpose in addition to disposing of this waste.”


    Rising Damp From Small Comets?

    1. Richard A. Kerr

    First, there were the strange, dark spots in the upper atmosphere, seen this spring in ultraviolet images from a satellite. Now comes evidence that the atmosphere has a relatively wet layer 70 to 80 kilometers (km) up.

    In the standard picture of the atmosphere, water vapor is trapped below 12 km or so by a moisture barrier at the bottom of the stratosphere, keeping the mesosphere—the region between 50 and 90 km—almost bone dry. But last week, a satellite instrument detected signs of as much as 50% more water vapor at those altitudes than is called for by any conventional theory.

    To space physicist Louis Frank of the University of Iowa in Iowa City, the explanation is clear: Fluffy, house-size comets are pummeling the outer reaches of the atmosphere 20 times a minute, releasing water that ultimately ends up in the mesosphere. Other researchers are far more cautious. “You have to give the man credit for predicting something we're now seeing,” says Robert Conway of the Naval Research Laboratory (NRL) in Washington, D.C., principal investigator of the latest orbiting instrument to see signs of abundant mesospheric water. But Robert Meier of NRL, who endorsed Frank's detection of dark spots last spring (Science, 30 May, p. 1333), stresses that “this doesn't confirm snowballs in space. You have to look at alternative explanations” for a moist mesosphere.

    The first hints of excess water in the mesosphere actually came late last year, when James Russell of Hampton University in Virginia and his colleagues reported a reanalysis of data gathered by the Halogen Occultation Experiment (HALOE) on the Upper Atmosphere Research Satellite. The satellite has been flying since 1991, and earlier analyses of data from HALOE, which measures solar absorption by the upper atmosphere, didn't show any unusual concentrations of water. But the latest look at the data revealed a peak in water vapor at an altitude of about 70 km.

    Wet layer. Unexpected water in the high atmosphere could help form icy noctilucent clouds.


    “We were very skeptical at first” of the HALOE reanalysis, says Conway. But now his own instrument, NRL's Middle Atmosphere High Resolution Spectrograph Investigation (MAHRSI), has found abundant hydroxyl radicals, a breakdown product of water, in the mesosphere above high northern latitudes. Flown on a satellite deployed by the Space Shuttle, MAHRSI is the first instrument that is able to pick out the sunlight-induced glow of hydroxyl from the glare of scattered sunlight in the mesosphere. Its observations reveal high concentrations of hydroxyl at an altitude of about 70 km, right where HALOE saw the peak in water. Both instruments also show that the mesosphere is dampest in summer at high latitudes, where the added water could help explain noctilucent clouds—the wispy clouds of ice particles seen at 85 km during high-latitude summers. “There's a startling amount of water above altitudes of 65 km,” says Conway—8 to 10 parts per million versus the 6 to 7 parts per million predicted by what is considered quite reliable theory.

    “There's definitely something very unusual going on in the mesosphere that we don't understand at all,” says theoretician Michael Summers of NRL, “but I'm not even close to saying this supports the small-comet hypothesis.” For one thing, he and others are skeptical of Frank's scenario for funneling water from an altitude of 800 km, where Frank says the comets would break up, down to the mesosphere, where increasing atmospheric density would stop the water. The clouds of vapor would have to slam through 700 km of thin atmosphere, leaving hardly a trace of water on the way.

    Summers and others add that even if the mesosphere is damp, it's not nearly as damp as Frank's theory would have it. David Siskind of NRL, Summers, and others have calculated how big an influx of water from above would be needed to explain the 70-km peak seen by HALOE. Frank's small comets exceed it “by at least a factor of three,” says Summers. “My preferred view is that it's off by a factor of 30.” Summers and his colleagues are therefore pursuing other explanations for the water: Perhaps it is deposited by meteorites or created by unexpected chemical reactions.

    The loose ends don't worry Frank. “The most important thing is the finding of excess water up there,” he says. “It's a big step.” Sorting out water fluxes and why the water abundance varies with latitude and season will require close monitoring of the variations he has already detected in the small-comet bombardment, he says. Meteorologist John Olivero of Embry-Riddle Aeronautical University in Daytona Beach, Florida, agrees that it's time to start taking small comets seriously. “It's when we get challenging observations like this that we start to rethink all of our assumptions,” he says. “That's what science is supposed to be all about.”


    Primordial Soup Researchers Gather at Watering Hole

    1. Ricki Lewis
    1. Ricki Lewis is the author of Life, published by McGraw-Hill College Publishers.

    Saratoga Springs, New York—In a symposium held here on 23 and 24 June as part of the Northeast Regional Meeting of the American Chemical Society, two dozen researchers met to discuss the events that could have transformed a lifeless Earth into a rich biochemical broth, which could have given rise to the first living organisms. They described experiments pointing to ways in which conditions on a young Earth, such as sunlight and mineral surfaces, could have fostered the first steps toward life, including the formation of the first information-carrying molecules and the metabolic cycles that provide energy for living things. They also discussed a new technique that could pin down when Earth first became hospitable to life.

    RNA Makes Connections

    Cornell University chemist David Usher calls himself an “RNA optimist.” A decade ago, Harvard University's Walter Gilbert and others anointed RNA as the probable first information-carrying molecule of life, playing the same role as DNA does now. Researchers worried, however, that early RNAs would not have formed the kinds of chemical bonds that enable modern versions of the molecule to resist being dismantled by water. But, at the meeting, Usher described work suggesting that under early-Earth conditions, the kind of bonds still seen in today's RNA would have been formed, creating long-lasting polymers.

    Usher's lab recreated RNA polymerization with a “day-night” machine—a glass apparatus exposed to a rotating light source to simulate a cycle of 6 hours of daylight followed by 6 hours of darkness. “We run through lots of cycles of heating and cooling and wetness and dryness, which is exactly what you wouldn't have been able to avoid on the early Earth,” he said.

    When Usher and his colleagues added a mixture of nucleosides (RNA building blocks, each consisting of a sugar bound to one of the four bases found in RNA) and phosphates to the system, phosphate bonds formed both between the number 3 and number 5 carbons of adjacent five-carbon sugar rings—the kind of bonds seen in RNA today—and also between carbons 2 and 5. But the accumulating nucleic acids preferentially contained the 3-to-5 links, because the 2-to-5 links distorted the growing polymers, making them more likely to be broken down by water during the wet cycles. The result: the cycler yielded short RNA helices and double-stranded RNAs containing precisely the kinds of durable phosphate linkages seen in today's nucleic acids.
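The selection logic at work here (both linkage types form during dry phases, but the distorted 2-to-5 polymers hydrolyze faster when wet) can be illustrated with a toy simulation. The formation and loss rates below are invented for illustration, not measured values:

```python
# Toy model of wet/dry cycling enriching 3'-5' linkages: both link types
# form each dry phase, but polymers distorted by 2'-5' links are broken
# down by water more readily each wet phase. Rates are purely illustrative.

def cycle(pop_35, pop_25, cycles, form=100.0, loss_35=0.05, loss_25=0.40):
    """Run wet/dry cycles; return surviving 3'-5'- and 2'-5'-linked polymers."""
    for _ in range(cycles):
        pop_35 += form            # dry phase: both linkages form equally
        pop_25 += form
        pop_35 *= (1 - loss_35)   # wet phase: hydrolysis, hitting the
        pop_25 *= (1 - loss_25)   # distorted 2'-5' polymers much harder
    return pop_35, pop_25

p35, p25 = cycle(0.0, 0.0, cycles=50)
print(f"3'-5' fraction: {p35 / (p35 + p25):.2f}")   # well above 0.5
```

Even with equal formation rates, the population converges on the more water-resistant linkage, which is the qualitative outcome Usher reported.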

    Other work suggests that the formation of RNA polymers may have been given a boost by the clays or pyrite minerals that might have been available wherever life got started, such as in warm pools or deep-sea hot springs. A decade ago, researchers had noted that the minerals could have provided a kind of scaffolding on which nucleic acids could assemble. At the meeting, James Ferris and Gözen Ertem, chemists at Rensselaer Polytechnic Institute in Troy, New York, presented evidence showing just how effective such mineral templates can be.

    Last year, Ferris and Leslie Orgel at the Salk Institute reported in the 2 May issue of Nature that RNA monomers trapped between the alternating alumina and silica sheets of common montmorillonite clay can link into chains up to 50 units long, containing a single type of base. “We do not know precisely how clay catalyzes the reaction,” said Ertem. “But, in general, monomers that adsorb on a clay surface are in close proximity to each other, and are oriented in [the right] geometry for [phosphate] bonds to form.”

    At the meeting, she filled out this picture by reporting evidence that polymers formed on clay can reproduce themselves—a crucial step toward life—by aiding the formation of polymers with complementary bases. For example, if the polymer contains only cytidines—one of four kinds of bases found in RNA—it can in turn serve as a template for formation of a second polymer, containing only guanine bases. The finding suggests, said Ferris, that “a community of interacting oligomers may have formed on mineral surfaces,” ready to take on the next steps toward becoming living cells.

    Biochemist Stanley Miller of the University of California, San Diego, whom many credit with founding the field of prebiotic simulations when he generated amino acids in his classic 1953 “primordial soup” experiment, says simulations like these can't offer definitive answers. But they are still valuable, he says. “How do we know whether simulation experiments really recreate what happened? We don't. But the alternative is to sit around and speculate about it.”

    Clues in Moon Beads

    When did life first arise? Although the planet formed 4.6 billion years ago, the earliest hints of life—skewed carbon-isotope ratios in ancient sediments from eastern Greenland—date back only 3.85 billion years. Just how much earlier life could have gotten started depends on when the rain of asteroids and comets that pelted early Earth relented. At the meeting, John Delano of the State University of New York, Albany, proposed a new place to find clues to the end of the bombardment: the tiny glass beads found in samples of lunar soil.

    Looking to the moon is nothing new for investigators trying to reconstruct Earth's history of bombardment. The moon's history of impacts presumably parallels Earth's, but unlike Earth, it doesn't experience the erosion and other geologic processes that tend to erase the evidence—rocks melted in impact cataclysms. Radioactive dating of rocks brought back by the Apollo astronauts suggests that the bombardment of Earth and the moon didn't abate until about 3.8 billion to 3.9 billion years ago, implying that life could not have arisen much earlier than that. But Delano questions these dates.

    Delano, who has been a lunar sample principal investigator for NASA since 1984, notes that the rocks on which the current dates are based are now known to consist of two or more types of materials that may have formed at different times. “Such a ‘melt rock’ might consist of 4.5-billion-year-old crystals in a matrix that is 3.9 billion years old. That could be averaged to an age of 4.2 billion years ago, which would have no physical meaning,” Delano said at the meeting.

    In contrast, each of the beads of glass found by the thousands in the Apollo samples, says Delano, “has an isotopic memory of when it was produced in [a single] impact event.” What's more, since tiny droplets of molten glass can spray long distances from an impact, a single sample of lunar soil can carry records of many different impacts.

    Delano plans to analyze the chemical composition of several hundred glass beads, to be certain they did not pick up impurities after they formed, then send them to Paul Renne at the Berkeley Geochronology Center for argon-argon dating. For now, he says, “Origin-of-life investigators should be reluctant to cede the interval of 4.4 billion to 3.9 billion years ago as having been too hostile for sustainable life.” Says James Kasting, a geoscientist at Pennsylvania State University, State College, “It's hard to prove [the timing of impacts] from a small collection of moon rocks from 6 or 7 locations. If Delano has a better way to look at the data concerning bombardment, that would be very interesting.”

    Mimicking Metabolism

    Anyone who has taken Biology 101 has become painfully familiar with the Krebs or citric acid cycle (CAC). This complex loop of reactions is part of aerobic respiration, which extracts energy from glucose far more efficiently than do alternative pathways that do not require oxygen. To biochemists pondering early life, aerobic respiration poses the same problem as RNA polymerization: How could this sophisticated chemistry have gotten started?

    Maybe sunlight helped set the CAC, or parts of it, in motion, says Tom Waddell, an organic chemist at the University of Tennessee, Chattanooga. At the meeting, Waddell described experiments he did with undergraduates Tod Miller, Barry Henderson, and Sunil Geevarghese. To recreate parts of the CAC, they placed appropriate chemical intermediates from the cycle on a sunny rooftop. “We set up a simple experiment, watched what happened, and let nature teach us,” Waddell said. They found that, in some cases, solar energy drove chemical reactions that produced further intermediates of the cycle. For example, oxaloacetic acid, a compound at the cycle's “end,” broke down in sunlight, releasing citric acid, the compound that starts the cycle anew.

    The findings mesh with analyses of the Murchison meteorite, found in Australia in 1969. In 1974, J. G. Lawless and co-workers at the Ames Research Center, in Moffett Field, California, identified a zoo of organic molecules in the 100-kilogram meteorite, including amino acids, nucleotides, and CAC intermediates. Solar radiation might have driven CAC-like reactions in space, Waddell thinks. His rooftop experiments show that these sun-inspired reactions could have occurred on Earth as well, perhaps in the chemical systems that were precursors to life.

    Although Waddell has reproduced only a few steps of the citric acid cycle, he can't help imagining how the pieces might fit into a bigger picture. “Perhaps evolving cells [relied on] photochemical reactions that were the ancestors of the modern CAC,” he says. Eventually, as enzymes evolved that could harness chemical energy to drive the CAC, organisms no longer had to rely on the sun to keep their metabolisms churning. “It is certainly a reasonable proposal,” says James Ferris of Rensselaer Polytechnic Institute in Troy, New York.


    Novel Campaign to Test Live HIV Vaccine

    1. Jon Cohen

    An AIDS vaccine that, hands down, has had more success in monkey experiments than any other approach has never been tested in humans. The reason: many researchers believe the vaccine, based on a weakened, or attenuated, live virus, would be too risky. Now, the little-known International Association of Physicians in AIDS Care (IAPAC), convinced that the potential benefits outweigh the risks, is conducting an unusual campaign to recruit “a few hundred” volunteers for a safety study of this approach that the group hopes to organize by the year 2000.

    Heading the drive to sign up volunteers is AIDS clinician Charles Farthing, one of IAPAC's 5500 members and medical director of the AIDS Healthcare Foundation in Los Angeles, California. Farthing says he has been “progressively irritated” by the lack of movement toward clinical trials of an attenuated HIV vaccine—an approach that has worked wonders against diseases such as smallpox and polio. The Chicago-based IAPAC made the call for a live, attenuated HIV trial in the August issue of its journal; the editor, Gordon Nary, announced that he would be among the volunteers. IAPAC also has posted a registration form for the trial on its Internet site, and says more than a dozen people already have stepped forward.

    Ronald Desrosiers of the New England Regional Primate Research Center in Southborough, Massachusetts, first showed the power of the live, attenuated approach in a monkey study published in Science nearly 5 years ago (18 December 1992, p. 1938). Monkeys given the vaccine did not become infected later, when given a lethal strain of SIV, the simian cousin of HIV. Desrosiers, who has worked with Therion Biologics of Cambridge, Massachusetts, to develop a potential product, has spent the past several years deleting various genes from SIV and HIV to find a weakened form that is as safe as possible, yet still able to protect animals from disease-causing isolates of the virus.

    A live, attenuated AIDS vaccine would have three potential pitfalls, however. The weakened virus would still be able to replicate and might cause AIDS after, say, 30 years. It's also possible the virus could mutate into a virulent form, although Desrosiers thinks this risk can be all but eliminated by deleting enough genes. Finally, the weakened HIV would still integrate with a host cell's DNA, which theoretically could trigger cancer by a process known as insertional mutagenesis.

    Farthing says he hopes the safety trial will show after a year or two that people, like the monkeys, can control replication of the vaccine virus and not suffer any immunological damage. Still, AIDS experts say the trial won't answer some of the biggest safety questions. “We're really concerned with what happens when you vaccinate 20 million people and 10 years later, 5% or 10% get lymphoma,” says Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases. “You're not going to know that from [IAPAC's proposed test].”

    Farthing recognizes the risks, and acknowledges that regulatory agencies such as the Food and Drug Administration may never approve his proposed test. But “if you just assume everybody's going to say no, you don't do anything,” he says. Margaret Johnston, head scientist for the International AIDS Vaccine Initiative, a group started by the Rockefeller Foundation to speed the search, thinks the safety issues are paramount, but says IAPAC's efforts might help. IAPAC's move, says Johnston, “will stimulate debate, which I do think is sorely needed.”


    DNA Ventures into the World Of Designer Materials

    1. Robert F. Service

    Organic molecules have an unparalleled ability to seek out and latch onto particular partners. Just witness antibodies' proficiency at homing in on precise molecular targets. Inorganic materials, meanwhile, have their own fortes, such as strength and electrical conductivity. Over the past year, chemists in the United States and Ireland have taken initial steps toward forging a new link between these two chemical families by creating designer materials in which DNA and other organic molecules help connect tiny inorganic particles to create exquisitely organized structures with surprising new properties.

    Now, on page 1078, researchers at Northwestern University in Evanston, Illinois, present the first large-scale device embodying this strategy: a sensor, made from a web of DNA and gold particles, that changes color when it detects a precise strand of DNA. The sensor showcases the talents of both components: DNA's ability to recognize and bind matching sequences stitches the web together, while the electronic properties of the inorganic gold are responsible for the color shift. This easy-to-read color change could lead to simple and cheap detectors of pathogens for use everywhere from doctors' offices to the battlefield, says Northwestern University chemist Chad Mirkin, who along with colleague Robert Letsinger led the research project. “It's really marvelous,” says Paul Alivisatos, a University of California, Berkeley, chemist. Alivisatos is co-leading a related effort in this field with colleague Peter Schultz.

    Moreover, Alivisatos, Mirkin, and others say this approach is destined to yield more than just sensors: The new work represents an early step in transferring DNA from the biological to the material world. Researchers have long exploited electrical interactions and other mechanisms to help them guide nanoparticles into forming structures, says Christopher Murray, a chemist at IBM's T.J. Watson Research Center in Yorktown Heights, New York. But such interactions are not selective, and therefore cannot place individual particles exactly where they're wanted. The highly specific interactions of DNA and other organic molecules, however, make combining them with nanoparticles “a good approach to controlling the architecture of materials,” says Murray. Already several teams are racing to use that specificity to organize metal and semiconductor nanocrystals into ultra-small electronic devices that essentially assemble and repair themselves in solution.

    “It's a very exciting time right now,” says Donald Fitzmaurice, a chemist at University College Dublin in Ireland who is leading one of these hybrid efforts. “There's been a huge amount of activity in the last few months in this area.” Ed Chandross, a chemist at Lucent Technologies' Bell Labs in Murray Hill, New Jersey, agrees. “It's an area with a tremendous amount of promise. People are making structures that could not readily be made before.” But he and others acknowledge that the research has a long way to go before such complex structures can be made with ease.

    The Northwestern researchers started simple, building their DNA sensors from three different fragments of single-stranded DNA and a slew of tiny gold particles just 13 or so nanometers in diameter. One set of DNA strands acts as the “target” strand—the sequence that the sensor is designed to detect. The other two are probe strands, each of which has a sequence complementary to half of the sequence of the target.

    The researchers glued the probe strands to the gold nanoparticles. They attached sulfur-containing organic groups called thiols onto one end of their probe strands. Next, they added the gold particles to two separate reaction baths, each containing one of the families of DNA probe strands. Sulfur's affinity for gold then caused the DNA probes to bind to the particles, resulting in fuzzy gold nanoparticles coated with dozens of DNA strands each.

    To create the sensor, the researchers then combined the two sets of DNA-coated particles into a single bath. They tested it by mixing in the target DNA. The result: the first probe linked to half of the target DNA strand, and the second probe linked to the other half, causing the target strand to bridge the two probes. Repeated millions of times, the process glued the nanoparticles together in a three-dimensional web.

    The formation of this web changes the electronic behavior of the particles. When the particles are separate, the electrons in any one particle are more or less free to move independently. But as the particles approach one another, electrons spontaneously moving around one particle induce movements in the electrons of neighboring particles. This choreographed movement influences which wavelengths of light the material absorbs. As a result, the formation of the network prompts a color change, which is the key to the network's sensor application. Unlinked, the probe-coated nanoparticles appear red in color when dried on a special glass plate. When the “right” target links the coated particles into a web, the dried spot turns blue. Because the probe sequences can be tailor-made, the sensor can be designed to detect any DNA sequence.
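The recognition scheme behind the web can be sketched in a few lines of code. The sequences below are hypothetical (not the Northwestern group's actual probes), and hybridization is simplified to same-direction base pairing rather than antiparallel binding; the point is only that two probes, each complementary to one half of the target, can link particles only when the full target is present:

```python
# Sketch of the sensor's recognition logic: each probe strand is the
# complement of one half of the target, so only the correct target can
# bridge two probe-coated gold particles and knit them into a web.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def complement(seq):
    """Watson-Crick complement (read in the same direction, for simplicity)."""
    return seq.translate(COMPLEMENT)

def target_links_probes(target, probe_a, probe_b):
    """True if probe_a and probe_b together hybridize to the whole target."""
    half = len(target) // 2
    return probe_a == complement(target[:half]) and probe_b == complement(target[half:])

target  = "ATGCCGTAGGTTAACC"           # hypothetical 16-base target sequence
probe_a = complement(target[:8])       # probe bound to the first particle set
probe_b = complement(target[8:])       # probe bound to the second particle set

print(target_links_probes(target, probe_a, probe_b))               # True: web forms, red turns blue
print(target_links_probes("ATGCCGTAGGTTAACG", probe_a, probe_b))   # False: wrong target, no link
```

Because the probes can be synthesized to complement any chosen sequence, the same scheme works for any target, which is what makes the sensor general.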

    Mirkin believes that since such a DNA detection system is easy to read—just look for the color change—and cheap to make, it could serve as a rapid screen for pathogens, which could prove useful for everyone from doctors quickly testing patients for infections at the bedside to soldiers scanning for biological warfare agents on the battlefield, where current lab-based diagnostics cannot be used. However, he acknowledges that the sensitivity of the current sensors is only “moderate.” Making the color change visible to the naked eye requires millions of copies of the target DNA, so it may not show up if the target DNA is present in minute quantities.

    Equally important, Mirkin and others say, the nanoparticle sensor is a proof-of-principle for a strategy to make nanoparticles arrange themselves into tiny devices. In the 15 August 1996 issue of Nature, where the Northwestern team first laid out its strategy for festooning nanoparticles with dozens of DNA strands, Alivisatos, Schultz, and their Berkeley colleagues reported linking single DNA strands to particles. The strands assembled the particles into nanoparticle “molecules” containing two or three nanoparticles each. And now, says Alivisatos, the researchers are working to use the DNA to precisely order a series of nanoparticles into a wire.

    Fitzmaurice and his Dublin colleagues, meanwhile, have already made progress towards a similar goal, using semiconducting titanium dioxide particles linked to a modified RNA building block called uracil, which is in turn linked to an electron-hungry group known as viologen. In a February 1997 paper published in Chemistry, A European Journal, Fitzmaurice and his team described preparing titanium dioxide particles so that they automatically bind to the uracil-viologen combo in solution, forming an organic-inorganic molecule. Exposing the particles to light boosted the conductance of electrons in the particles, which then jumped to the viologens.

    That electron flow, says Fitzmaurice, demonstrates that it may be possible to coax these particle-biomolecule hybrids into assembling themselves into ultra-small electronic circuits. Such circuits would be many times smaller than those housed by the millions on semiconductor chips, which are reaching a practical limit of miniaturization. If bioparticle-based circuits do prove possible, this lab-grown marriage between organic molecules and inorganic nanoparticles could prove to be a happy one indeed.


    NGF Signals Ride a Trolley to Nucleus

    1. Marcia Barinaga

    Neurons have a special communication problem: The length of a nerve cell, from the tip of its axon to its main cell body, can be many centimeters or even a meter—”an amazing distance” for a molecular signal to travel, says Johns Hopkins neuroscientist David Ginty. Yet that's the distance that nerve growth factor (NGF), a nurturing elixir that bathes the axon tips of some neurons, must send its signal to regulate genes in the nucleus. New work by Ginty and his colleagues suggests that NGF delivers its long-range message by boarding a subcellular trolley that shuttles it to the cell body.

    Researchers have known for decades that NGF is swallowed up and packaged in vesicles that travel up the axon, and that this “retrograde transport” is important for NGF signaling. But no one knew just what role it played. On page 1097, however, Ginty's team shows that the transport system is apparently necessary for NGF to activate CREB, a protein that regulates the genes that respond to NGF.

    “It is a really elegant set of experiments,” says neuroscientist Story Landis, of the National Institute of Neurological Disorders and Stroke. “The big hole in the NGF field has been what is the nature of the signal. This makes it clear that NGF itself has to be transported retrogradely to get the response.” The finding could have medical significance, notes neurologist William Mobley of the University of California, San Francisco (UCSF). His group recently showed that mice with Down syndrome have a failure in retrograde transport. If that leads to defective NGF signaling, it might help explain the neuronal abnormalities of the syndrome.

    Researchers have been intrigued with retrograde transport since its discovery in the 1970s, because neurons clearly need a specialized way to get signals from axon tip to cell body. In ordinary cells, when a protein binds to a receptor, its message is relayed by a short biochemical signaling cascade that traverses the cytoplasm from membrane to nucleus. That works fine in a round compact cell, but such a cascade, triggered in a neuron's axon tip, would be way out of striking range of the nucleus. Retrograde transport could help by shuttling NGF or other growth factors directly to the cell body, where they can trigger a signal cascade that would easily reach the nucleus.

    To see if retrograde transport indeed works that way, Ginty's team cultured rat neurons in chambers that allow the neurons' cell bodies and axons to grow in different fluid environments, so that each could be exposed to particular treatments. To detect the effects those treatments had on TrkA, the main membrane receptor for NGF, and on CREB, they stained the neurons with antibodies that recognize only the active forms of the two molecules.

    The speed with which CREB was activated depended on where the researchers applied the NGF. When they put it on the cell bodies, they saw active TrkA and CREB in the cell bodies within 5 minutes. But NGF applied to the axon took 20 to 40 minutes, depending on the length of the axons, to produce active TrkA and CREB in the cell bodies. That, Ginty says, resembles the time it takes for NGF to travel up the axon, suggesting that its transport is needed for CREB activation.
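Ginty's consistency check can be sketched numerically. The axon length and delay below are illustrative assumptions (the article reports only a 20- to 40-minute delay that scales with axon length), but they show the implied speed falls in the range of known vesicle-based retrograde transport, roughly 1 to 3 micrometers per second:

```python
# Back-of-the-envelope check: does the observed delay between applying NGF
# at the axon tip and seeing CREB activation in the cell body match the
# known speed of retrograde vesicle transport? Numbers are illustrative.

def implied_speed_um_per_s(axon_length_mm, delay_min):
    """Transport speed needed to cover the axon within the observed delay."""
    return (axon_length_mm * 1000.0) / (delay_min * 60.0)

# A hypothetical 2-mm cultured axon and a 30-minute delay:
speed = implied_speed_um_per_s(axon_length_mm=2.0, delay_min=30.0)
print(f"{speed:.1f} um/s")   # → 1.1 um/s, within the range of dynein-driven transport
```

A biochemical relay across the cytoplasm would be far faster, so a delay that tracks axon length is what points to physical transport of the signal.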

    On track. NGF and TrkA may ride together in vesicles to the cell body to turn on genes.


    To test that idea, they treated the axons with NGF bound to plastic beads, which allow the molecule to activate TrkA receptors but prevent it from being taken into the cells and transported. They saw activation of TrkA in the axons but no TrkA or CREB activity in the cell bodies—further evidence that transport and CREB activation are linked.

    To confirm that activated TrkA must be in the cell body to trigger the events that turn on CREB, the team flooded the cell bodies with an inhibitor of TrkA activity. As expected, it prevented CREB activation by NGF added to the axons. That finding meshes with work by Bob Campenot's group at the University of Alberta in Edmonton and Rosalind Segal's at Harvard Medical School in Boston. Both found active TrkA in cell bodies after NGF was given to the tips of axons. (Campenot's results appear in the 26 July issue of the Journal of Cell Biology, and Segal's are in press at the Journal of Neuroscience.)

    Ginty's explanation for the appearance of activated TrkA in the cell body is that it rides in from the axon with NGF. Although no one has directly shown TrkA to make that trip, Mobley's group at UCSF discovered that NGF-containing vesicles also contain TrkA. But Campenot sees a bit of TrkA activation in the cell body too soon after NGF exposure for retrograde transport to explain it. He thinks this fast signal travels by another method, perhaps a chain of TrkA molecules phosphorylating—and thereby activating—other TrkA molecules all the way up the axon. And both researchers may be right. “It may be that there is more than one signaling pathway,” says Segal, a view held by others in the field.

    Researchers can now investigate these possibilities by, for example, blocking the movement of vesicles up the axon, or by blocking the phosphorylation relay that Campenot proposes. And so answers to long-standing questions about how neuronal signals span such vast distances may themselves be just around the corner.
