News this Week

Science  04 Sep 1998:
Vol. 281, Issue 5382, pp. 1422

    Britain Hunts Down CJD Epidemic in Removed Appendixes

    1. Nigel Williams

    London–Ever since the first deaths in 1995 from a new form of brain disorder thought to result from eating the meat of cattle infected with mad cow disease, one critical question has remained unanswered: How many cases are likely to show up beyond the 27 diagnosed so far? Given the long latency period for the disease—perhaps 10 years or more—it is hard to tell whether the country is facing a silent epidemic or just a smattering of cases. Last week, the British government announced that it is about to launch a novel study to try to find out.

    It will examine hundreds of stored tonsils and appendixes, removed during surgery in the past several years, for signs of the rogue prion proteins that have been linked to the brain disorder, known as new-variant Creutzfeldt-Jakob disease (nvCJD). Kenneth Calman, the government's chief medical officer, who announced the study at a press briefing last week, said it “is only sensible to explore” what he called “the biggest question” surrounding the disease. But the study, which is still at the planning stage, will confront an ethical minefield over such questions as whether to tell healthy patients whose organs test positive.

    The study was prompted by the case of Tony Barrett, a 45-year-old coast guard from southwest England, who died of nvCJD in June. According to a paper in the 29 August issue of The Lancet, Barrett developed the first symptoms of the disease—numbness of his face and right hand—in May 1996. Eight months earlier, he had had his appendix removed at a local hospital. “I saw that Mr. Barrett had an appendectomy in September 1995, and we felt some of the lymph nodes contained in the organ could reflect traces of the disease,” says David Hilton, a neuropathologist at Derriford Hospital in Plymouth. “Tests revealed that we were correct, and these were corroborated at the National CJD Surveillance Unit in Edinburgh.”

    It was the first demonstration that nvCJD can be detected before symptoms appear, says James Ironside of the National CJD Surveillance Unit, a co-author of the Lancet paper. John Collinge of London's Imperial College, a member of the government's spongiform encephalopathy advisory committee (SEAC) who has developed tests for the disease in the tonsils, says the discovery of prions in the appendix was not surprising, however. Earlier studies on scrapie, the sheep equivalent of CJD, revealed that lymphoid tissues such as the gut, tonsils, and spleen were infected with prions a third of the way through the incubation period, long before symptoms developed. From the lymphoid system, the prion protein is thought to pass into the nervous system and then the brain. Researchers do not know, however, at what stage lymphoid tissue becomes involved during the incubation period of nvCJD.

    The Barrett case suggests that it occurs early enough to justify a broad study to look for silent infections. The government plans to test more than 1000 of the 800,000 samples of tonsil and 45,000 samples of appendix removed and stored each year. Comparison of tissue removed before and at the height of the BSE epidemic in the late 1980s may provide a rough indication of the extent to which the population harbors nvCJD prion proteins. But the size of the study may be a problem. If just one in 1000 cases proves positive, in a population of 50 million, that would imply that 50,000 people are infected. “Interpretation of the studies will require considerable caution,” says Calman.
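    The extrapolation quoted above is simple proportional scaling; a minimal sketch of the arithmetic (the one-in-1000 rate is the article's hypothetical, not a study result):

```python
# Back-of-envelope sketch of the prevalence arithmetic in the proposed study.
# The 1-in-1000 positive rate is the article's hypothetical example.

def infected_estimate(positives, sample_size, population):
    """Scale a sample positive rate up to a whole-population count."""
    rate = positives / sample_size
    return rate * population

# One positive among 1000 archived samples, scaled to ~50 million people:
estimate = infected_estimate(1, 1000, 50_000_000)
print(f"Implied infections: {estimate:,.0f}")  # Implied infections: 50,000
```

    A single positive out of 1000 samples carries very wide statistical uncertainty, which is one reason interpretation "will require considerable caution."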

    The new finding “provides us with an opportunity which it is only sensible to explore.”

    –Kenneth Calman

    The Medical Research Council is considering a proposal by Collinge for a study of tonsils, while Ironside would coordinate the appendix studies through the Department of Health. Ironside says the study would use antibodies to detect the prion proteins in the stored tissues: “The [antibody] staining technique is a straightforward method, and with appropriate approval and resources we could start in the next few months.”

    Government officials and their advisers have not yet worked out the ethical rules that will govern the testing program, however. One option is for a scheme similar to one used in Britain for HIV testing of donated blood, where unlinked, anonymous samples are examined. It is impossible under this protocol to trace the person donating the sample. “If you want to have an HIV test, you cannot get the result from this program,” says a Department of Health official. Britain's large bank of surgical tissue samples stored at hospitals around the country, which contains material more than 50 years old, is often used anonymously in new histological studies for the analysis of cancer, for example.

    If the study uses tissue that is not anonymous, it will have to deal with the issue of whether to tell patients if researchers do detect prions in their removed organs. There is currently no treatment for nvCJD. Although SEAC is still considering what advice to give the government, Calman says the Department of Health is looking at the best way to carry out anonymous studies on the appendixes and tonsils.

    Aside from the ethics of the testing program, the discovery that prions can lurk in organs long before people show signs of disease has another alarming implication: that surgical instruments used on those people could pass on the disease. It has long been recognized that the infectious agent responsible for the classical form of CJD can be transmitted that way. In one case, brain electrodes that had been sterilized passed the infection from one patient to a second. They were sterilized again but passed the disease on to a third patient. SEAC says spread by surgical instruments is an "unlikely mode of transmission" after an appendectomy, particularly as the scalpel blade is thrown away. But Calman has set up a new expert group, which "will start shortly," to carry out further assessment of the procedures used to decontaminate instruments.


    Lunar Prospector Probes Moon's Core Mysteries

    1. Robert Irion*
    1. Robert Irion is a science writer in Santa Cruz, California.

    Most planetary scientists suspect that far beneath the cold craters and dusty seas of the moon lies an important clue to its fiery past—a dense core. Now the modestly outfitted Lunar Prospector spacecraft has gathered gravitational and magnetic data that offer support to those suspicions and hint at the size of this small metallic nugget. The new findings, presented with other Prospector results on page 1475 of this issue, fit with a theory that the core is the remnant of a violent crash between the infant Earth and a Mars-sized body that spawned the moon 4.5 billion years ago.

    The discoveries lengthen a string of successes by NASA's desk-sized $63 million robot, launched in January, which spotted signs of frozen water at the moon's poles last winter (Science, 13 March, p. 1628). Skimming just 100 kilometers above the moon's surface, Prospector has put together detailed magnetic maps as well as the most thorough lunar gravitational atlas to date, which reveals hidden concentrations of mass. “These [gravity field] images are remarkable in their clarity,” says geophysicist Gregory Neumann of the Massachusetts Institute of Technology and NASA's Goddard Space Flight Center in Greenbelt, Maryland. “They resolve aspects of the moon that we couldn't see before.”

    Researchers charted the moon's gravitational peaks and valleys by using Earth-based radio telescopes to track gravity's subtle tugs on Prospector. The resulting map let them calculate how mass is parceled out in the moon's interior five times more precisely than before, says team leader Alex Konopliv of NASA's Jet Propulsion Laboratory in Pasadena, California. The calculations indirectly constrain the core's radius to between 220 and 450 kilometers—toward the small end of the range if the core is pure iron, and toward the larger end if it is made of a less dense alloy such as iron sulfide.

    Such a core would hold 1% to 4% of the moon's mass, says the team. That's good news for proponents of the leading theory of the moon's birth: that it coalesced from the debris of an impact between a Mars-sized protoplanet and the half-formed Earth, 50 million years after the solar system arose. “The giant-impact theory has no problem explaining a core of that size,” says planetary scientist Robin Canup of the Southwest Research Institute in Boulder, Colorado. The iron came mostly from the shattered core of the impactor, according to models by Canup and astrophysicist Alastair Cameron of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts. Other scenarios of lunar origin, such as coformation with Earth, capture of a large asteroid, or fission from the young Earth's mantle, call for a larger iron core or none at all, Canup notes.
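    The trade-off described above—a smaller radius for pure iron, a larger one for a lighter alloy—is just the cube-root scaling of radius with density at fixed mass. A minimal sketch, using rough textbook densities that are my assumptions rather than the Prospector team's figures:

```python
import math

# Illustrative sketch: for a core of fixed mass, radius scales as
# density**(-1/3). Densities below are rough textbook values, not
# numbers from the Prospector team.

MOON_MASS = 7.35e22  # kg
DENSITIES = {"pure iron": 8000.0, "iron sulfide (FeS)": 4700.0}  # kg/m^3

def core_radius_km(mass_fraction, density):
    """Radius (km) of a sphere holding mass_fraction of the moon's mass."""
    mass = mass_fraction * MOON_MASS
    volume = mass / density
    return (3 * volume / (4 * math.pi)) ** (1 / 3) / 1000.0

for label, rho in DENSITIES.items():
    lo = core_radius_km(0.01, rho)
    hi = core_radius_km(0.04, rho)
    print(f"{label}: ~{lo:.0f}-{hi:.0f} km for a 1% to 4% core")
```

    With these rough densities, a pure-iron core spans roughly 280 to 440 km and an FeS core roughly 330 to 530 km over the 1% to 4% mass range—consistent with the quoted 220-to-450-km constraint sitting toward the small end for iron and the large end for a lighter alloy.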

    Lunar Prospector's gravity survey also exposed several new “mascons,” dense blobs of rock, beneath impact basins that are not filled with smooth lava. This suggests that the stronger pull of gravity over these areas results from plugs of dense material that rose toward the surface from the mantle, rather than from lava fills, Konopliv says. “That's a surprising result,” says planetary geophysicist Roger Phillips of Washington University in St. Louis. “It means the moon's outer layers must have cooled off rapidly, within half a billion years, to become rigid enough to support mantle upwarpings.”

    Refining the moon's early thermal history may help scientists understand how long the core itself stayed hot. Prospector data suggest it remained molten until at least 3.6 billion years ago, when the era of giant impacts on the surface came to an end. The probe's magnetometer picked up traces of magnetism locked into patches of crust—perhaps sites where big impacts shocked the rocks strongly enough for them to capture an imprint of the magnetic field. The strength of this fossil field implies that at the time, the dynamo action of flowing metal in the core was generating a magnetic field perhaps as strong as Earth's today.

    One patch of lunar crust is so intensely magnetized, in fact, that it deflects the solar wind's charged particles away from the surface, just as Earth's own magnetic field does. The phenomenon, first seen in less detail by the Explorer 35 orbiter in 1967, could be “a signature of a strong dynamo field from a molten iron core in the past,” says physicist Robert Lin of the University of California, Berkeley. However, magnetic-field expert Norman Ness of the University of Delaware, Newark, cautions that the crustal imprints could have come from a strong interplanetary magnetic field rather than one internal to the moon.

    The gravity and magnetic data seem to “tell a consistent story” about the moon's core, says planetary scientist Lon Hood of the University of Arizona, Tucson, a co-author on both studies. Future probes such as next year's Japanese Lunar-A mission, which will take seismic x-rays of the moon by implanting seismometers on opposite sides, may offer the definitive word on the core's mass and size. Still, Prospector scientists say that NASA's first moon mission since Apollo has taken more than a small step forward in lunar exploration.


    New HIV Strain Could Pose Health Threat

    1. Michael Balter

    The human immunodeficiency virus (HIV), which causes AIDS, comes in so many genetic varieties that it is hard to keep them straight without a scorecard. Now, a team of AIDS researchers from France, Cameroon, and Gabon has added yet another branch to HIV's convoluted family tree: It has isolated a version of the virus from a Cameroonian woman who died of AIDS in 1995 that is sufficiently different from known strains that it may evade current blood tests. This new strain—which currently seems to be rare and localized—may provide new clues about HIV's origins, including when and how some viral strains might have jumped to humans from other primates.

    Worldwide, most AIDS patients are infected with HIV-1, although a second virus, HIV-2, is responsible for some cases of a milder form of the disease in West Africa. HIV-1 itself is further subdivided into two genetically distinct groups, each of which is split into numerous subtypes separated roughly according to their geographical distribution. The new variant, designated YBF30, appears to be a member of an entirely separate third HIV-1 group, according to analyses published in the September issue of Nature Medicine.

    The new strain was identified by a team led by virologist François Simon of the Bichat Hospital in Paris. The group includes French AIDS research pioneers Françoise Brun-Vézinet, also of Bichat Hospital, and Françoise Barré-Sinoussi of the Pasteur Institute in Paris. In 1994 and 1995, the researchers were conducting a study of HIV-1's genetic variability in Cameroon when they encountered a patient infected with a viral strain that was not detected by tests for the two already known groups of HIV-1: the M ("major") group, which accounts for the overwhelming majority of infections with HIV-1, and the O ("outlier") group, which is found almost exclusively in Cameroon. YBF30 did, however, react positively in a test for a strain of SIV—the simian version of HIV—isolated earlier from a chimpanzee in neighboring Gabon.

    When YBF30's genome was later sequenced, the results confirmed that it belonged to a previously unknown group, which the team proposes calling the N group. Moreover, the sequence of one key part of YBF30's genome placed it much closer on the evolutionary tree to chimpanzee SIV than to either M group or O group HIV-1, although some other sections of the genome appear midway between chimpanzee SIV and M group HIV-1. AIDS researchers who spoke to Science found no reason to doubt that the YBF30 strain is the first discovered representative of a new HIV-1 group. “The data look good to me,” says molecular virologist Beatrice Hahn of the University of Alabama, Birmingham.

    The similarities between YBF30 and chimpanzee SIV suggest that the evolutionary ancestors of N group viruses might have been transmitted to humans from nonhuman primates. A similar scenario is thought to be responsible for the evolution of HIV-2, which is genetically similar to SIV strains that infect sooty mangabeys. Hahn says that chimpanzee-to-human transmission is a “likely” explanation for the existence of N group HIV-1, although she adds that it would be “hard to prove.” Simon Wain-Hobson, an AIDS researcher at the Pasteur Institute, also cautions against drawing such conclusions too quickly, pointing out that only a few genome sequences of chimpanzee SIV are available for comparison. “I think it's too close a call,” he says.

    Although Hahn warns that the appearance of yet another viral strain is a “public health concern,” researchers say that it is an open question whether N group viruses will spread beyond Cameroon. For example, when the team tested 700 blood samples from HIV-1-infected patients living in Cameroon to get an idea of YBF30's prevalence, only three samples were positive for the strain. And group O viruses, which were first identified in Cameroon in the early 1990s and have remained almost entirely restricted to that country, accounted for only 9% of the 700 infections.

    Wain-Hobson believes that it might be largely a matter of “serendipity” which viral strains predominate in an epidemic, although he cautioned that any strain could become a danger under the right social and behavioral circumstances. “The M group probably exploded because it got into urban areas and got there sooner. If you introduce N group viruses into New York, you will get an epidemic.” As a result, the paper's authors urge that HIV tests be modified to pick up the new strain. According to Simon: “A continuous search for new variants is necessary to assure the safety of blood donation. … These viruses are on standby, waiting for favorable conditions.”


    Study Finds 10% of Tree Species Under Threat

    1. Nigel Williams

    Hundreds of tree species worldwide are in imminent danger of being wiped out, according to a new report published last week by the World Conservation Monitoring Centre (WCMC), an independent nonprofit organization based in Cambridge, U.K. The report* points to 976 species that are critically endangered and facing extinction unless urgent action is taken, and it says thousands of other species are under threat. Indeed, the report's grim bottom line is that habitat destruction around the globe now threatens the survival of about 10% of the world's 100,000 species of trees.

    “Trees are the fundamental components of many ecosystems and human economies, and this report highlights the plight of many species,” says the WCMC's chief executive, Mark Collins. The 3-year study—funded by the Netherlands government with support from leading botanical and forest institutes internationally—follows a major 1994 study by BirdLife International of the plight of birds around the globe, which found 11% of species to be endangered. “The similarity in tree and bird species numbers under threat counters claims that ecologists are being alarmist and shows that habitat destruction is having a critical impact on wildlife,” says population biologist John Lawton at Imperial College London. “Fossil evidence suggests that over most periods in the past, species extinctions occurred at the rate of one or two per year, so the present findings are extremely worrying.”

    Researchers found that more than three-quarters of the thousands of threatened tree species are not subject to any conservation measures. Only 12% of all tree species are recorded in protected areas, and only 8% of species are known to be in cultivation. “Unless conservation action is taken immediately, some species face certain extinction and many others will be joining the list of threatened trees,” says WCMC's Sara Oldfield, who compiled the report. The list includes many globally well-known species such as African and big-leafed mahogany, ebony, and frankincense. The South American tree Pau brasil (Caesalpinia echinata)—a source of wine-red dye traded since the 16th century, which gave Brazil its name—is also threatened. Some of the most critical species have been reduced to a single specimen whose future is highly uncertain (see table).


    Although 80% of tree species are in the tropics, some species are under threat in virtually every country. Malaysia has the highest number of critically endangered species at 197. Indonesia, recently ravaged by forest fires, has 121, India has 48, and Brazil, the most heavily forested country on the planet, has 38. In the United States, Hawaii has many threatened species. The report highlights critical gaps in its coverage because of difficulties in determining the species within certain families of trees and in surveying some regions of the world such as Papua New Guinea. But the database is designed for continual updating as work continues to tackle these problems, the report says.

    Conservationist Steve Howard of the World Wide Fund for Nature (WWF) says that, with 77 species already known to have become extinct in the recent past, this report has confirmed fears of a widespread problem. The report itself suggests that the sustainable management of forests is a top priority, and the WWF is backing a scheme by the Forestry Stewardship Council to independently certify well-managed forests. The WWF, which is also campaigning for each country to declare 10% of its forest cover protected by 2000, says that without care for trees there is little hope for many other organisms. Studies have found that up to 300 species of insect may depend on one tree species. So far, 22 countries have signed on to the WWF forest protection plan, including Canada, Brazil, and China.

    The report was made public in Geneva last week, as the Intergovernmental Forum on Forests—an official organization—met there to discuss how to tackle the continuing crisis facing the world's forests. The forum is following up commitments made by countries to prevent species losses in the Convention on Biological Diversity drawn up in Rio de Janeiro in 1992.

    • *The World List of Threatened Trees, World Conservation Press, Cambridge, U.K. ISBN: 1 899628 10 X.


    Theory Debate Gets Literary, and Ugly

    1. Peter Rodgers*
    1. Peter Rodgers is editor of Physics World.

    Research papers are rarely a lively read. The results being reported might be of immense significance, but the papers are almost always written in the formal, impersonal style that has come to characterize scientific publications. Not so for a recent paper in Physical Review Letters (PRL), which compared one “not so beautiful” theory of high-temperature superconductivity (HTS) to a “figure in a cartoon” and made withering use of literary references.

    The HTS theory world is “very delicate, with a lot of bad blood and infighting.”

    –Julius Ranninger

    Soon after researchers showed in 1986 that complex copper-oxide ceramics could conduct electricity without resistance at temperatures far higher than the metal alloys that held the record at the time, it became obvious that the existing theory of superconductors could not explain the new materials. When the maximum superconducting temperature stalled at about 125 kelvin, researchers looked to theorists to provide some guidance around the impasse, but so far they have shed little light. Indeed, it is often remarked that there are as many HTS theories as there are theoretical physicists working on the problem. As a result, says Julius Ranninger of the Centre for Very Low Temperature Research in Grenoble, France, the world of HTS theories is “very delicate, with a lot of bad blood and infighting.”

    Ranninger should know. He was an author, with Benoy Chakraverty and Denis Feinberg of the Laboratory for the Study of the Electronic Properties of Solids in Grenoble, of the PRL paper that has sparked the latest debate, which is as much about style of discourse as it is about science. The paper discusses a theory that attributes superconductivity to bound pairs of polarons—the distortion in the crystal lattice of an HTS material caused by the charge on an electron. The formation of electron pairs is an essential feature of any theory of superconductivity. In 1981, before the discovery of HTS, Ranninger and Sasha Alexandrov, now at Loughborough University in the United Kingdom, published a paper suggesting that the pairs of polarons, or bipolarons, might be superconducting. Alexandrov has continued to champion the bipolaron approach, but Ranninger has since abandoned it.

    The title of the paper by Chakraverty, Ranninger, and Feinberg, “Experimental and theoretical constraints of bipolaronic superconductivity in high-Tc materials: An impossibility,” hinted at its tone. And there was no circumspection in its statement that “We shall show that extending the bipolaron theory for superconductivity to [HTS] materials is fallacious.” The paper claims to show that the bipolaron theory is unable to predict superconducting transition temperatures any higher than 10 kelvin—clearly too low to explain HTS. But what has most angered proponents of bipolaron theory is the authors' parting shot: “The tragedy of beautiful theories, Aldous Huxley once remarked, is that they are often destroyed by ugly facts. One perhaps can add that the comedy of not so beautiful theories is that they cannot even be destroyed; like figures in a cartoon they continue to enjoy the most charming existence until the celluloid runs out.”

    It was Thomas Huxley, not Aldous, who mused on the fate of beautiful theories at the hands of ugly facts, but the mistake did not lessen the sting. On 13 July, the same day that the paper appeared in PRL, Alexandrov submitted a “comment” on the paper to PRL and the Los Alamos e-print server in which he argued that the objections in the paper were “the result of an incorrect approximation … and the misuse of the bipolaron theory.” He concluded by stating: “What is clear, however, is that any theory, beautiful or not, cannot be destroyed by ‘ugly’ artifacts [such] as those in Chakraverty et al.” In an interview, Alexandrov added that he found the final paragraph “unhealthy and not motivated by any reason.” Moreover, he adds, he has heard the same reaction from many other physicists.

    Ranninger says the last paragraph of the paper was written specifically to “calm the situation” and does not think that it was provocative. Reaction to the paper in the HTS community has been mixed. In a letter to Ranninger, Alexei Abrikosov of the Argonne National Laboratory in Illinois wrote: “I would like to express my pleasure upon reading your paper about bipolaronic superconductivity. I completely agree with it, and I appreciated the last two sentences.” Alan Bishop of Los Alamos National Laboratory in New Mexico, however, calls the tone of the PRL paper “unhelpfully polemic.” Bishop adds, “I might comment in the same vein [that] ‘Beauty is in the eye of the beholder.’ In this case there are several beholders.”


    Proposed Scope Takes Mirror Size to the Max

    1. Robert Irion*
    1. Robert Irion is a science writer in Santa Cruz, California.

    Last week, astronomers peered into their crystal ball and saw a lot of glass: a telescope with a 30- to 50-meter mirror, on which the largest telescopes being built today would fit like so many crackers on a plate. These 8- to 10-meter eyes on the sky will lose much of their sparkle next decade if the proposed Next Generation Space Telescope (NGST) is launched. Combining the distortion-free seeing of space with a powerful 4- to 8-meter mirror, the half-billion-dollar NGST would peer into the universe with unparalleled detail. But the astronomers who met last week* to consider a “maximum-aperture telescope” (MAXAT) aren't ready to cede the future to space.

    They hope to persuade their colleagues, especially those who will prepare the next set of spending priorities for U.S. astronomy, that a mammoth ground-based telescope costing up to $1 billion is both feasible and scientifically justified. “It's not quite time to pull up our stakes on the ground,” says workshop chair Jay Gallagher of the University of Wisconsin, Madison. “The scientific case for a large aperture is strong, and we see the combination of NGST and MAXAT as particularly powerful.”

    Depending on its size, MAXAT could surpass the light-gathering power of NGST by 15- to 150-fold. That would open new vistas, especially in the near-infrared part of the spectrum, where astronomers can best correct for atmospheric blurring. MAXAT could take high-resolution spectra of distant galaxies and “decompose them into their building blocks,” says Frank Bash of the University of Texas, Austin. The giant scope might also spot the first supernovae, study the life histories of stars, and peer into the hearts of nascent planetary systems to spy infant and mature Jupiter-sized planets.
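    The quoted 15- to 150-fold range follows from the fact that a telescope's light-gathering power scales with mirror area, i.e. the square of its diameter; a quick sketch comparing the extremes of the two aperture ranges:

```python
# Light-gathering power scales with collecting area, so the ratio of two
# telescopes' powers goes as the square of their diameter ratio.

def light_ratio(d_big, d_small):
    """Ratio of collecting areas for two circular apertures (diameters in m)."""
    return (d_big / d_small) ** 2

# Smallest MAXAT (30 m) vs largest NGST (8 m), and
# largest MAXAT (50 m) vs smallest NGST (4 m):
print(light_ratio(30, 8))   # 14.0625
print(light_ratio(50, 4))   # 156.25
```

    Rounding these area ratios gives the 15- to 150-fold span cited in the article.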

    If NGST flies, MAXAT could follow up its discoveries just as the twin 10-meter Keck Telescopes in Hawaii work with the Hubble Space Telescope: NGST would capture sharp images, and the huge ground-based instrument would scoop up light to make spectra that reveal what distant objects are made of and how they behave. “Once NGST is flying, this is the next obvious facility,” says Matt Mountain, director of the multinational Gemini 8-Meter Telescopes.

    The biggest technological challenge would not be MAXAT's gargantuan mirror, which would consist of a mosaic of hundreds of segments, says Roger Angel of the University of Arizona, Tucson. Rather, it's likely to be the adaptive optics controls that would adjust the telescope's optics to compensate for Earth's rippling blanket of air. “We would probably spend the better part of a decade figuring out how to make that affordable,” says Angel, who wants to build a 15-meter prototype by putting thin glass into a dish resembling that of a radio telescope.

    Gallagher's colleagues plan further meetings before writing a report in time for astronomy's decadal review committee, which will convene next year to set the field's priorities for 2000–10. Alan Dressler of the Carnegie Observatories in Pasadena, California, who hopes to take part in the review, is keeping an open mind until he sees the report. “We're just learning what to do with the 8- to 10-meter telescopes,” Dressler says. But because big telescope projects take many years, he notes, “this is an ideal time to consider the next step.”

    How astronomers would pay for that step is another matter. MAXAT cost estimates range from $250 million to $1 billion depending on the technology adopted. For instance, a dish of segmented mirrors resting on a fixed mount similar to the Arecibo radio telescope in Puerto Rico could cut the costs dramatically and still bring 70% of the sky within view, says Bash. Even so, NASA and the National Science Foundation are unlikely to foot such bills alone—especially with NGST in the pipeline. “Our feeling was that this would have to be a world telescope,” Mountain says.

    Astronomers in Europe already are thinking along the same lines. Torben Andersen and Arne Ardeberg of Lund Observatory in Lund, Sweden, will hold an international meeting next June to discuss telescopes as large as 50 meters. And Roberto Gilmozzi of the European Southern Observatory (ESO) in Garching, Germany, is pushing his vision of a 100-meter behemoth dubbed OWL, for “overwhelmingly large.”

    Europe is a rival as well as a potential partner. ESO is pouring $800 million into its Very Large Telescope array of four 8.2-meter mirrors in Chile (Science, 1 May, p. 670), and most observers expect ESO to continue pushing the ground-based envelope. “Europe is seizing the leadership in ground-based astronomy, and not even in a subtle way,” says Bruce Margon of the University of Washington, Seattle, who did not attend the workshop. “If we are going to concede leadership to them on the ground, we should at least do so by having thought about it.”

    • *Maximum-Aperture Telescope workshop, 28–29 August, organized by the Association of Universities for Research in Astronomy, in Madison, Wisconsin.


    Galaxy's Oldest Stars Shed Light on Big Bang

    1. Alexander Hellemans*
    1. Alexander Hellemans is a writer in Naples, Italy.

    The early universe, fresh out of the big bang, would have delighted those who hated chemistry at school. The only things around were hydrogen, helium, a bit of lithium, and a few other elements no heavier than boron. All the other elements that fill the periodic table arrived later, forged in the nuclear furnaces of stars and dispersed when the stars exploded as supernovae. Cosmologists studying the element-forming processes in the big bang have been trying to look back through the clutter of more recently formed elements to learn the exact composition of that primordial star stuff. Now they have a rare sample of it: a collection of the very oldest stars in our own galaxy, some of which are more than 13 billion years old, formed just 1 or 2 billion years after the galaxy itself was born.

    At an astronomy meeting* in Australia last month, Sean Ryan of the Royal Greenwich Observatory in Cambridge, U.K., Timothy Beers of Michigan State University in East Lansing, and John Norris of Australia's Mount Stromlo and Siding Spring Observatories announced the culmination of a 20-year survey: the identification of a tribe of 1000 stellar Methuselahs. “The ancient stars we are studying formed so early in the life of the galaxy that they contain very little [heavy elements]. That is why they are special,” says Ryan. François Spite of Paris Observatory at Meudon notes that these stars “are valuable to cosmology because they provide additional tests of the age of the universe and its initial composition.”

    Previously, the only way to study the universe's original makeup was to look at very distant galaxies. Because it takes so long for their light to reach us, we see them as they were soon after they formed, before many generations of stars had forged large amounts of heavy elements. But even the most distant galaxies appear to have been enriched in heavy elements by earlier stars. Also, because these galaxies are so faint, it is hard to discern much detail about the universe's primordial composition in their spectra.

    To get a better sample of the early universe, Beers and his colleagues searched our own galaxy's halo, a spherical region of gas, dust, stars, and invisible “dark matter” surrounding the galactic disk, looking for stars whose spectra revealed very small amounts of elements heavier than boron. The astronomers made images of patches of the sky through an instrument called an “objective prism,” which smears each star's point of light into a spectrum. “The survey was initiated in 1978, so we are 20 years into it,” says Beers, who joined George Preston and Stephen Shectman of the Carnegie Institution's observatories in Pasadena, California, in the first phase of the search.

    At first, the astronomers picked likely candidates by eye. Now they scan the plates digitally. “This allows us to find more candidates,” says Beers. “Once we have a list of candidates, then we have to go to another telescope and take medium-resolution spectroscopy.” By doing so, Beers, Ryan, and Norris identified about 1000 stars with an iron content 100 times lower than that of the sun, a star preceded by several tens of generations of earlier stars. These old stars survived so long because they are small, so they burned their nuclear fuel very slowly, says Ryan.
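    Astronomers usually quote such iron deficits on a logarithmic scale written [Fe/H], the star's iron-to-hydrogen abundance relative to the sun's. A minimal sketch of the conversion (the function name is illustrative; the 1/100 figure is the one quoted above):

    ```python
    import math

    def fe_h(iron_ratio_to_sun):
        """Metallicity [Fe/H]: log10 of a star's iron-to-hydrogen
        abundance relative to the sun's."""
        return math.log10(iron_ratio_to_sun)

    # A halo star with 1/100 of the sun's iron content:
    print(fe_h(1 / 100))  # -2.0
    ```

    On this scale the survey's stars sit at [Fe/H] of about -2 or below, while the sun, by definition, sits at 0.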

    The old stars aren't completely pristine, however. At least one generation of short-lived stars must have preceded them, because even these oldest stars contain traces of heavy elements—including thorium, which enabled the team to “carbon-date” the stars. Thorium is a radioactive element with a half-life of 14.1 billion years; by measuring its abundance, “we get a direct estimate of the age at which the thorium was formed in presumably a supernova” prior to the star's formation, Beers says. The amount of thorium that has decayed is determined by comparing it to the amount of europium, a stable element formed during the same nuclear process in the star. According to Beers, John Cowan at the University of Oklahoma, Norman, and Chris Sneden at the University of Texas, Austin, have determined the age of one star to be 13 billion years—close to the age that cosmologists have estimated for the universe as a whole from other data, such as the rate of cosmic expansion.
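    The thorium chronometer works like any radioactive clock: comparing today's thorium abundance, measured relative to stable europium, with the amount theory says the supernova originally produced gives the elapsed time. A sketch of the arithmetic, using made-up abundance ratios rather than the paper's measured values:

    ```python
    import math

    TH_HALF_LIFE_GYR = 14.1  # thorium half-life quoted in the article

    def age_from_thorium(initial_th_eu, observed_th_eu):
        """Elapsed time, in billions of years, from the decay of thorium
        relative to stable europium: t = (t_half / ln 2) * ln(N0 / N)."""
        return (TH_HALF_LIFE_GYR / math.log(2)) * math.log(initial_th_eu / observed_th_eu)

    # Illustrative ratios only: if nucleosynthesis theory predicted an
    # initial Th/Eu ratio of 0.48 and 0.25 is observed in the star today:
    print(round(age_from_thorium(0.48, 0.25), 1))  # 13.3 (billion years)
    ```

    Because the half-life is comparable to the age of the universe, even a modest error in the measured ratio translates into a billion-year uncertainty in the derived age.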

    These stars are already living up to their potential as time capsules. Analyzing the stars' spectra to determine how much lithium their surface layers contain, says Ryan, “allows us to measure how much lithium was produced in the big bang.” Beers says the early measurements match predictions based on theorists' picture of the element-forming processes in the primordial fireball.

    • *The Third Stromlo Symposium. The Galactic Halo: Bright Stars and Dark Matter, Canberra, Australia, 17–21 August.


    New Timepiece Has a Familiar Ring

    1. Marcia Barinaga

    Pendulums, quartz crystals, oscillating atoms—human beings have invented many different ways to keep track of time. Mother Nature, however, seems early on to have hit on one good design for the molecular clocks that govern circadian rhythms and used it repeatedly. The latest evidence for this comes from Masahiro Ishiura and Takao Kondo at Nagoya University in Japan and their colleagues, who on page 1519 describe the workings of the biological clock of the single-celled organisms known as cyanobacteria, or blue-green algae.

    The cyanobacteria clock, which paces 24-hour cycles of activities such as nitrogen fixation and amino acid uptake, is based on the same working principle as are those of fruit flies, mammals, and the bread mold Neurospora. At its core is a genetic oscillator, in which a gene produces a protein that accumulates for a while and then feeds back and turns off the gene, causing the protein's concentration to oscillate over a roughly 24-hour cycle. But despite that common scheme, the proteins that make up the cyanobacteria clock are completely different from those of other organisms.

    The findings help settle a debate over whether all biological clocks are descended from the same evolutionary ancestor, or whether clocks have arisen more than once during the course of evolution. The cyanobacteria discovery is “the best evidence yet for [clocks'] independent evolution,” says Northwestern University clock researcher Joe Takahashi. Keeping track of day-night cycles is apparently so essential, perhaps because it helps organisms prepare for the special physiological needs they will have at various times during the daily cycle, that clocks seem to have arisen multiple times, recreating the same design each time.

    To identify the cyanobacterial clock proteins, Kondo and Ishiura began by isolating more than 100 strains with mutations that either abolished or altered the organism's daily activity cycles. The researchers identified the mutated genes by chopping up the cyanobacterial genome and searching for pieces that would restore the normal rhythms when introduced into the mutant bacteria. They found one DNA segment, containing three genes the team called kaiA, -B, and -C—“kai” is Japanese for “cycle”—that could restore normal rhythms in all the mutants tested.

    All three genes proved essential for the cyanobacterial clock; the Kondo-Ishiura team found that inactivating any one disrupts the organism's rhythms. Other work suggested that the proteins made by the genes are themselves part of the clock mechanism. For one, the researchers found that, as expected for clock components, the activity of the genes oscillates with a 24-hour cycle.

    When the team took a closer look at the clock gene activity patterns, their findings suggested a familiar scenario: Early in the day the kai genes begin to produce RNA that is translated into protein. Next, KaiA protein apparently turns up the activity of the kaiB and -C genes. Then after a delay, KaiC seems to step in and turn the genes off. That causes protein levels to fall. As a result, KaiC stops repressing the genes and they come on again. If that model, which fits all the current data, proves true, says Ishiura, that means “regulation of the clock genes is analogous” to the three other known clock systems, in fruit flies, mammals, and Neurospora. In fruit flies, for example, the proteins Period and Timeless feed back to turn off their genes.
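    The scheme described above (a gene whose protein product, after a delay, shuts the gene down) is by itself enough to generate sustained oscillations. A minimal numerical sketch, with made-up rate constants and a made-up delay rather than measured kai parameters:

    ```python
    def simulate(hours=96, dt=0.1, delay=6.0):
        """Delayed negative feedback: the protein level 'delay' hours
        in the past represses new synthesis. All rates are made up."""
        steps = int(hours / dt)
        lag = int(delay / dt)
        protein = [0.0] * steps
        for t in range(1, steps):
            past = protein[max(0, t - lag)]              # level 'delay' hours ago
            synthesis = 1.0 / (1.0 + (past / 0.5) ** 4)  # repression by own product
            decay = 0.3 * protein[t - 1]
            protein[t] = protein[t - 1] + dt * (synthesis - decay)
        return protein

    levels = simulate()
    late = levels[len(levels) // 2:]    # discard the start-up transient
    print(max(late) - min(late) > 0.1)  # True: sustained swings, not a flat line
    ```

    The delay before repression takes hold is what sets the period; remove it and the protein level simply settles at a steady value, which is why the unexplained delay in the KaiC loop is such a critical question.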

    Many questions remain to be answered about the cyanobacterial clock. Ishiura and his colleagues don't yet know, for example, what causes the delay before KaiC feeds back to turn down its own gene. The question is a critical one because the delay before the gene shuts down is what allows protein levels to oscillate with a 24-hour rhythm. The researchers are also on the trail of other proteins that seem to control the kaiA gene and help regulate kaiB and -C as well.

    While Ishiura and his colleagues work to fill out the picture, researchers in the field will want to put this new revelation into its evolutionary context. They already knew that the clock proteins of fruit flies and mammals are strikingly similar, making it clear that these clocks evolved from the clock of a common ancestor (Science, 5 June, p. 1548). But the Neurospora clock has only weak similarities to the animal clocks, raising the possibility that it might have had an independent origin. Researchers disagree about that but say that the cyanobacteria discovery confirms at least two independent origins for clocks.

    More may be in the offing. Just this June, researchers got their first glimpse of a plant clock when two teams, one led by George Coupland at the John Innes Centre in Norwich, U.K., and the other by Elaine Tobin at the University of California, Los Angeles, identified two different mutations that disrupt the circadian rhythms of the plant Arabidopsis. Both affect transcription-control proteins known as MYB proteins, which are unrelated to any known clock proteins. That begins to smell like yet another independent clock, says Steve Kay of The Scripps Research Institute in La Jolla, California.

    Yet Mother Nature didn't veer far from her tried-and-true system for telling time when she built the Arabidopsis clock: The MYB proteins both apparently go back and turn off their own genes. Perhaps, says Kay, feedback on gene transcription has always won out over other possible mechanisms because proteins that regulate their own production are common and therefore readily available to be crafted into clocks. Alternatively, speculates clock researcher Michael Young of Rockefeller University, “maybe this is the only way you can make a clock.”


    Breathalyzer Device Sniffs for Disease

    1. Robert F. Service

    Boston—A whiff of a patient's breath can sometimes be enough to tell a doctor what's wrong. The fishy smell of compounds called amines can indicate kidney problems, while the sweet smell of acetone can mean diabetes. Now two scientists have developed a machine that could turn this practice into a high-tech diagnostic technique, they announced at the American Chemical Society meeting here last week.

    In just minutes, the device can analyze a puff of breath for the trace gases that signal diabetes, kidney failure, ulcers, and possibly even cancer. Commercial versions could be available within a few years, providing fast, noninvasive diagnosis. “These are very exciting results,” says Michael Henchman, a chemist at Brandeis University in Waltham, Massachusetts, who is not involved in the work. “[This] technique could be as important to medicine as MRI [magnetic resonance imaging].”

    The telltale compounds find their way into the breath when they build up in the blood. As the blood circulates through the lungs, gases in the lungs and blood equilibrate, and the breath carries them out. But developing a diagnostic device that's better than a physician's nose at picking up these wafting fumes has not been easy.

    Researchers have tried sniffing out illness, for example, with an instrument known as a gas chromatography-mass spectrometer, or GC-MS. But a GC-MS has trouble dealing with some of the complex mixtures of trace chemicals in the breath. To convert the uncharged organic compounds into charged ions that can be propelled through the mass spectrometer by electric fields, the device bombards the compounds with electrons. The bombardment often breaks down different trace gases into similar smaller components. “It makes it impossible to deconvolute your data” to determine the exact parent compounds, says David Smith, a chemical physicist at Keele University in Staffordshire, United Kingdom.

    Two decades ago, when Smith was studying trace gases known to be present in interstellar gas clouds, he developed a gentler way to attach a charge to molecules. His device, known as a selected ion flow tube, or SIFT, reacts compounds in a test sample with carefully selected ions instead of electrons. The reactions change the original compounds only slightly, and each one produces a unique signal in a mass spectrometer.

    Three years ago, while working on his technique during a brief stint at the University of Innsbruck in Austria, Smith realized that the same technique might be useful for sorting out the chemicals in breath. Smith and Patrik Spanel, a physicist with the J. Heyrovsky Institute of Physical Chemistry in Prague, Czech Republic, teamed up to adapt the SIFT technique. But they soon faced a new challenge: The ions they intended to react with the trace compounds also reacted with substances that are abundant in breath, such as oxygen, nitrogen, water, and carbon dioxide, which depleted the ions and yielded confusing results. So Smith and Spanel identified a new set of ions—H3O+, NO+, and O2+—that don't react readily with the basic ingredients of breath.

    Instead, they found that each ion reacted only with certain trace breath components, producing a unique chemical signature for each molecule. And because the SIFT device can generate all three ions simultaneously and feed them into the reaction tube one right after the other, Smith and Spanel were able to obtain complete profiles of all the target compounds from just a single breath. Smith says that he and Spanel have already converted their instrument from a hulking tabletop device to a portable machine that can be wheeled into hospital rooms, and they are currently working to shrink the equipment further.
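    Conceptually, each pairing of precursor ion and product mass acts as a lookup key for one parent compound. A toy sketch of that idea; the table entries are illustrative examples of this kind of ion chemistry, not the instrument's actual library:

    ```python
    # Each (precursor ion, product m/z) pair points to one parent compound.
    SIGNATURES = {
        ("H3O+", 18): "ammonia",   # proton transfer: H3O+ + NH3 -> NH4+ (m/z 18)
        ("H3O+", 59): "acetone",   # protonated acetone (m/z 59)
        ("NO+", 68): "isoprene",   # charge transfer to isoprene (m/z 68)
    }

    def identify(precursor_ion, product_mz):
        """Look up the parent compound for an observed product ion."""
        return SIGNATURES.get((precursor_ion, product_mz), "unknown")

    print(identify("H3O+", 59))  # acetone
    ```

    Because the three precursor ions react differently with the same molecule, cross-checking their product signals helps resolve compounds that any single ion would confuse.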

    When Smith and Spanel tested their instrument on patients with various disorders, the results jumped out. Patients with kidney failure, for example, showed levels of ammonia more than 10 times higher than those in controls, because of the waste compounds in their blood. “It essentially delivers an instantaneous diagnosis,” says Henchman. The device enabled researchers to watch those levels fall to normal as the patients received dialysis treatment.

    Smith also reported that the machine can gauge a subject's stress level by tracking isoprene, as well as track markers for diabetes and ulcers. Preliminary data even suggest that it could detect hydrocarbons (he declined to say which ones) associated with bladder and prostate cancer. In addition, Smith believes the new machine will prove to have a versatile nose for trouble and could monitor air quality and food freshness as well as disease.


    Which of Our Genes Make Us Human?

    1. Ann Gibbons

    For decades scientists have known that at least 98% of human DNA is identical to that of chimpanzees. Now they have at last begun to explore which genes separate us from the apes

    We humans like to think of ourselves as special, set apart from the rest of the animal kingdom by our ability to talk, write, build complex structures, and make moral distinctions. But when it comes to genes, humans are so similar to the two species of chimpanzee that physiologist Jared Diamond has called us “the third chimpanzee.” A quarter-century of genetic studies has consistently found that for any given region of the genome, humans and chimpanzees share at least 98.5% of their DNA. This means that a very small portion of human DNA is responsible for the traits that make us human, and that a handful of genes somehow confer everything from an upright gait to the ability to recite poetry and compose music.
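    For scale, it is worth translating that percentage into absolute terms; a back-of-envelope calculation, using the commonly quoted figure of roughly 3 billion base pairs for the human genome:

    ```python
    genome_size = 3_000_000_000       # roughly 3 billion base pairs
    shared_fraction = 0.985           # the article's "at least 98.5%"
    differing = genome_size * (1 - shared_fraction)
    print(f"{differing:,.0f} differing bases")  # 45,000,000 differing bases
    ```

    So even 98.5% identity leaves tens of millions of differing bases, which is part of why finding the handful that matter is so hard.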

    Aping it.

    Chimpanzees may adopt the occasional two-legged pose, but they differ dramatically from humans in anatomy and behavior.


    But what are these genes, so few in number yet so powerful in effect? Until now there has been little funding or research to track them down, and the primate genome has been almost virgin territory. “You could write everything we knew about the genetic differences in a one-sentence article,” quips neuroscientist Thomas Insel, director of the Yerkes Regional Primate Research Center of Emory University in Atlanta.

    Now that is changing, as a persistent band of geneticists and evolutionary biologists launches new studies to find differences in the genes, chromosomes, and biochemistry of humans and chimpanzees. Next month, researchers will report finding the first significant biochemical variation between humans and other apes: Humans lack a particular form of a ubiquitous cell surface molecule found in all other apes. Other teams are reporting newfound differences in the arrangements of DNA on the chromosomes of humans and other primates.

    And new sequencing projects are starting to compare primate and human DNA base by base. Two new centers—at Yerkes and in Leipzig—opened simultaneously last year to study the molecular evolution of the great apes; next month, at an international meeting in Chicago, researchers will push for an organized, international primate genome project (see sidebar). Because primates are less susceptible than humans to certain diseases, including cancer and AIDS, the sequence differences are of more than evolutionary interest and are already drawing commercial attention. One Denver, Colorado, biotech company has already submitted patents on key human and chimp genes.

    For the moment, no one can tie the few known molecular variations with the familiar litany of chimp-human differences, such as body hair, language, or brain size. But the leaders of genomic and evolutionary research alike say that now is the time to explore those links. “This is one of the major questions that those of us interested in our own biology would like to ask—what does that 1.5% difference look like?” says Francis Collins, director of the National Human Genome Research Institute.

    Vive la difference

    That question was first raised in print in a landmark 1975 paper by geneticist Mary-Claire King and the late biochemist Allan Wilson, both then of the University of California, Berkeley. They surveyed protein and nucleic acid studies and found that the average human protein was more than 99% identical to its chimpanzee counterpart; the coarse DNA hybridization methods of the time showed that the average nucleic acid sequence was almost as similar (Science, 11 April 1975, p. 107). Thus, King and Wilson concluded, humans and chimpanzees were genetically as similar as sibling species of other organisms, such as fruit flies or mice.

    This left a great paradox: Our DNA is almost identical to that of our chimp cousins, but we don't look or act alike. “The molecular similarity between chimpanzees and humans is extraordinary because they differ far more than many other sibling species in anatomy and way of life,” the pair wrote.

    What's more, much of the DNA in any organism is so-called “junk DNA” that has no apparent function, and mutations in these regions do not change the function of genes. Thus, many of the genetic differences between humans and chimps probably don't affect the organisms at all. The challenge is to find those few mutations that do make a difference—either by altering genes that code for proteins or by changing how genes are regulated, King and Wilson said.

    But although many labs have since confirmed that our nuclear DNA is 98% to 99% identical to that of chimpanzees, few have taken on the quest to find the differences that matter. “It was one of those fields that fell through the cracks,” says Ajit Varki, a glycobiologist at the University of California, San Diego (UCSD), who has recently surveyed the known differences between humans and apes.

    The tools and funds for sequencing large amounts of DNA rapidly weren't available until recently. And this line of work required a bold shift in thinking for most labs. “Most people in comparative genetics ask what's similar and conserved,” says molecular evolutionist Caro-Beth Stewart at the State University of New York, Albany, who trained with Wilson. “Just a few of us have been trying to ask: What's different? What makes us human?”

    The biochemical trail

    One way to answer that question is to start with biochemical differences, and then trace them back to their genetic origins. That approach has yielded its first big payoff, to be reported in the October issue of the American Journal of Physical Anthropology. After studying tissues and blood samples from the great apes and 60 humans from diverse ethnic groups, Varki and his colleagues Elaine Muchmore and Sandra Diaz at UCSD were surprised to find that human cells are missing a particular form of sialic acid, a type of sugar, found in all other mammals studied so far, including the great apes. “Now you've got something that is changing the surfaces of all cells in the body,” says Varki.

    The sialic acid molecule is found on the surface of every cell in the body, and previous work has shown that it can take on a surprising variety of roles. In some cases, it acts as a receptor for messages from other cells, but pathogens including those that cause cholera, influenza, and malaria also use it to gain a foothold on the cell. Chimpanzees are not as susceptible as humans to some of these pathogens, and the researchers speculate that this molecular change may be part of the reason why. There are even hints that sialic acid may be involved in cellular communication during brain development and function, says Varki.

    The chimp and mammalian form of sialic acid, known as N-glycolyl-neuraminic acid (Neu5Gc), is modified from the basic form of the compound (called Neu5Ac) by the addition of an oxygen atom. But the human form is simply the basic acid, lacking the additional oxygen atom. That changes the shape of the molecule in a region that could alter how it is recognized by other molecules, whether pathogens or cellular messengers, says Varki. Other researchers are intrigued: “I think this is really nice work,” says Svante Pääbo, a molecular geneticist at the University of Munich and leader of a German effort to sequence ape genomes. “It's really the first difference [in the expression of a gene product] that has come up.”

    In recent months, Varki's team has traced this difference back to a gene that codes for a hydroxylase enzyme in apes, which adds the extra oxygen atom. Humans are missing a 92-base pair section of this gene, according to new results from a team of Japanese researchers, led by glycobiologists Akemi Suzuki and Yasunori Kozutsumi at Tokyo Metropolitan Institute. Varki's lab has a paper in press in the Proceedings of the National Academy of Sciences that includes similar studies on the gene in the great apes; they are now also working on human fossils with Pääbo's lab to see if this change was recent in humans.

    Yet the question still remains: Does this biochemical difference matter? No one has yet identified a specific function altered by the loss of this particular version of the molecule. “Until we know what this gene does, I remain skeptical of its importance,” says Harvard University molecular anthropologist Maryellen Ruvolo. In search of clues, Kozutsumi is raising mice where the hydroxylase enzyme gene is knocked out, as it is in humans, to see if they produce the simpler form of sialic acid seen in humans—and if they have any anatomical or behavioral differences. “Maybe their mice will speak,” jokes Varki, whose own lab is raising transgenic mice that overexpress hydroxylase in the brain to see if it affects anatomy or behavior.

    But Ruvolo doesn't expect dramatic results: “Somehow, I can't believe that switching off the expression of this gene in adult humans is responsible for a myriad of important changes in human evolution.” Most researchers, including Varki, agree that the difference won't come down to a single genetic change. “There won't be one magic gene that makes us human,” says King, now at the University of Washington, Seattle.

    Remodeling chromosomes

    Another tactic for finding the key differences is to start with chromosomes, because even a simple karyotype—a picture of the chromosomes—reveals that other apes have 24 pairs of chromosomes while humans have 23 pairs. While 18 of the 23 pairs are virtually identical in humans and other apes, it has long been known that the remaining pairs have segments that have been reshuffled since the great apes went their separate ways (Science, 19 March 1982, p. 1525). In recent years, several labs in the United States and Germany have been homing in on this chromosomal remodeling.

    For example, the gene mutated in a rare human disorder called adrenoleukodystrophy—made famous by the movie Lorenzo's Oil—turns up on the X chromosome in chimpanzees and humans. But nonfunctioning copies of it also have been found in different places in the chimpanzee and human genomes (Science, 12 June, p. 1693). Chromosomes 4, 9, and 12 have also been remodeled differently in chimps and humans, according to work published last month in Genomics by human geneticists David Nelson and Elizabeth Nickerson of Baylor College of Medicine in Houston. For example, the researchers spotted a chunk of DNA that sits on chromosome 4 in all apes and humans, but in chimpanzees it has moved to a new spot on the same chromosome and been inverted. The translocated chunk includes a gene called AF4, which codes for a transcription factor—and is mutated in some forms of acute leukemia in humans. Because apes are much less prone to certain cancers, including leukemia, this is an intriguing finding, raising the possibility that the inversion alters the factor's expression in chimps and so helps protect them from leukemia, says Nelson.

    Still, the functional significance of this and other chromosomal differences between humans and other apes is unknown. One possibility, says Nelson, is that similar remodeling disrupted specific genes in our primate ancestors, altering human physiology or function. Because sperm and eggs can't mingle their genetic material unless the chromosomes line up properly, he adds that such rearrangements could have created a reproductive barrier between our ancestors and other primates—the first step in creating new species like our own.

    But others, such as Pääbo, think that chromosomal rearrangements at influential sites are rare and so are skeptical that they play a major role in the differences between chimp and human. Pääbo and King think instead that the most promising research avenue is to identify small sequence differences that subtly change the expression of genes that regulate the timing of development, such as those that code for transcription factors that might lengthen the growth period of the brain and, hence, allow more complex brain structure in human fetuses.

    The sequencing efforts that may reveal these differences are now under way. Pääbo's group in Munich and Leipzig has sequenced a 10,156-base pair segment of DNA in the X chromosome of humans and chimpanzees, confirming again that they are about 99% similar. Now they're seeking differences in the expression of the identified genes in the brain and in the immune system.

    And at GenoPlex Inc., a Denver-based company founded last year by University of Colorado Health Sciences Center geneticists Jim Sikela and Tom Johnson, researchers have come up with a rapid method to find meaningful sequence differences between humans and chimps. After sequencing a stretch of DNA in each species, they count two different types of nucleotide differences: those that change the structure and function of a protein product, and silent substitutions that don't. If the ratio of replacement to silent substitutions is high, they consider that the gene sequence is likely to have undergone a functional change that was selected for in humans.
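    The replacement-versus-silent classification can be sketched in a few lines. The toy sequences and function names below are made up, and a real analysis would also normalize each count by the number of possible sites of each kind before taking the ratio:

    ```python
    # Standard genetic code, laid out in the classic TCAG ordering.
    BASES = "TCAG"
    AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
    CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
                   for i, a in enumerate(BASES)
                   for j, b in enumerate(BASES)
                   for k, c in enumerate(BASES)}

    def substitution_counts(seq1, seq2):
        """Classify codon differences between two aligned coding sequences
        as replacement (amino-acid-changing) or silent."""
        replacement = silent = 0
        for pos in range(0, len(seq1), 3):
            c1, c2 = seq1[pos:pos + 3], seq2[pos:pos + 3]
            if c1 != c2:
                if CODON_TABLE[c1] == CODON_TABLE[c2]:
                    silent += 1
                else:
                    replacement += 1
        return replacement, silent

    # Toy sequences: GGT->GGC is silent (both glycine), while AAA->GAA
    # swaps lysine for glutamate, a replacement substitution.
    print(substitution_counts("ATGGGTAAA", "ATGGGCGAA"))  # (1, 1)
    ```

    A gene with many replacement substitutions but few silent ones is the signature GenoPlex looks for, since neutral drift alone tends to favor silent changes.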

    Preliminary results suggest that they have found uniquely human genes involved in AIDS susceptibility and learning and memory, says Walter Messier, an evolutionary biologist at the company. The firm has submitted patents on novel uses of these gene sequences, which they hope may become targets for new drugs.

    Much of this work is in its infancy, but researchers say they are poised on the verge of a brave new world where they will be able to identify and tinker with the DNA that makes us human—and will face new ethical dilemmas. “What happens if scientists identify a human gene that controls development of the larynx—a gene that might give chimpanzees the anatomy needed for speech?” asks Edwin McConkey, a molecular biologist at the University of Colorado, Boulder. “Can you imagine the ethical debate involved in whether or not to create transgenic chimps? It will open a real Pandora's box.”


    Pushing a Primate Genome Project

    1. Ann Gibbons

    Paleoanthropologists have long mined the stones and bones left by ancient humans for evidence of our past. But locked in the DNA of apes in zoos and tropical forests is an untapped treasure trove of clues about how we became human. “All sorts of attention is lavished on every new early human fossil out of Africa,” says Edwin McConkey, a molecular biologist at the University of Colorado, Boulder. “But chimpanzees are astonishingly close to us genetically. Isn't it time to study these living links as well?”

    McConkey is one of the leaders of a push for an international program to sequence DNA from chimpanzees and other great apes. The effort would supplement existing searches for the key molecular differences that set humans apart from our primate relatives (see main text). And now is the time to start, McConkey and molecular evolutionist Morris Goodman of Wayne State University in Detroit argued in a recent article in Trends in Genetics. Next month in Chicago, at a meeting organized by Goodman, a group of influential researchers will make the case for such a project, pointing to the current ability to scan genomes quickly and the ballooning amount of human genetic data. “All the technology is in place now,” says Mary-Claire King, a geneticist at the University of Washington, Seattle.

    Genome leaders say they are on board, although in the United States no one is talking about funding anywhere near the scale of the $1.5 billion Human Genome Project. “The timing is right for some pilot projects” to lay the groundwork for a complete primate genome project perhaps 5 years from now, says Francis Collins, director of the National Human Genome Research Institute (NHGRI). But while American scientists are talking about a project, work is already under way in Germany, where the German human genome project has awarded $1.1 million to the newly formed Max Planck Institute for Evolutionary Anthropology in Leipzig for comparative studies of human and primate genomes. They've already started with an effort to sequence comparable segments of DNA from six chromosomes in humans and chimpanzees, says Svante Pääbo, a molecular geneticist at the University of Munich who heads the work. “I think the majority of primate comparative genomics in the next few years is going to be done in Germany at Leipzig,” says McConkey.

    Other groups are gearing up in the United States, including one at Yerkes Regional Primate Research Center in Atlanta, where researchers opened a Living Links Center with $250,000 from Emory University to compare chimpanzees and humans using genetic, neuroanatomical, behavioral, and cognitive approaches. No sequencing is under way yet, but the center is negotiating an agreement with a Denver biotech firm, GenoPlex Inc., to do high-volume DNA screening. “What's strange is that several people have had the same idea at exactly the same time,” says Thomas Insel, director of Yerkes.

    So far, researchers have done little more than prove the worth of their tools and affirm the overall similarity of the human and chimp genomes. But the pilot projects are promising. For example, Collins's team at NHGRI tested DNA chips—a DNA-scanning technology made by Affymetrix of Santa Clara, California—to compare the sequence of a 3400-base pair segment of the human breast cancer gene BRCA1 with the same gene in chimpanzees, gorillas, and orangutans. Using a large segment of human DNA as a reference on the chip, the researchers searched for sequence variations in the same gene in other apes. The technology was remarkably fast and accurate, Collins's group reported in February in Nature Genetics. “The DNA chip worked very well,” says Collins. “It's a way to look at primate sequences faster than sequencing each primate's DNA de novo” and should keep down the cost of a full-scale primate genome effort once DNA chips become more affordable.
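    The chip-based comparison boils down to scanning an ape sequence against the human reference position by position. A minimal sketch (the sequences and the function name are made up):

    ```python
    def variant_positions(reference, sample):
        """Positions where an aligned ape sequence differs from the
        human reference, the kind of scan a resequencing chip performs."""
        return [i for i, (r, s) in enumerate(zip(reference, sample)) if r != s]

    print(variant_positions("ACGTACGT", "ACGTTCGT"))  # [4]
    ```

    The chip does this hybridization-based comparison in parallel across the whole segment, which is why it beats sequencing each primate's DNA from scratch.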

    But chips aren't the only candidate technology, says Collins: “Other people are playing with gel-based methods and mass spectrometry.” When the methods are refined and more affordable in 5 years or so, he predicts that human genome sequencing will have yielded a list of hot genes to study in primates, such as those that help equip humans for language, higher order brain function, and upright walking. Says King: “It's the next logical step of the Human Genome Project.”


    Vocal Critic Gets Chance to Put His Ideas Into Practice

    1. Dennis Normile

    Akito Arima, a physicist turned politician, joins the government at a time of growing pressure on the country's education system

    Tokyo—Physicist Akito Arima is one of Japan's most persistent critics of the country's science and educational policies. As president of the University of Tokyo from 1989 to 1993, he led politicians, business leaders, and journalists on tours of the university's decrepit labs to showcase the nation's neglect of basic science. As a member of governmental advisory councils, he has pushed for a fixed-term system to replace the lifetime employment of university professors, campaigned for bright students to be allowed into college a year early, and advocated outside reviews of teaching and research activities at universities and national labs.

    Now, says Arima, those recommendations “have boomeranged.” At age 67, he has become head of Monbusho, the Ministry of Education, Science, Sports and Culture (Science, 7 August, p. 759). The first bona fide scientist to hold the job in 50 years, Arima will have a chance to implement some of his own advice. He must also confront a plateful of issues, from the merger of Monbusho with the Science and Technology Agency (STA) to reforming Japan's rigid educational system.

    His colleagues are betting on him to succeed. “I am hoping he will be able to do a lot for basic science,” says Sakue Yamada, a director of the High-Energy Accelerator Research Organization (KEK) in Tsukuba. Yoji Takikawa, a physics teacher at International Christian University High School and an education activist, says, “It's the first time we've ever had a minister who really understands the problems and challenges facing [primary and secondary] science education.”

    Educated at the University of Tokyo, Arima joined the faculty and compiled a distinguished record in theoretical nuclear physics. But it was only after being elected its president in 1989 that he became a force in science policy. “He really succeeded in raising science budgets,” says Yamada. When his term ended, he became president of the Institute of Physical and Chemical Research (RIKEN), a world-class facility outside Tokyo run by STA. He also served on numerous governmental advisory committees, using them as a forum to push his reform agenda.

    With two papers last year in Physical Review Letters, Arima says, “I still am enjoying physics.” A prize-winning haiku poet, he is also a regular at school science fairs—including an appearance as Galileo to explain the movement of the planets. That playfulness, combined with his diminutive stature and ever-ready grin, has made him the most widely recognized spokesperson for Japan's scientific community.

    That reputation helped to earn him a place at the top of the ticket for the ruling Liberal Democratic Party in the July campaign for the upper house of the Diet, assuring his election. He received the Monbusho portfolio in the government formed 30 July. Although he's new to electoral politics, Arima is intimately familiar with the issues facing the ministry. For example, he served on the government-wide Administrative Reform Council that recommended its merger with STA. Until this spring, Arima also chaired the Central Council for Education, a Monbusho advisory body that has proposed a shorter school year.

    A major challenge for the new minister is reconciling Japan's widely reported plan to dramatically boost spending on science with efforts to rein in a budget deficit swelled by 8 years of economic stagnation. The clash has created a situation in which overall science spending will rise by 21% at the same time that the operating budget for many big science facilities has been cut by 15% (Science, 1 May, p. 669). Arima says that plans for an additional 15% cut have been shelved, although it is not clear whether budgets will be restored to previous levels. And giving institute directors more discretion in shifting money between budget categories is “beyond my power,” Arima says.

    In the following edited transcript of a 25 August interview in his office, Arima explores four major issues he must face.

    On support for academic science:

    I'm very happy about the rapid increase in the budget for science and technology as a whole. … But I have one criticism. [Monbusho's] share is too small. The most important contributions in basic science and technology come from university professors. However, [the largest] share of the budget for science and technology goes to other ministries. Only 21% of [total government spending on science] goes to [Monbusho's] national laboratories and universities. I want to increase this.

    On program evaluation:

    I'm stubborn on this, continuously saying we need [accountability] plus external reviews. But I'm very happy [with the progress we've made]. When I introduced [external reviews] to the University of Tokyo in 1993, almost all the professors were very unhappy. However, many universities and many national laboratories now have undergone external reviews. So, the question now is if these external reviews are really effective. Monbusho is trying to establish an institute to oversee and participate in the external reviews. If this institute gives a good report about a certain project, then that project will [get greater] support. But if this institute says no, this university's activities are very bad, then we will try to reduce their budget.

    On merging STA and Monbusho:

    STA has its own culture and the Ministry of Education has its own culture. At the Ministry of Education, basically, individual professors or individual researchers propose projects, [reviewers] argue which is best, and finally the Ministry of Education decides. On the other hand, at STA, [new initiatives] are mostly top-down policies. So last year I was very reluctant to merge these two ministries. However, now I see the good aspects. For example, the Ministry of Education is interested in education, and STA is interested in science education. Up to now, these two [activities] have been done separately. Now we have already started to merge them. Or take STA's big budget for space science, accelerators, oceanographic [research], and so on. We have similar activities in universities. If we merge these two ministries, big science will get stronger support and we can avoid duplications and inefficiencies.

    [The idea of keeping STA as a separate agency under Monbusho] could happen. In that case, the different cultures will be preserved. We also have a project team that is [discussing whether] to merge, say, the Institute of Space and Astronautical Science with the National Space Development Agency.

    On educational reform:

    We're very good at elementary education. If you look at international comparisons, Japanese students do very well—third in the world in science and mathematics. However, the problem is that students have to learn [by memorizing] and have no chance to practice their knowledge. So we decided to close schools on Saturdays. We will also try to increase the number of [educational activities] outside school. I hope kids will have more chance to learn by themselves and to develop their individuality, creativity, and originality.

    At the same time, our Central Committee for Education has recommended two things. If there are certain students who learn things slowly, our teachers should educate those slow students more carefully. At the same time, we should give [bright students] an advanced education.


    Legislators Get Creative With New Crop of Earmarks

    1. David Malakoff

    A budget surplus and a fall election invigorate the traditional practice of channeling money to specific universities

    A researcher who wins a multimillion-dollar federal grant can usually recall in excruciating detail exactly what it took to prevail over some stiff competition. But not Jim Bose, an engineer at Oklahoma State University in Stillwater. Bose says he can't explain the intricacies of how his lab received a $2.5 million grant earlier this year to design a “smart bridge” that automatically energizes ice-melting heat pumps when the weather turns nasty. The reason: The project didn't have to go through the labyrinthine peer-review process, because one of his state's senators tacked the 3-year award onto a mammoth federal transportation spending bill signed into law in June.

    “I'm not sure how it happened, but the university is real pleased, and so am I,” Bose says about getting a chance to work on the innovative concept. He is, however, a bit uncomfortable about how the windfall came about. “When you get the money, you think it's a great way to go,” he says. “But when you see someone else get it, you kind of wonder about the process.”

    Bose isn't the only scientist with mixed emotions about the long-controversial practice of earmarking. Each year thousands of researchers benefit from such efforts by well-positioned lawmakers to funnel money to local institutions, often over objections from Administration officials. Budget woes and efforts by Republican leaders to make good on promises to cut government waste had curtailed such spending in recent years. But legislators' taste for bacon has been revived by a predicted budget surplus, the Supreme Court's rejection of the president's authority to veto individual items in a spending bill, and the fall election campaign. “The political trend is very strong; … there is a tendency toward spreading the largesse,” says Nils Hasselmo, the new president of the Association of American Universities, a group of 62 leading research institutions that has taken a strong stand against earmarking. Congress won't finish work on the 1999 budget until next month, but its actions so far suggest that earmarks for science projects are running close to last year's total of a half-billion dollars, which, according to a survey by the Chronicle of Higher Education, represented a 67% jump from the 1995 level.

    Exhibit A in this year's panoply of pork is the $218 billion Transportation Equity Act for the 21st Century. In addition to providing funds for campus buildings with no apparent connection to transportation studies, the law created what amounts to a mega-earmark: $132 million to 23 specific institutions—including Oklahoma State—interested in studying everything from global warming to auto accident injuries. Although projects like the self-heating bridge may have merit, says Tom Maze, head of the Center for Transportation Research and Education at Iowa State University in Ames, the earmarks mean that a growing share of a stagnant transportation research budget is not subject to peer review. In a nod to the value of academic competition, Congress did stipulate that the department spend another $60 million for 10 university-based centers chosen by peer review. But that doesn't compensate for the growing restrictions on the research budget, says Maze: “They used to hand out bridges; now they are doling out transportation research centers with a minimum of oversight.”

    Just how much fat is added to a bill often depends on the temperament of the committee chairs who oversee the 13 individual appropriations bills that fund all federal activities. For instance, relatively little biomedical research pork has made its way into the House version of the key spending bill, mainly because Representative John Porter (R-IL), who chairs the appropriations subcommittee for education, labor, and health and human services, persuaded his colleagues that they should allow peer reviewers to make funding selections. Indeed, the report his subcommittee produced in June—which recommended a 9.1% increase for the National Institutes of Health—pledges to give NIH staffers “no directives” on centers or on “particular diseases.” But the Senate version of the bill, to be crafted this week by a panel led by Pennsylvania Republican Arlen Specter, is expected to be more explicit about its preferences.

    Even the relatively clean NIH bill, however, indicates that lawmakers are getting more sophisticated about avoiding earmarking controversies by using seemingly vague recommendations. The House report, for example, calls on the National Cancer Institute (NCI) to fund a new center for proton beam therapy, an exotic type of surgery that uses high-energy particles for scalpels. Two such centers already exist, one on each coast, so the committee encouraged NCI to “assist in efforts to convert an existing online accelerator into a proton beam therapy center to serve populations which do not have access to this therapy.” Translation: NCI should help Indiana University convert its old physics cyclotron in Bloomington to a medical unit. University staffers say they have been searching for a backer for the project ever since the National Science Foundation (NSF) began to trim support for nuclear physics a couple of years ago.

    One traditional argument in favor of earmarks is that they channel funds to institutions, states, and regions that have received less than their fair share of federal R&D support (see sidebar). This year, that argument underpinned a Senate panel's attack on an elite group of schools that get the lion's share of peer-reviewed NSF grants. Lawmakers on the panel, which sets spending on NSF and other independent agencies, approved language that would restrict the “top 100” from competing for $18 million in funding for a half-dozen new, university-based research and training centers, three focused on information technology and three on applied molecular biology. Such a competition, according to the committee, will help to overcome “any bias toward more established institutions.”

    The restricted competition is intended to stem a geographic brain drain that shrinks the country's overall R&D capacity, says Gary Strobel, a microbiologist at Montana State University in Bozeman. “Most universities in rural states are better positioned to do work in biotechnology than in any other field because of their roots as land-grant agricultural schools,” he says. “But the problem is that most of our young people get trained and go elsewhere for jobs because there aren't enough opportunities at home.”

    NSF officials, who point out that geographic diversity could be one of several factors in making an award, say barring the top universities from a competition is a bad idea. “We'd never do it,” says Mary Clutter, head of NSF's biology directorate. “It would be sheer folly to exclude the best universities in the country.”

    A better alternative to earmarks, say some scientists and policy-makers, would be for Congress to provide funds aimed specifically at improving the research capacity of so-called “have-not” institutions and regions, and then use peer review to select individual award winners. NSF began such a program almost 20 years ago, called the Experimental Program to Stimulate Competitive Research (EPSCoR), and it has grown to a $100 million effort at eight agencies. “You might call EPSCoR an earmark, but it's not damaging to the system,” says Erich Bloch, a former NSF director now at the Council on Competitiveness in Washington, D.C.

    Such programs are unlikely to dampen the taste for scientific pork, however, and Maze believes researchers must learn to live with the practice. The rush to adorn the transportation bill with earmarks, he believes, “clearly demonstrated the power of winning university sponsorship through political muscle rather than through superior intellectual resources.”


    Big Bucks for Big Sky Country

    1. Andrew Lawler

    Researchers at Montana State University studying the geology and biology of Yellowstone National Park, just across the Wyoming border, are expecting a $2 million geyser of federal funding, thanks to the kindness of a U.S. senator. If it materializes, the money would almost double their current budget. And that's not all. The funding joins three other Montana earmarks—unrequested funding targeted to a specific institution—in the Senate's version of the 1999 spending bill for NASA that total $7.5 million, and another project intended to benefit Montana that's included in a portion of the bill that covers the National Science Foundation (NSF).

    These earmarks come courtesy of Senator Conrad Burns (R-MT), a member of the appropriations subcommittee that funds NASA and NSF and chair of the panel that authorizes spending for both agencies. “Our senator made it clear he wanted to do something,” recalls Steve Running, a forest ecologist at the University of Montana (UM), Missoula, which is in line for one of the earmarks, a $3.5-million-a-year natural resources center to apply data from NASA's Earth Observing System to everything from fire detection to rangeland productivity. “I could have kept quiet, but my university president encouraged me to come up with ideas.” Adds Bob Swenson, the recently retired vice president of research at Montana State who helped with the Yellowstone proposal, “Everyone else is doing it, so we're figuring out how we can do it, too.” The university's efforts to secure funding have been assisted by Van Scoyoc Associates, a Washington lobbying firm.

    Burns's prominence makes space agency managers reluctant to criticize his pet projects, despite the fact that NASA—which has a declining budget—did not request the project funding. NASA Administrator Dan Goldin has, in fact, assiduously courted Burns, visiting Montana in a bid to influence his agency's annual budget and the space station program. And 2 years ago a bevy of top federal science officials, including Neal Lane, then NSF director and now the president's science adviser, visited the university's biological field station at Flathead Lake for the annual meeting of a federally funded program to boost the state's research capacity.

    Burns is not the first politician to bring home the federal research bacon to Montana. But he has become one of the more active participants on the national science scene. MSE-Technologies in Butte, for example, has received pork funding for several years to work on a variety of aerospace engineering efforts. Its latest slice, $2 million for “high-priority aerospace technology,” continues that arrangement. “Congress feels this particular organization is the best venue for the government to do this work,” says Dennis Bushnell, chief scientist at NASA's Langley Research Center in Hampton, Virginia, which oversees the work.

    In addition to the Yellowstone-related research, Montana State is in line for $2 million to further develop a technology that could someday help record the simultaneous actions of thousands of neurons. Swenson says the technique has attracted some commercial interest but that the main goal of both projects is to carry out good research. “The bottom line is that we can't embarrass ourselves or our senator by doing crappy science.”

    The NSF earmark is less obvious—and less welcome. The committee orders NSF to fund, “through a competitive process, an additional LTER site for the study of a pristine, inland, mountain wilderness area. Preferences should be given to sites with established research facilities.” LTER stands for Long Term Ecological Research, a network of 21 sites from Alaska to Antarctica at which scientists collect and exchange data on biodiversity in a variety of ecological settings. In fact, the language was tailored by Burns's staffers to cover work going on at Flathead Lake led by UM ecologist Jack Stanford, who already has NSF funding for a range of ecological studies.

    But Stanford says his team “doesn't need any special arrangement” to extend its work. Although he says he has discussed his research with Burns and believes that the LTER program “has holes in the northern Rockies,” he says any LTER competition should be open to anyone. He says he was not aware of the language in the Senate bill. NSF officials say they welcome the “vote of confidence” in the program but have no plans to hold a competition anytime soon. Indeed, says Doug Collins, who runs the program, “we're gearing up for a 20-year review … and it wouldn't make sense to add new sites just before we do that.” But Collins says NSF will abide by any congressional mandate.

    Whatever the ultimate fate of the earmarks, Swenson, Running, and other Montanans argue that they are simply trying to level the playing field. “It's our turn,” says Running. “That's probably not a really legitimate justification, but we've not been benefiting like others have.” Adds Jon Lindgren, Burns's deputy press secretary: “We have people who can do excellent research. So if it can be done in Montana, we will try to make sure it is.”


    How a Growth Control Path Takes a Wrong Turn to Cancer

    1. Elizabeth Pennisi

    As researchers work out how the Wnt pathway controls growth and development, they are getting a better grasp on the causes of cancer

    Biologists these days often find themselves exploring isolated corners of the cell's molecular labyrinths. But every so often, the trails they have been following converge unexpectedly, and a unified picture emerges of a previously mysterious cell function. Recently, researchers studying one of the cell's key developmental and growth regulatory pathways—called the Wnt pathway after the protein that sets it in motion—have had that happy experience.

    Over the past year or two, a confluence of evidence from molecular and cell biology as well as from research on development, cancer, and the immune system is providing a good look at how the pathway conveys signals all the way from the cell surface, where the Wnt protein binds to its receptor, to at least one gene in the nucleus. Out of this confluence is coming a better understanding not just of embryonic development in species ranging from fruit flies to humans but also of cancer.

    Because activation of the Wnt pathway stimulates cell growth, researchers had long suspected that too much Wnt signaling could cause problems. The new work bears out those suspicions, showing how damage to a well-known tumor suppressor gene could turn on an equally prominent oncogene via the Wnt pathway and lead to cancer.

    The tumor suppressor is the adenomatous polyposis coli gene (APC), which is lost or inactivated in some 85% of colorectal cancers. And as described on page 1509, the oncogene is the c-MYC gene. Although researchers linked inappropriate c-MYC activation to Burkitt's lymphoma and lung, colon, and other cancers 20 years ago, they hadn't been able to figure out exactly what causes c-MYC expression to go awry in most cases. But cancer geneticists Tong-Chuan He, Bert Vogelstein, Kenneth Kinzler, and their colleagues at Johns Hopkins University School of Medicine in Baltimore, Maryland, now report that they have identified one of the genes turned on by Wnt signals—and it's none other than c-MYC.

    Normally, it seems, APC instructs the Wnt pathway to keep c-MYC expression in check until the right signal comes along, say, to stimulate the cell growth needed for embryonic development. But if APC is missing or inactive, c-MYC will be active all the time, causing tumor growth. “This paper is a major, unexpected contribution to our understanding of not only normal c-MYC control but [also] how the loss of a tumor suppressor can result in abnormal activation of c-MYC,” comments Kenneth Marcu, a molecular biologist at the State University of New York (SUNY), Stony Brook.

    As they learn more about the Wnt pathway's involvement in cancer, researchers are also becoming more optimistic that they can put their results to work developing new anticancer drugs that act by blocking c-MYC activation. “We have a pretty good shot at doing something about colon cancer, now that we know a lot about this pathway,” says Paul Polakis, a biochemist at Onyx Pharmaceuticals Inc. in Richmond, California. The potential benefits may not be limited to colon cancer, because other recent work suggests that Wnt pathway malfunctioning may contribute to the development of additional cancers, including the dangerous skin cancer melanoma and cancers of the prostate, liver, and possibly the breast.

    Following the Wnt trail

    Cancer researchers originally discovered the gene for Wnt, a protein that conveys growth and developmental signals between cells, 16 years ago in mouse mammary tumors. At the time, the gene was called Int-1 because it became activated when the mouse mammary tumor virus inserted—or integrated—next to it in the genome. This abnormal activation led to tumors in the mice, marking Int-1 as an oncogene. But a 1987 discovery showed that the gene has a role in normal embryonic development as well: It is the mouse version of wingless (wg), a developmental control gene first found in the fruit fly. Since then, much evidence has shown that Int-1 (which was subsequently rechristened Wnt-1, a melding of wingless and Int) and its relatives control such aspects of development as the formation of the central nervous system.

    Signaling the nucleus.

    In normal colon cells (top), β-catenin is either held by E-cadherin or destroyed, preventing it from reaching the nucleus, where it would relieve the repression of c-MYC's growth signals by groucho and CBP. But Wnt signals (bottom left) or mutations that block formation of the complex that targets β-catenin for destruction (bottom right) allow β-catenin buildup, c-MYC activation, and thus cell proliferation.


    While the mouse mammary tumor link suggested that Wnt-1 might also be an oncogene in humans, no evidence for that has been found. Beginning in the 1980s, however, evidence began building that the pathway that conveys developmental and other signals from Wnt-1 to the nucleus could have its own role in cancer.

    One clue came in the 1980s from Walter Birchmeier, a cell biologist at the Max Delbrück Center for Molecular Medicine in Berlin, Germany. At the time, his group was studying E-cadherin, a protein that projects out of the cell membrane and provides the “glue” that helps cells stick to one another. As a way of finding out more about E-cadherin function, Birchmeier began looking for molecules that interact with it in the cell. One of the proteins his search turned up was β-catenin. Fruit fly geneticists had already identified this protein as a component of the Wnt pathway by showing that adding β-catenin to insects with Wnt mutations restores proper development.

    Soon Birchmeier's group had expanded its initial study of cell adhesion and was screening for additional components of the Wnt pathway by looking for molecules that interact with β-catenin. They found several. One, Birchmeier and Jürgen Behrens reported in 1996, turned out to be a member of the Lef/Tcf family of transcription factors, which regulate gene expression in certain immune cells. Hans Clevers, an immunologist at the University of Utrecht in the Netherlands, made a similar discovery at about the same time.

    While everyone expected that Wnt signals would have to alter gene expression in order to play a role in development and cancer, no one had known how until then. “That was a major contribution in figuring out how the Wnt signal [worked] in the nucleus,” says Polakis. Researchers could find no role of E-cadherin itself in Wnt signaling, however, since it normally holds onto its β-catenin. The β-catenin that makes it to the nucleus and interacts with Lef/Tcf comes from a different cellular pool.

    Meanwhile, another discovery had also helped draw researchers' attention to the Wnt pathway. In 1993, the Polakis and Vogelstein groups both showed that the tumor suppressor gene APC was likely to be part of the pathway. The truncated, inactive form of the APC protein seen in most colon tumor cells, they found, resulted in β-catenin accumulation in the nucleus, where it might turn on genes. The normal form of APC, in contrast, prevented that accumulation. The result provided a possible explanation of why APC loss leads to cancer: by turning on the Wnt pathway—and the expression of certain genes—even under conditions when it would normally be shut down.

    Indeed, subsequent experiments confirmed this scenario. Clevers and the Kinzler-Vogelstein team have shown that in cells lacking a working APC protein, β-catenin accumulates in the nucleus, where it gets attached to Tcf-4, the Lef/Tcf transcription factor active in gut tissue, and presumably stimulates tumor cell growth. In about 10% of colon cancer cells, the Kinzler-Vogelstein team found, mutations in β-catenin itself have the same effect. “In a sense, you can't get colon cancer unless you disrupt this pathway,” says molecular biologist Jan Kitajewski at Columbia University in New York City.

    And not just colon cancer. Polakis and his colleagues have evidence that the Wnt pathway is disrupted in about one-third of melanomas, and Christine Perret's group at the University of Paris has recently found mutations in the β-catenin gene in mouse and human liver tumors. Some prostate and breast cancers may also arise from faulty Wnt-pathway components.

    Checking Wnt's signal

    Now researchers are learning just how APC and other components of the Wnt pathway work together to transmit or silence Wnt's growth-stimulating signal. Researchers including Randall Moon and David Kimelman at the University of Washington School of Medicine in Seattle have shown that unlike tumor cells, nonproliferating cells have very little free β-catenin in the absence of Wnt signaling. Much of the β-catenin is tethered to cadherin molecules that protrude from the cell membrane. The rest is bound up in what several research teams have shown is a multiprotein complex with APC and two other proteins: a kinase enzyme called GSK-3β that tags β-catenin for degradation by adding phosphate groups to it and a protein newly discovered this year that is known either as axin or conductin, depending on the cell type.

    Birchmeier and Behrens found that conductin apparently assembles this complex by linking the components together. This allows β-catenin phosphorylation and degradation to proceed, keeping the Wnt pathway in check. But as shown by the combined efforts of many labs, when Wnt binds to its receptor, that signal is relayed to a protein called disheveled and from there to GSK-3β. As a result, β-catenin degradation is blocked and it can travel to the nucleus and interact with transcription factors, thereby regulating genes.

    Damage to any part of this complex—not just APC—can lead to abnormal signaling. Developmental biologist Frank Costantini's team at Columbia University showed, for example, that mice with a mutated axin gene grow a second head and neural tube. Without axin, it seems, the protein complex can't form. As a result, cells divide when they shouldn't, resulting in extra growth, and in the case of the mutant mouse embryos, two body axes instead of one. “Axin is there to keep the [Wnt-1] pathway quiet until the [Wnt] signal comes along,” Costantini says.

    Inappropriate activation of the Wnt pathway is apparently so dangerous that the cell has evolved multiple ways of keeping it in check. Last year, results from Kimelman and Moon's team suggested that the nuclei of developing frog embryos contain factors that repress the transcription of genes activated by the Wnt pathway. In April, at a cell biology meeting in Heidelberg, Germany, both Mariann Bienz, a developmental biologist at the Medical Research Council Laboratory of Molecular Biology in Cambridge, United Kingdom, and Clevers described two more proteins that interact with Lef1/Tcf proteins much as β-catenin does, but that exert the opposite effect, squelching any gene activation by the transcription factor.

    The inhibitory protein identified by the Bienz group is the CREB binding protein (CBP), which is part of the complex of proteins that regulates gene transcription, while the one found by Clevers's team, in collaboration with U.S. researchers, is called groucho because in fruit flies the mutated protein leads to thick bristles reminiscent of Groucho Marx's mustache. “[The] model says that Tcf with [groucho or CBP] is sitting on the target genes and repressing their transcription,” he explains. “β-catenin overrides this repression.”

    Still, failure of the Wnt pathway can be just as catastrophic as too much activity. In the August Nature Genetics, Clevers and his colleagues describe how they shut down the pathway in mice by knocking out the gene for Tcf-4. The animals died shortly after birth, apparently because their gut cells failed to maintain a source of immature cells. Instead, the gut lining was made up solely of differentiated cells. “With no Tcf-4 … you fail to maintain the proliferative [cell type],” Clevers explains. As a result, the digestive tract could not develop properly and thus could not absorb food.

    β-catenin gone wild

    Although these experiments revealed how Wnt's instructions are carried out in the nucleus and also provided information about APC's role as a tumor suppressor, they left a major mystery: the identity of cancer-promoting genes at the end of the Wnt pathway. And that's where the new results from the Kinzler-Vogelstein team come in.

    To track down the genes regulated by β-catenin, Kinzler, Vogelstein, and their colleagues at Johns Hopkins used a screening procedure called Serial Analysis of Gene Expression (SAGE), which they developed in 1995 to assess which genes are turned on or off in response to APC or other regulators. Because only active genes get transcribed into messenger RNAs, the researchers first made short 15-base DNA copies of all the mRNAs in colon cells lacking a functional APC gene. Then, they transferred the APC gene into the cells and again made DNA copies, or “tags,” of the mRNAs.

    By comparing all the tags in the cells with and without functional APC, the researchers were able to get an idea of which genes were affected by the tumor suppressor's presence. One of the genes whose activity it reduced most was c-MYC, which was shut down within 3 hours of the APC addition.

    “I was surprised,” Kinzler recalls. Researchers had found excess c-MYC protein in many types of colon tumors but could never tell whether that excess was a cause or a consequence of the rapid replication of those cancer cells. But now it appears that c-MYC activation is a direct consequence of APC loss and thus a primary cause of tumor cell replication.

    Further evidence for that idea came from experiments in which the Hopkins team attached DNA containing the regulatory region for the c-MYC gene to a reporter gene that codes for the luciferase enzyme, which can make cells glow in the presence of its substrate, luciferin. When they then introduced the hybrid gene, with luciferin, into human colorectal cancer cells, the cells did in fact glow, indicating that the c-MYC regulatory region was picking up an activating signal. But when they added APC protein to these cells, the glow faded—the reporter gene was shut down.

    As the first clear case in which the direct loss of a tumor suppressor leads to the activation of an oncogene, “[the Hopkins team's result] helps explain the biology of the tumors,” comments Harold Moses, a cell biologist at Vanderbilt University in Nashville, Tennessee. Adds SUNY's Marku, “This is an important result for the c-MYC people and for the APC people.”

    Important though this finding may be, much of the Wnt-cancer connection remains to be worked out, Kinzler cautions. For one, just as there are multiple controls on the activity of genes at the end of the Wnt pathway, there seems to be more than one way to get to those target genes. Shoukat Dedhar, a molecular biologist at the University of British Columbia in Vancouver, and his colleagues have been studying an enzyme called integrin-linked kinase (ILK) that some reports have implicated as a trigger for cancer. Dedhar's team has found that ILK, which is usually found attached to a cell surface protein called integrin, frees up β-catenin in the cell, but by a different route than does the Wnt signal.

    When Dedhar's group added extra ILK genes to mouse intestinal epithelial and mammary cells growing in culture, the researchers found that the resulting excess ILK protein lowered the amount of E-cadherin. As a result, not only did the cells stop adhering to one another, but the β-catenin that had been bound to the E-cadherin moved into the nucleus, turning on genes that respond to Lef1/Tcf. “It was a pretty thorough demonstration that there may be Wnt-independent ways to activate the Lef1/Tcf pathway,” says Kitajewski about Dedhar's results. And ILK may deliver a double whammy, because loss of cell adhesiveness may make a tumor cell more likely to grow into other tissues.

    But the surprising convergences that have marked Wnt studies thus far show every sign of continuing. In an upcoming report in the Proceedings of the National Academy of Sciences, Dedhar shows that ILK may feed into the Wnt pathway after all by inhibiting GSK-3β. “[The pathway] is not as linear as we have thought,” says Kitajewski. But he's quite pleased about the progress to date. “We've worked out the skeleton,” he says, “and now we're filling in the details.”
