News this Week

Science  06 Dec 2002:
Vol. 298, Issue 5600, pp. 1862


    Inquiry Turns Into OK Corral for U.K. Primate Research

    Keri Page*
    *Keri Page is an intern in the Cambridge, U.K., office of Science.

    CAMBRIDGE, U.K.—University of Cambridge officials were hoping last week for a staid review of their controversial plan to build a $36 million neuroscience center on the outskirts of town. Instead, a hearing meant to be limited to zoning issues devolved into a circuslike referendum on primate research. The inquiry was a key step in a drawn-out process that is expected to culminate in a decision on the center's fate early next year.

    The proposed facility would bring all of the university's primate research under one roof and expand work on potential treatments for everything from substance abuse and schizophrenia to Parkinson's and Alzheimer's diseases. A failure to win approval for the center—which has already landed $12 million in grants and could hire up to 60 scientists—“would have a big negative impact on neuroscience in the U.K. and Europe,” warns John Capitanio, a psychologist at the California National Primate Research Center in Davis. Prominent U.K. voices agree. Primate research would “eventually be conducted elsewhere, possibly in countries with less rigorous legislation and lower standards of animal welfare,” asserts Nancy Rothwell, president of the British Neuroscience Association.

    Such arguments appear to have convinced Prime Minister Tony Blair, who warned in a speech last May that “we cannot have vital work stifled simply because it is controversial” and has said that the center's work would be in the national interest. But the university has had a hard time convincing local authorities that it would be in the city's interests as well. The inquiry is the university's third attempt to win approval from the South Cambridgeshire District Council, which had rejected previous proposals mainly on the grounds that anticipated protests against the center would further snarl Cambridge's notorious rush-hour traffic and perhaps pose a public safety risk. After the university appealed, the central government appointed a “planning inspector” to adjudicate the matter. The inspector will send a recommendation to the U.K.'s deputy prime minister, who will have the final say.


    Protesters (top) gather outside a hearing (bottom) last week on the University of Cambridge's controversial proposal to build a primate research center; a decision is expected early next year.


    In September, the inspector, Stuart Nixon, ruled that he would not “hear evidence on public health, animal welfare, or moral arguments,” setting the stage for what university officials had expected would be a straightforward hearing on how their revised proposal accounts for protests. Those hopes were dashed on the inquiry's first day, 26 November, when Nixon reversed himself and allowed animal-rights groups to present their case. First up was Richard Wald, a lawyer representing six such organizations, who denounced the center as “scientifically flawed and unreliable” and “neither of national importance [nor] necessity.” The university had left itself in an awkward position to rebut such charges: It had barred its own scientists engaged in primate research from testifying, out of fear for their safety. That left the university's chief academic witness, Keith Peters, head of its Clinical School, virtually alone on the firing line. Peters assured the inspector that as few animals as possible would be used in experiments and that a state-of-the-art facility would be “more likely to provide better [conditions] for animal welfare.” But Peters also stated, vaguely, that a national need for primate research is “self-evident” and had gone through “particularly stringent” peer review.

    Wald pounced. He charged that Peters was “unable to say scientifically whether the underlying science can be applied to humans,” and he asserted that experimentation that “may or may not lead to a cure … basically amounts to scientific dabbling.” Peters shot back, “If you knew what the answer was, there would be no point in doing the research.” But it was evident that the animal-rights campaigners had drawn first blood.

    It was the opponents' turn to stumble when they attempted to support provocative claims that new drugs are not necessarily safer if they are tested in primates. Claiming that primate research has not yielded any insights into diseases such as atherosclerosis, cancer, and stroke, Ray Greek, medical director of a group called Europeans for Medical Advancement, concluded that “the abandonment of animal models is absolutely vital for medicine to advance.” As evidence that primate research is unnecessary, Wald referred to an Alzheimer's vaccine that had moved directly from mouse experiments into clinical trials last year. Apparently, he was unaware that in January, the clinical trials were halted after 15 patients developed severe brain inflammation. Peters knew this, however, and noted, “You will find you have shot yourself in the foot, Mr. Wald.”

    Lost amid the spectacle of such repartee was testimony on the more substantive zoning issues, primarily traffic and safety. Representatives from the local council expressed support for the center in principle but demanded that the university choose a new site farther from the city center. The current site, a police officer said, risked increased “crime and disorder.” The university held firm, however, arguing that the site's proximity to an imaging center and other labs would foster collaboration and better experiments. But the university was on less-sure footing when one official claimed that because center opponents are drawn from the “less radical end of the animal-rights movement,” their demonstrations are likely to be “less frequent, smaller, and less aggressive.” University officials did not explain how that jibed with their decision to bar researchers from testifying on safety grounds.

    The inquiry was set to wrap up on 6 December, after which Nixon will compile a report for Deputy Prime Minister John Prescott. Although his boss has embraced the center, Prescott—who is expected to make a ruling early in 2003—has not expressed an opinion publicly. His decision will be final and binding on the university and the council.


    Sequence Tells Mouse, Human Genome Secrets

    Elizabeth Pennisi

    Sequencing a genome is a little like opening a present. Researchers have been tantalized for 3 years as they've unwrapped mouse DNA. Now, they're examining the contents and finding them even more exciting. This week, researchers from around the world describe what they have found so far: discoveries about the mouse genome that are providing insights into our own genetic code and making mice an even better biomedical research tool. “The data are turning out to be as valuable as we hoped,” notes Karen Artzt, a geneticist at the University of Texas, Austin.

    The mouse genome is only the second mammalian one sequenced to date. The ability to compare it to the closely related human genome “makes the [work] most meaningful,” says Maja Bucan, a geneticist at the University of Pennsylvania in Philadelphia. An analysis of the sequence, published in the 5 December issue of Nature, is accompanied by five papers delving into the mouse's genetic characteristics and comparing them to those of humans. Some genes and other bits of sequence are similar across the board; elsewhere, there are intriguing differences.

    Humans and mice have about 30,000 genes each—about 80% of which match up, reports a team led by geneticists Robert Waterston of the Washington University Genome Sequencing Center in St. Louis, Missouri, and Kerstin Lindblad-Toh of the Whitehead Institute/MIT Center for Genome Research in Cambridge, Massachusetts. But the human genome is longer—with about 3 billion bases, compared to the mouse's 2.5 billion. And the types of genes vary: The mouse has many more genes involved in reproduction, immunity, and olfaction, for example, and the genes in the first two categories have evolved much faster in mice than in humans.

    Researchers have taken advantage of the similarities between human and mouse DNA to study where in the body genes are active. “Knowing when and where a gene is expressed facilitates enormously further functional analysis,” explains Marie-Laure Yaspo of the Max Planck Institute for Molecular Genetics in Berlin. A team led by Andrea Ballabio of the Telethon Institute of Genetics and Medicine in Naples, Italy, matched mouse sequence against the known genes along human chromosome 21. The researchers tracked the expression of each of these mouse genes—about 160 in all—at several stages of development, viewing expression in three dimensions in whole embryos. They also monitored which of these genes were active in 11 types of adult tissue, such as brain, muscle, and heart. In this way, they identified genes of interest, including one called Adarb1 that's involved in heart development and might play a role in Down syndrome.

    Rodent roundup.

    The newly analyzed mouse genome and newly assembled rat sequence are helping researchers make sense of humans' genetic code.


    Yaspo and her colleagues took a similar tack in studying genes from chromosome 21, the shortest human chromosome, but they concentrated on brain development shortly after birth. They found that about half the mouse genes were active throughout the brain and half were expressed in specific regions.

    Such studies can help researchers determine the functions of normal genes. But the data can also be used to home in on genes that cause disease, says Bucan. Once they find a few candidates, knowing where each candidate is active “will make getting the [right] gene easier,” she points out.

    Another study took a broad view of human chromosome 21, comparing its DNA with that of the mouse genome. Stylianos Antonarakis and Emmanouil Dermitzakis of the University of Geneva, Switzerland, and their colleagues report that only about half of the more than 3000 conserved sequences turned out to be genes. The fact that other stretches of DNA are maintained through evolution means they must be important, possibly as regulators of gene activity, and they could also play a role in human disorders, says Antonarakis. Indeed, notes Lindblad-Toh, “one of the highlights [of this week's reports] is that such a large fraction of the genome is conserved.”

    Other mouse-centered DNA studies should make the rodent even better suited for genetic studies, says Joseph Nadeau of Case Western Reserve University in Cleveland, Ohio. In one, Claire Wade of the Whitehead genome center compared the draft sequence, which comes from the B6 strain, with DNA from another mouse strain, cataloging the differences between individual bases. She and her colleagues found 80,000 of these single-nucleotide polymorphisms (SNPs), many of them clumped together in SNP-rich regions. These landmarks and their distribution should help researchers locate disease genes, Nadeau notes.

    Adding to the genetic wealth, a team from Japan's Institute of Physical and Chemical Research (RIKEN) has released almost 61,000 sequences, called full-length cDNAs, derived from mouse RNA. Many of these sequences represent proteins made by the mouse genome. Surprisingly, the total number of cDNAs proved to be larger than the number of genes, indicating that some genes are put together in different ways as they are expressed. “This is a very important point,” says project leader Yoshihide Hayashizaki.

    While the mouse genome sequencers celebrate their accomplishments, the U.S. team sequencing the rat genome also has cause to cheer. Last week, Richard Gibbs, director of the genome center at Baylor College of Medicine in Houston, Texas, and his colleagues announced that they had completed a high-quality draft of the rat genome. “Mouse and man are fairly far apart,” Artzt explains. Having two rodent genomes “will be particularly useful” in interpreting sequences from all three mammalian species—another gift waiting to be pulled out of its box.


    Mystery Academy Holds First Powwow in Private

    Philipp Weis

    BRUSSELS—The headquarters of science academies often are ornate structures and their annual meetings grand affairs involving hundreds of luminaries. Not so the European Academy of Sciences (EAS). Its humble address here is a mailbox on the fifth floor of a drab office building. And its first annual meeting, held on 29 November in a room borrowed from the European Commission (EC), drew 14 people who met behind closed doors.

    The newest academy on the block is not off to an auspicious start. On 31 October, the U.K.'s Royal Society issued a statement warning scientists “to exercise due caution before making financial commitments” to EAS, which began earlier this year as a dues-paying organization but now bestows memberships free of charge. The scientists behind the organization admit that they stumbled out of the starting gate. “We made a few mistakes … that obviously led to misunderstandings,” says Philip Carrion, scientific adviser to EAS.

    The academy is an attempt to transform a pilot project on technology transfer into a broader forum on research commercialization. Earlier this year, Carrion, a materials scientist at the University of Udine, Italy, and 30 colleagues completed an EC-sponsored project in Krakow, Poland, that helped obtain loans for small businesses by providing them with advanced technology from Western Europe. “We now want to expand” that model through EAS, Carrion says. A society of scholars, he explains, is “very important to assure the industrial partners of our academy that the technology is really state-of-the-art.”

    A few individuals tapped for membership felt the honor warranted telling the world. For instance, the Cedars-Sinai Medical Center in Los Angeles put out a press release in August announcing neurosurgeon Keith Black's selection to the body. Wolfgang Sigmund, a materials scientist at the University of Florida, Gainesville, says that EAS “described my field of research more accurately than I ever did,” and he presumed the organization was legitimate partly because he was the only person at his university offered membership. Others are less charitable. Guenter Albrecht-Buehler, a biologist at Northwestern University School of Medicine in Chicago, says that although he was delighted when the academy nominated him in September, he now has misgivings. He told Science that as a precaution he has canceled the credit card he used to make the dues payment of $115.

    Stay tuned.

    As Science went to press, EAS had not yet posted its full member roll.


    The academy has sought to legitimize itself by applying for membership in the All European Academy, an umbrella organization for national academies from 38 European countries. However, a spokesperson for the All European Academy says that EAS's application was rejected because it is not a national organization. EAS also invited the Royal Society to send an observer to its annual meeting. The society did so; that person was unavailable for an interview before Science went to press.

    EAS officials barred a reporter from Science from attending the meeting. According to participants interviewed after they emerged from the 3-hour event, discussions centered on fundamental issues such as the academy's structure and funding. Apparently, the body has decided to begin publishing annals early next year and intends to hold a nanotechnology meeting in Paris in May 2003.

    A total of four of the academy's claimed 250 members attended the gathering, one of whom was Carrion. A second, computer scientist Boris Verkhovsky of the New Jersey Institute of Technology in Newark, told Science that he's convinced that “this academy will succeed.” Another member present and accounted for, geophysicist Enders Robinson of Columbia University in New York City, says that “there is no such organization in Europe with a similar approach.” Few would debate that point.


    A Smashing Source of Early Martian Water?

    Richard A. Kerr

    The Red Planet is bitterly cold, bone dry—at least at the surface—and nearly airless. And it seems to have been that inhospitable for at least several billion years. But planetary scientists are deeply divided over whether the very earliest Mars was more conducive to the origin of life. “Geologists say there was all this water, overflowing lakes, and massive erosion,” says planetary geologist Michael Carr of the U.S. Geological Survey in Menlo Park, California. But climate modelers can't explain why Mars would have been any warmer in its earliest days than it is today. Now, on page 1977, a group of physics-inclined planetary scientists proposes a solution to that conundrum: giant asteroid impacts.

    The great impacts that pockmarked early Mars and their ejecta would have thawed the frozen planet's subsurface water and led to “episodes of scalding rains followed by flash floods,” the scientists write. The group has a “perfectly plausible way of making rain on Mars,” says planetary physicist Kevin Zahnle of NASA's Ames Research Center in Moffett Field, California, a co-author of the paper. “The details have to be worked out, but fundamentally, this is the answer.” That stance gets some support from planetary scientists, but most researchers particularly intimate with the geologic record of earliest Mars disagree. “It's a valiant effort,” says Carr, but “I'm skeptical that it really does much to explain what we're seeing.”

    This early Mars mystery cropped up when, 30 years ago, planetary geologists saw their first signs that water had run free on the surface of early Mars. Branching channels on the ancient, heavily cratered highlands looked for all the world like river valleys. Large crater rims were worn down as if swept by persistent rains. And, more recently, geologists have recognized that great craters appear to have been filled to overflowing by rain on early Mars. “The amount of erosion is huge,” says Carr. On the other hand, climate modelers can't stuff enough greenhouse gases into a martian atmosphere to compensate for the chilly faintness of the sun 4 billion years ago.

    In making the case for impacts, atmospheric scientists Teresa Segura and Owen Toon of the University of Colorado, Boulder, along with Anthony Colaprete and Zahnle of Ames, point to the 25 largest martian impact craters. Ranging from 600 to 4000 kilometers in diameter, the craters formed after the planet's formation 4.5 billion years ago and before about 3.8 billion years ago. For comparison, the Chicxulub impact that wiped out the dinosaurs formed a 170-kilometer crater. Even the small end of the range of martian impactors considered in the new paper—a 100-kilometer object—delivered about 4 × 10²⁶ joules of energy to the planet, the equivalent of 100 to 1000 dinosaur killers, says Zahnle.

    Long dry.

    Four billion years ago on a wet Mars, water drained from craters down long channels.


    With that much energy dumped on Mars, things got hot in a hurry. When one of those “small-end” impactors hit, it threw out enough hot rock to cover the planet to an average depth of 7 meters, the group calculates. When the rock vaporized by the impact eventually cooled and condensed, a 1600-K “rock rain” fell to cover the surface globally to 2 meters' depth. Within a few weeks, global surface temperatures reached 800 K. Now water everywhere was vaporized or melted: the water in the asteroid, the crustal water where the asteroid hit, the water in the polar caps, and, most novel in this study, the water frozen just beneath the preimpact surface (Science, 14 June, p. 1962) that was heated by all that hot ejecta. Parts of the subsurface would have stayed above freezing for at least a year in the case of a 100-kilometer impactor, the group calculates, for more than a century after a 250-kilometer impactor hit, and for millennia for the largest impacts.

    All that thawed water would have made for plenty of erosion, the group says. A 100-kilometer impactor would have triggered 2 meters of rain over the entire planet within a few years and melted 3 meters' worth of ice. A single 250-kilometer impactor might have freed up 50 meters of water. One such impact would be enough, they calculate, to erode the mysterious valley networks. And at least 10 such objects hit Mars before the planets swept up the last of the debris from the solar system's formation.

    Planetary scientists have a range of reactions to this picture of a cold, dry early Mars punctuated by episodes of sizzling rain every 100 million years or so. “The idea is well founded,” says planetary scientist Stephen Clifford of the Lunar and Planetary Institute in Houston, Texas, who specializes in martian hydrology. “Ultimately, the geologists are going to have to come around.”

    Some Mars geologists don't think they will have to do so. Ross Irwin of the Smithsonian Institution's National Air and Space Museum in Washington, D.C., allows that impacts might have contributed to some of the earliest erosion, when the biggest asteroids were hitting, but later valley formation and the flooding of large craters seem to have required far more water than impacts could have supplied. Carr agrees that “the valleys are only part of the problem. The amount of erosion is enormous. You're talking kilometers of water,” not the 50 meters from an impact.

    To win any more supporters, Segura and her colleagues need to develop a better idea of the cumulative effect of both large and smaller craters over time, researchers say. But James Kasting of Pennsylvania State University, University Park, who has been struggling with the early Mars climate problem for 20 years, is not hopeful: “I'm pessimistic that any of us is going to solve it until we get some geologists up there.”


    ... And an Icy Patch at Mars's South Pole

    Dana Mackenzie*
    *Dana Mackenzie is a writer in Santa Cruz, California.

    Space scientists have discovered a new hiding place for water on Mars. In a paper published online by Science this week, three geologists report that they have found a kilometer-wide patch of water ice at the edge of the southern polar cap. The orbiting Mars Odyssey spacecraft first glimpsed the water ice in February, exposed when an upper layer of carbon dioxide ice (“dry ice”) evaporated in the −20°C heat of the martian summer. This discovery might literally be the tip of an iceberg: Some Mars scientists believe that the entire southern polar cap could be water ice, covered by a thin layer of dry ice.

    When Mars Odyssey began photographing the martian surface, says Phillip Christensen of Arizona State University, Tempe, “this region was one of the first pictures we took.” What caught scientists' attention was a relatively flat piece of land that was colder than the adjacent exposed soil. More-detailed measurements made with the spacecraft's infrared camera revealed that the tundralike plain absorbed more heat than did the surrounding terrain during the day and radiated more heat at night. That high “thermal inertia” strongly suggested that the surface was pure water ice.

    Cold shoulder.

    Region of water ice (arrow) flanking a vast sheet of frozen CO2, photographed by the Mars Global Surveyor, may be typical of the fringes of Mars's southern ice cap.


    Christensen's team, which included Timothy Titus and Hugh Kieffer of the U.S. Geological Survey (USGS) in Flagstaff, Arizona, also examined old visible-light photographs of the area taken by NASA's Viking orbiter mission in the 1970s. Sure enough, the photos showed sharp delineations between bright dry ice, medium-bright water ice, and dark rock, in exactly the same places where their infrared camera had seen them. The icy plain, the researchers concluded, is a regular feature that has reappeared every martian summer for at least 25 years. Viking saw many similar medium-brightness patches around the edges of the southern ice cap, so seasonal plains of water ice might be fairly common. This suggests that the permanent layer of carbon dioxide ice might be relatively thin—perhaps only meters thick.

    Other researchers say the find is like a Christmas present you have asked for: not a big surprise but good news nevertheless. “It's important to me because I predicted it,” says David Paige, a planetary scientist at the University of California, Los Angeles. Several years ago, he and two other scientists studying data from the 1971 Mariner 9 mission found that the spectrum of light reflected from the south pole did not match that of dry ice alone. They speculated that the other ingredient was water ice, but their instruments could not pinpoint its location.

    If the ice deposits are indeed accessible from the surface, they might someday provide a record of Mars's climatic history, just as glaciers do on Earth. “In many ways, Mars should be a simpler system than Earth for understanding climate change,” says Ken Herkenhoff of USGS. “There are no oceans on Mars, and no biological community that we know of.” Thus, Mars could serve as a laboratory for understanding the effects of orbital mechanics and of the sun's variations on climate.

    But that understanding will come only if NASA sends a mission to the polar regions of Mars, to replace the Polar Lander that failed to reach its destination in 1999. The inaugural Mars Scout mission, to be launched in 2007, might provide an opportunity. Two of the 10 finalists for this mission, including Paige's “Artemis” proposal, involve polar landings. (The winning proposal was expected to be announced on 5 December.) “I see a groundswell of interest in going to the poles,” Paige says. “The poles are where a lot of the action on Mars is.”


    U.K. Researchers Hope for Clarity in Tissue Use

    John Bohannon*
    *John Bohannon is a writer in Lyon, France.

    CAMBRIDGE, U.K.—British medical researchers are facing major changes in the rules governing the scientific use of human tissues, the result of a series of scandals in the 1990s. Earlier this year, the U.K. government said it was considering drafting a law to prevent misuse of tissues and asked for public input; the deadline for submissions was 14 October. Researchers say that bioethics committees are not waiting for new legislation, however: They have already tightened access to human tissues drastically, causing some projects to grind to a halt.

    The use of human tissues for research has been a charged issue in the United Kingdom since 1999, when it was revealed that Alder Hey Children's Hospital in Liverpool, as well as hospitals in Bristol and Birmingham, had been removing organs from dead babies for decades without parents' consent. In some cases, the hospitals had allegedly given organs to pharmaceutical research companies in return for financial donations. “What happened at Alder Hey was inexcusable,” says Carlos Caldas, a genetic epidemiologist at the University of Cambridge, but he argues that overly cautious regulatory committees have overreacted in a way that is severely hampering legitimate medical research.

    Consent required.

    Slices of tumors are stored in wax before disposal.


    The projects hardest hit, according to Caldas, are those that rely on archives of human tissue samples. Such collections consist of tumors and other tissue removed during surgery that are either frozen or embedded in wax. Caldas coordinates an international consortium that hunts for genetic factors in gastric cancer by comparing archived tissue samples from patients going back 20 years. Access to such samples before 1999 was straightforward, but it has now become “very difficult,” he says.

    Regional bioethics committees now require that consent be obtained from the original owner, or the next of kin if the owner has died, for any new use of an archived tissue. Such requirements are long overdue, says immunologist Herbert Sewell of the University of Nottingham, a member of the Nuffield Council on Bioethics. Although he acknowledges that contacting donors or next of kin for decades-old samples can be difficult, he maintains that informed consent for all intended uses of donated tissue is a fundamental requirement of ethical research.

    But many researchers are not happy. Because of the new requirements, “cancer research has been paralyzed,” says Kathy Pritchard-Jones, an oncologist at the Institute of Cancer Research in Sutton. Pritchard-Jones says several of her research projects are on hold because of difficulties in accessing archived tissues. “Legal clarity is needed,” she says, because tissue archives are now being treated as equivalent to whole organs from the recently dead.

    Caldas is leery of new legislation, however. “Legislating this kind of research could put it in a straitjacket,” he says. Caldas would prefer a more flexible set of guidelines that bioethics committees can use to approve research on tissue archives without fear of scandal. In spite of the hardships that have befallen researchers, Caldas at least hopes that the public debate will help dispel some public misperceptions. “At the end of this,” he notes, “[we hope] people will see that researchers are not Frankensteins. We're trying to improve health.”


    Common Control for Cancer, Stem Cells

    Dennis Normile

    KOBE, JAPAN—At the very beginning of life, stem cells can develop into all the different tissues of the body; in contrast, cancer cells often end life. Despite these obvious differences, researchers have suspected that similar mechanisms might be at work in both cancer and stem cells. For example, both can multiply indefinitely. Embryonic stem (ES) cells transplanted into mice sometimes develop into tumors. And stem cell lines have been derived from a cancer known as teratocarcinoma.

    Now, two researchers have found a new gene that is apparently involved in regulating the proliferation of both stem cells and at least some types of cancer cells. “This could show that stem cell biology and oncology interact,” says Ronald McKay, a molecular biologist involved in the experiments at the National Institute of Neurological Disorders and Stroke, part of the U.S. National Institutes of Health (NIH) in Bethesda, Maryland.

    Other researchers find the link to cancer cells intriguing, but, as Shin-Ichi Nishikawa, a molecular geneticist at Kyoto University in Japan, says, “we really need more data before saying anything conclusive about the role of this protein in cancer.” McKay described the finding briefly on 20 November during a symposium here on Stem Cells and Organogenesis. The full report, by McKay and NIH colleague Robert Tsai, appeared in the 1 December issue of Genes & Development.

    Stem cells are the focus of intense research interest because of their ability both to self-renew, or proliferate, and to differentiate into a variety of tissues, offering tantalizing possibilities of growing replacement organs in vitro, among other possible therapeutic applications.

    Missing multiplier.

    Cells with a minimum of nucleostemin protein (arrows) show no sign of proliferating (green).


    McKay and Tsai set out looking for genes with critical roles in the self-renewal mechanism. Working with various rat stem cell lines in culture, they found a new gene expressed in ES cells, in central nervous system stem cells, and in primitive bone marrow cells. In all cases, the protein encoded by the new gene was abundantly expressed while the cells were proliferating in an early, multipotential state, but it abruptly and almost entirely disappeared at the start of differentiation. The protein was not found in the differentiated cells of adult tissue, suggesting a role in maintaining stem cell self-renewal. The researchers dubbed the new gene nucleostemin because its protein product appears to be active almost exclusively within the nucleus of the stem cells.

    That locus of activity led to “an inspired guess,” says McKay. The cell nucleus is also the site of activity of several genes whose proteins are known to regulate the activity of p53, a gene with a well-studied role in suppressing tumors. Mutations in p53 have been implicated in numerous types of cancer. Following the hunch that nucleostemin might be involved, McKay and Tsai looked for and found a human homolog active in several human cancer cell lines. They further determined that its protein product binds to the p53 protein, although just how the two proteins interact is unclear.

    To further define the new gene's role, McKay and Tsai interfered with its activity in both rat stem cells and human cancer cells. They shut it off through a technique known as gene silencing and overexpressed it by adding protein to the cells. Both too little and too much of the nucleostemin protein hindered cell proliferation. “There seems to be a critical level involved,” McKay says. At that level, they suggest, nucleostemin helps regulate the proliferation of both stem cells and some types of cancer cells, although the precise mechanism is not yet known.

    “The method appears to be very sound, and the study suggests the [gene has] some relation to promoting self-renewal,” says Nishikawa. But he warns against attributing too big a role to a single gene. He explains that among different types of stem cells, the evidence indicates that “there may be very diverse ways [of regulating] self-renewal and differentiation.” And, he adds, regulation of proliferation in cancer cells is likely to be just as complex.

    McKay readily agrees, but he predicts that further studies will find more links between these cells at the opposite ends of life.


    Sensing the Hidden Heat of the Universe

    1. Robert Irion

    Astronomers prepare to launch the long-awaited Space Infrared Telescope Facility, the last of NASA's four Great Observatories

    SUNNYVALE, CALIFORNIA—For a telescope known as a “Great Observatory,” the Space Infrared Telescope Facility (SIRTF) seems rather small. It stands 4 meters tall in a cavernous clean room here at Lockheed Martin Space Systems, dwarfed by scaffolds that 2 decades ago embraced the Hubble Space Telescope in this same room. Many small colleges own telescopes with mirrors that rival the 0.85-meter reflector inside the Thermos-like tube of SIRTF. Perched on its testing platform, SIRTF gleams but doesn't overwhelm.

    Yet this compact machine carries the hopes of dozens of astronomers who have devoted their careers to making it fly. Their dreams of peering at faint traces of heat from the cosmos will become real on 15 April, when SIRTF is scheduled for launch at the Kennedy Space Center in Florida. The launch had been set for 29 January, but on the eve of Thanksgiving, NASA postponed it to make way for a military satellite damaged in a launch pad crane accident in October. However, the latest delay doesn't faze a team that has weathered 20 years of development, two severe redesigns, and two cancellations.

    “People almost forgot about us,” says Giovanni Fazio of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, who leads one of SIRTF's three instrument teams. “Now, I think we'll have to convince them that it's really going to go off.”

    Fazio needn't fret: Astronomers working within and beyond the confines of infrared wavelengths say SIRTF has their attention. The observatory will sense the thermal glows of normal stars in remote galaxies, cocoons of star birth in nearby galaxies and in our Milky Way, and planet-forming whorls of gas and dust around young stars. Infrared radiation pierces dust, opening vistas that are frustratingly shrouded in Hubble's iconic images. Other satellites have examined the infrared heavens—notably the European Space Agency's Infrared Space Observatory (ISO), which set the standard in the field from 1995 to 1998. But with its modern detectors scanning the heavens from a frigid perch millions of kilometers from Earth, SIRTF will trump ISO's sensitivity by factors of 10 to 1000.

    “Using infrared light to look inside places where stars and planets are forming and at distant galaxies will be two of the dominant prongs of astrophysics for the next 50 years,” says optical astronomer Alan Dressler of the Carnegie Observatories in Pasadena, California. “A lot of that will involve cold telescopes, and here's the first one. It's a triumph that it finally got built.”

    A descoped scope

    SIRTF's awkward acronym dates from the early 1980s, when the “S” stood for Shuttle, not Space. The facility ('80s NASA-speak for a reusable payload) was supposed to hitch rides on the space shuttle to soar above the infrared-absorbing water vapor in Earth's atmosphere. In 1985, however, a test flight of a small cryogenically cooled telescope showed that the shuttle was a lousy setting, because heat from the spacecraft dazzled the instrument's sensors.

    The goal shifted to a grand, free-flying orbiter to fill out NASA's suite of powerful eyes at different wavelengths: Hubble, the Compton Gamma-Ray Observatory, and the Chandra X-ray Observatory. A national panel of astronomers led by John Bahcall of Princeton University dubbed the 1990s the “Decade of the Infrared” and declared SIRTF its top priority. Planners envisioned a $2 billion titan lugging 3800 liters of superfluid helium, aptly launched aboard a massive Titan rocket.

    Just chill.

    SIRTF's silver and black coatings will reflect solar heat and radiate the telescope's heat, respectively. (Solar panels not shown.)


    Then, NASA's budget roof caved in, forcing not just one but two total redesigns—or “descopings”—of SIRTF by 1995. The telescope was axed for about a year and wiped from the books a second time by a clerical error in Congress. The saga became painful. “There was a time when SIRTF was not mentioned in polite company,” says project scientist Michael Werner of NASA's Jet Propulsion Laboratory in Pasadena. “Only the imprimatur of the Bahcall report allowed us to rise from the ashes.”

    Loyal backing from colleagues helped, but the savior was a radically overhauled concept. “In the old days, you'd build a big infrared telescope and an even bigger Thermos to drop it into,” says Michael Bicay, director of SIRTF's Legacy Science Program. In the new design, a tiny vat of helium—just 360 liters—insulates only the observing instruments, not the entire assembly. The mirror and telescope tube, shielded from the sun, will cool to about 35 kelvin in space. Then, boiled-off helium vapor will course along the tube like dry-ice fumes cascading down the sides of a cauldron in a high school play, chilling the optics to 5 kelvin.

    The tight constraints forced everyone on SIRTF to adapt. “Our original instrument designs were not quite as big as the Hubble phone booths, but they were as big as phone booths for dwarfs,” recalls George Rieke of the University of Arizona, Tucson, another instrument team leader. Squeezing into oversize shoeboxes meant that some versatility had to go, Rieke says. Still, the delays gave the teams time to create sensitive new detectors with far more pixels than previous infrared missions had.

    This reimagined mission costs $720 million, including launch on an economical Delta rocket. A gentle nudge will propel SIRTF to a unique outpost: trailing far behind Earth in the same orbit and drifting about 15 million kilometers farther away each year, like a dog on a running track trailing its owner on a progressively longer leash. “We don't really care where it goes, as long as we know where it is,” Bicay jokes. Toward the end of the mission, NASA will need the largest antennas in its Deep Space Network to download data from the distant spacecraft once or twice daily.

    Small but sensitive.

    SIRTF's instruments (top) and mirror (bottom).


    With Earth's hulking globe out of the way, SIRTF will observe nonstop until the coolant gurgles away. “In principle, the mission will last at least 5 years with careful management of the helium, which is our lifeblood,” says Werner. The team won't know how quickly the telescope consumes its helium until 2 or 3 months after launch.

    Cool science

    Several factors dictate the research within reach of SIRTF. On the down side, its small mirror won't collect nearly as much infrared light as do ground-based telescopes, which can peer from mountaintops or the arid South Pole through infrared “windows” in the atmosphere. However, SIRTF's new detectors pick up the barest hints of heat, and its deep freeze means that virtually no stray warmth will muddle the weak cosmic signals. “It adds up to an enormous gain in sensitivity,” says B. Thomas Soifer, director of the SIRTF Science Center at the California Institute of Technology (Caltech) in Pasadena. “We can't reach back to where galaxies begin to form, but we can reach back to their early childhood.”

    Indeed, galaxies in many guises are high on SIRTF's list of targets. Today, astronomers know distant galaxies only by the light of their hottest, youngest stars. These objects blaze in ultraviolet light, which gets stretched by the expansion of the universe into visible and near-infrared light by the time it reaches us. Such “redshifting” also lengthens the visible light of distant, ordinary stars such as our sun—which compose the bulk of stellar mass in galaxies—to wavelengths that are right in SIRTF's range. This inexorable bit of cosmic physics makes SIRTF the first tool with the power to see older, long-lived stars that dominated the universe's galaxies as they matured.
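    The stretching described above follows a simple relation: the observed wavelength equals the rest-frame wavelength multiplied by (1 + z), where z is the object's redshift. A minimal sketch of the arithmetic (the 500-nanometer peak and z = 2 are illustrative values, not figures from the article):

```python
# Back-of-the-envelope redshift arithmetic: cosmological expansion
# stretches a rest-frame wavelength by a factor of (1 + z).

def observed_wavelength_nm(rest_nm: float, z: float) -> float:
    """Return the observed wavelength for light emitted at rest_nm
    by a source at redshift z."""
    return rest_nm * (1.0 + z)

# A sunlike star emits most strongly near ~500 nm in visible light.
# At an illustrative redshift of z = 2, that peak arrives at 1500 nm,
# i.e., 1.5 micrometers -- well into the infrared.
peak = observed_wavelength_nm(500.0, 2.0)  # 1500.0 nm
```

This is why the ordinary, long-lived stars of distant galaxies, invisible to optical telescopes, fall naturally into an infrared observatory's range.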

    SIRTF's Legacy Science Program will capitalize on that strength by devoting chunks of observing time to major galaxy surveys. “We want a census of the stellar-mass assembly history of galaxies,” says astronomer Mark Dickinson of the Space Telescope Science Institute in Baltimore, Maryland. Such a census will include a better grasp of star formation in distant galaxies, much of which is veiled by dust. “That energy has to come out somewhere, and SIRTF will see it,” Dickinson says.

    One survey, which Dickinson directs, overlaps the most penetrating views of the universe ever taken. The spark was the now-famous “Hubble Deep Field,” in which the Hubble telescope stared at a minuscule patch of sky for 10 days in 1995 and exposed thousands of galaxies. “That became a magnet for other observations. Everyone pointed everything they had at that same patch,” says Dickinson—including the Chandra X-ray Observatory. Now, SIRTF will follow suit with detailed images of a swath 30 times bigger, enveloping the original Deep Field, plus it will make a similar scan of the Southern Hemisphere skies. Hubble is covering the same territory with its new Advanced Camera for Surveys. With all telescopes working in concert, says Dickinson, astronomers will have more tools for deciphering the nature of each remote fleck.

    In another survey, astronomer Carol Lonsdale of Caltech and her colleagues will take an opposite tack: a sweeping roundup of millions of closer galaxies in 63 square degrees of the sky, the size of a decent constellation. SIRTF is ideal for identifying dusty star-forming galaxies that suffused space during the universe's adolescence, Lonsdale says. By casting such a wide net, the survey should expose how galaxies grew up in different environments. For example, astronomers assume that galaxies assembled more quickly in matter-rich clusters than in sparse voids, but they have very little evidence for this theory.

    Other SIRTF legacy programs are confined to just one galaxy, our Milky Way. The birth pangs and rowdy youths of stars and planets are the quarries. “We're specifically looking at sunlike stars as analogs to our own, to help us put our solar system in context,” says astronomer Michael R. Meyer of the University of Arizona.

    Star children.

    SIRTF will examine stellar birthplaces in Perseus (upper right) and Orion (lower center).


    Meyer's survey team will examine more than 300 relatively nearby stars, ranging in age from 3 million years to 3 billion years. Although SIRTF will not spy individual planets, infrared patterns from the stars will reveal whether they are swaddled in dusty or gas-rich disks. As planets condense within those disks, they should carve out gaps around the stars in the shapes of thick Hula-Hoops. SIRTF would detect infrared radiation from warm dust inside such a gap and from cooler dust on the far side, but none in between. That telltale temperature dropout will yield indirect but strong evidence about the architectures of planetary systems elsewhere, Meyer says.

    SIRTF also will examine how rough-and-tumble dustups in emerging planetary systems enrich those infrared signals. “Once objects grow to the size of Volkswagens and start crashing together, they generate their own secondary dust,” Meyer says. In our solar system, such debris arises mainly from the asteroid belt between Mars and Jupiter—the source of the twilight sky's “zodiacal light”—and from the extended Kuiper belt of cold, primitive objects beyond Neptune. SIRTF will spot faint wisps from such tenuous bands even around stars nearly as old as our sun, Meyer notes.

    Meanwhile, another team led by astronomer Neal Evans of the University of Texas, Austin, will use SIRTF to examine newborn stars still tucked in their dusty nests or just fledging from them. Out to a distance of about 1000 light-years from Earth, says Evans, “we'll see anything forming a star, as well as substellar objects,” such as brown dwarfs. “However the energy is getting out, we'll be able to study very small amounts of it.”

    Resolving the controversial origins of brown dwarfs (Science, 4 January, p. 64) is one aim of the survey, Evans says. The other key goal is to learn how long the gas and fluff girdling a new star remain available for planet formation. Planetary scientists disagree over the time scales and mechanisms needed for objects such as Jupiter to arise (Science, 29 November, p. 1698). By tracing the waxings and wanings of infrared patterns around a range of ordinary stars, SIRTF might steer that debate.

    Some of SIRTF's science will hit closer to home. With part of its guaranteed observing time, Rieke's instrument team at Arizona will scrutinize scores of objects in the Kuiper belt. Astronomers have a poor grasp of the sizes of these icy bodies, because they must make assumptions about how much sunlight the objects reflect. SIRTF will detect their heat emissions directly; when combined with optical images, the data will pin down the objects' sizes and hint at their compositions. Leaders of a proposed mission to Pluto and the Kuiper belt will fine-tune their observing strategies based on SIRTF's results.
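    The size-from-heat reasoning above can be made concrete: reflected sunlight scales with an object's albedo times its cross-section, while thermal emission scales with the absorbed fraction times the same cross-section, so measuring both fluxes separates the two unknowns. A toy sketch, with all geometric and distance factors folded into assumed constants (the function, `k_opt`, and `k_therm` are hypothetical illustrations, not part of the mission's actual pipeline):

```python
# Toy radiometric method: for a body of diameter D and albedo p,
#   reflected (optical) flux  ~ k_opt   * p       * D**2
#   thermal (infrared) flux   ~ k_therm * (1 - p) * D**2
# Two measured fluxes, two unknowns -- so p and D can both be recovered.

def solve_albedo_and_diameter(f_optical: float, f_thermal: float,
                              k_opt: float = 1.0, k_therm: float = 1.0):
    """Invert the two flux relations above for albedo p and diameter D.
    k_opt and k_therm bundle the (assumed known) geometry and distances."""
    # The flux ratio isolates p / (1 - p); D**2 cancels out.
    ratio = (f_optical / k_opt) / (f_thermal / k_therm)
    p = ratio / (1.0 + ratio)
    d_squared = f_optical / (k_opt * p)
    return p, d_squared ** 0.5
```

With optical brightness alone, a small bright object and a large dark one are indistinguishable; adding the infrared measurement breaks that degeneracy, which is the gain the Kuiper belt observations are after.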

    Observations from the telescope's major legacy programs will flow directly into a public archive after the SIRTF Science Center processes the data. In that way, astronomers may analyze images as they see fit, perhaps finding surprises that mission planners couldn't envision. This rapid public access—an aspect of SIRTF that all of the project's astronomers praise—will help scientists devise follow-up studies quickly enough to be conducted during the mission's second year in flight. After SIRTF's first 9 months in space, most of the observing time will be open to peer-reviewed proposals from all U.S. astronomers.

    High-flying partners

    SIRTF will have an active partner during its third year in 2005: the Stratospheric Observatory for Infrared Astronomy (SOFIA). Based at NASA's Ames Research Center in Mountain View, California, SOFIA consists of a Boeing 747 equipped to carry a 2.5-meter infrared telescope at high altitudes for 10 hours at a time. Chief scientist Eric Becklin of the University of California, Los Angeles, expects SOFIA to follow up on SIRTF's most compelling objects by using its large instruments to measure velocities, temperatures, compositions, and other properties in detail. The European Space Agency plans to launch its own advanced mission into deep space in 2007, a 3.5-meter telescope called Herschel. That instrument will focus on longer wavelengths of infrared light, which penetrate even deeper into dusty clouds.

    The SIRTF team realizes that its work sets the stage for two of NASA's highest-profile space endeavors for the next decade: Hubble's successor, called the James Webb Space Telescope, and the Terrestrial Planet Finder. Both observatories will rely heavily on near-infrared vision. In that sense, says Carnegie's Dressler, SIRTF is not so much the denouement of NASA's Great Observatory program as the most logical prologue to the next act—a progression that probably wasn't foreseen when the Infrared Astronomy Satellite first scanned the sky 2 decades ago.

    A final transformation awaits SIRTF: After launch, NASA will announce a new and presumably more graceful name, chosen from a national contest. If there's any justice, the name will honor someone as resilient as the oft-threatened telescope itself has been.


    Oldest New World Writing Suggests Olmec Innovation

    Inscribed characters that resemble those used in the later Maya script and calendar suggest that the Olmec were the first American scribes, boosting the theory that they heavily influenced later cultures

    In A.D. 738, a minor lord in what is now Guatemala pulled off a staggering victory: Cauac Sky of Quirigua captured a powerful king, known as “18 Rabbit,” beheaded him, and overran Copán, a major city in the Maya empire. Chronicled in inscriptions on numerous stone monuments at Quirigua, that upset is just one saga in a sweeping history recorded by the Maya in stone carvings for 7 centuries. Yet despite archaeologists' phenomenal success in deciphering Maya hieroglyphs, a fundamental aspect of this sophisticated civilization and its copious writings has always been a puzzle: their origins.

    Scarce finds from pre-Maya times have left archaeologists arguing whether key features of Maya civilization, such as writing and the sacred calendar, stemmed from a nearby culture called the Olmec or whether several early cultures contributed. Now, on page 1984, a team of archaeologists describes two artifacts that preserve signs of script: fragments of stone plaques and a cylindrical seal that bear symbols known as glyphs. Dated to about 650 B.C., these are potentially the oldest evidence of writing in the Americas. Mary Pohl of Florida State University, Tallahassee, and her co-authors argue that one fragment names a king and a date, indicating that, as with the Maya, early writing was intricately involved with both royalty and the calendar.

    Think big.

    The Olmec were the first in Central America to build large cities. Pyramids, such as this one at La Venta, and monuments demonstrated their rulers' power.


    For many archaeologists, the discovery, together with findings from new digs in even older Olmec sites, reinforces the notion that the Olmec was a “mother culture” and the primary influence on the later Maya and Aztec. “When the Olmec area was flourishing, there was nothing else like it,” says archaeologist Michael Coe of Yale University. “This is the place where everything was innovated.” The new glyphs add solid evidence to this long-standing theory, Olmec boosters say. “This is the oldest writing,” says archaeologist Richard Diehl of the University of Alabama, Tuscaloosa. “It's the mother and father of all later Mesoamerican writing systems.”

    But other experts counter that the new artifacts are too fragmentary to resolve the enduring question of Olmec influence, or even how writing developed in Mesoamerica. Some question the fragments' age and whether they meet strict definitions of writing. Those who favor a model of “sister cultures,” in which the Olmec were just one of a network of interacting cultures that all contributed to the development of key innovations, remain unswayed. “We strongly reject Olmec ‘influence’ on the ancient cultures of central Mexico,” says archaeologist David Grove of the University of Florida, Gainesville.

    The Olmec world

    Several clues have long suggested that the Olmec were the first to develop crucial aspects of Maya culture, including writing. “The mother-culture thesis is that the glories of the Maya are directly derivative of cultural attainments of the earlier Olmec,” explains John Clark of Brigham Young University in Provo, Utah. According to this view, the large-scale Olmec architecture and monumental sculpture suggest that these people were the first in Mesoamerica to concentrate broad political power in the hands of a few. Linguistically, other Mesoamerican regions have apparently borrowed words related to writing and the sacred calendar from the precursor to the language now spoken in the Isthmus of Tehuantepec, the Olmec heartland.

    Important date.

    The dotted symbol on this fist-sized seal, “spoken” by a bird, might be the oldest evidence for the Mesoamerican calendar.


    But hard evidence of Olmec scribes is scant. In the major Olmec city of La Venta in the Gulf Coast region of what is now southern Mexico, researchers have uncovered two monuments containing a linear sequence of glyphs. But their age is unclear. They could be anywhere from 600 B.C. to 400 B.C., so they don't settle the question of when and where writing began.

    Then in 1997, Pohl and her co-authors—Kevin Pope of Geo Eco Arc Research in Aquasco, Maryland, and Christopher von Nagy of Tulane University in New Orleans, Louisiana—began to dig at a smaller site known as San Andrés, 5 kilometers from La Venta. They uncovered a stratified deposit of floors, hearths, and trash heaps (Science, 18 May 2001, p. 1370). The layers of refuse contained many sherds of pottery, plus some real showstoppers: a fist-sized cylinder seal and engraved chips of greenstone a bit smaller than a thumbnail. Radiocarbon dating allowed the researchers to come up with a date, 650 B.C., for the engraved objects—an “amazing” achievement, says Clark.

    These artifacts have features that the researchers interpret as symbols indicating words. For example, the greenstone fragments bear two inscribed oval glyphs that might be part of a columnar text, as are later inscriptions from the region. Inscribed on the cylinder is a single glyph and a bird's beak spewing two diverging lines. Pohl says that in later inscriptions from outside the Olmec heartland, human mouths also emit lines, in symbols called speech scrolls. At the end of the bird's scroll are U-shapes, a common ingredient in later Mesoamerican writing.

    The speech scroll indicates that the inscriptions on the cylinder are more than just representational art, say Pohl and others. “The implication is that they are representing words or sounds of a language,” says Martha Macri, a linguistic anthropologist at the University of California, Davis. The symbols are also not iconographic—they don't look like what they are supposed to mean—so they must be learned, Macri points out. “That's one of the hallmarks of writing,” she says.

    The glyph on the cylinder is a U surrounded by an oval and next to three small circles. It resembles a Maya symbol with three dots called “3 Ajaw,” a date in the Mesoamerican calendar. “I think it's definitely the day sign ‘ajaw,’” says Coe, although others aren't so sure. If correct, this day sign, coupled with a number, would be the first archaeological hint of the all-important 260-day Maya sacred calendar. And in Maya writing, ajaw means king as well as a date. Thus, Pohl reads the Olmec cylinder seal as the name “King 3 Ajaw,” which makes sense because Mesoamericans were often named for the day of their birth. The San Andrés seal could have been used to print a royal message, the team says.

    San Andrés treasure.

    The oval glyphs on these thumbnail-sized plaques, arranged in a column on the right-hand piece, might be fragments of text.


    But some researchers question the age and meaning of the fragments. Radiocarbon dates for this period always have wide margins of error; thus, by radiocarbon dating alone, the San Andrés glyphs may range from 792 B.C. to 409 B.C. Von Nagy, the team's ceramist, says that associated pottery fragments helped narrow the range to between 700 B.C. and 600 B.C. And although Pohl argues that a glyph that was “spoken” is evidence of writing, linguists and epigraphers tend to have a stricter definition. They want to see columns or rows of glyphs with word order and syntax—far more than these fragments can reveal. “A few isolated emblems … fall well below the standard for first writing,” says epigrapher Stephen Houston of Brigham Young University. “Show me a real text with sequent elements, and I'll be more convinced.”

    All the same, many researchers agree that even if these fragments aren't full-fledged writing, they document steps toward it. Pohl suggests, for example, that formatted rows and columns came later. “It's at the cusp between iconography and writing,” agrees archaeologist Chris Poole of the University of Kentucky in Lexington.

    The mother culture?

    To Poole, all this fits with other emerging evidence that the Olmec played “a special role … in the development and dissemination of many cultural traits important for Mesoamerica.” New data are making it clear that large populations and political structures developed first in the Olmec, he and others say.

    For example, over the past decade, Ann Cyphers of the National Autonomous University of Mexico, Mexico City, and her colleagues have discovered that the early Olmec center of San Lorenzo in Veracruz, active from 1200 B.C. to 900 B.C., is much more sophisticated than previously thought. The main site, replete with monumental sculptures of humans and felines, also featured an aqueduct that delivered spring water, plus a 100-square-meter “palace” with basalt drains. This might have been a seat of government, says archaeologist Kent Reilly of Southwest Texas State University in San Marcos. A survey of sites in the surrounding 800 square kilometers suggests that Olmec power extended far beyond the capital. For example, a site strategically located near the confluence of two rivers has thrones that are smaller than those at San Lorenzo but decorated in a similar fashion. “The way the settlement works on a regional level, we really think it was an incipient state,” says Cyphers.

    Despite this apparent concentration of political power, opinions remain divided on the Olmec share of influence. Scholars who favor the “sister culture” view note that the Olmec borrowed pottery styles from elsewhere, and that there was a bustling trade in goods such as obsidian tools and jade ornaments throughout ancient Mesoamerica. Grove of the University of Florida agrees that many cultures were active. And the new writing fragments are not enough to change the views of these scholars. Even if the glyphs are ancient, Grove, for one, isn't convinced that the La Venta area was necessarily the first site of writing. “I don't think it shows the Olmec invented it. It could have originated anywhere around that southern region,” he says, especially because the seal and plaques were easy to transport.

    There's also evidence that other Mesoamerican cultures were experimenting with writing, possibly at about the same time. Joyce Marcus and Kent Flannery of the University of Michigan, Ann Arbor, excavated a monument from San José Mogote, made by the Zapotec culture centered in Oaxaca, about 300 kilometers from La Venta. But although Marcus dates this monument at 600 B.C. to 500 B.C., Pohl and others are skeptical of that age. Archaeologist Javier Urcid of Brandeis University in Waltham, Massachusetts, says the Zapotec monument contains a day-name glyph, which he considers writing. He thinks the greenstone plaques from San Andrés are writing too, and that the two systems might have been invented independently. “Even if writing did originate in a single spot, I don't think we'll ever be able to find it,” he says.

    Whatever its origins, Mesoamerican writing flourished in the centuries after these early inscriptions began to appear. Even defeat did not stop the scribes, at least for long. Eleven years after the defeat of 18 Rabbit, for example, a new ruler of Copán tried to wipe away the bitter memory with a grand history of his city's warrior kings—the longest hieroglyphic text known from the New World.


    Mapping the Future

    With its topographic maps falling rapidly out of date, the U.S. Geological Survey has begun drafting an ambitious National Map—online

    It wouldn't have stopped the wildfires that swept the western United States last fall. And it certainly wouldn't have prevented the crash of United Airlines Flight 93 into a Pennsylvania field on 11 September 2001. But a digital topographic map of the United States would have given firefighters fresh information about new neighborhoods threatened by the blazes, and it would have helped relief workers identify the plane crash site more quickly. That's the idea behind the fledgling National Map program at the U.S. Geological Survey (USGS), which aims to put such up-to-date, high-quality topographic data at the fingertips of anyone who wants it.

    An online National Map promises to deliver continuously updated data—from elevations to rivers to geographic boundaries—for all the roughly 3000 counties across the United States. So far, eight pilot projects are under way; USGS officials hope to assemble the entire map with local partners over the next decade.

    But although industry insiders say that the map is a smart idea, collecting and maintaining all these data are daunting tasks. “The real challenge is doing this in any extensive way, in a country as large and diverse and changing as the U.S.,” says James Plasker, a former USGS executive and now executive director of the American Society for Photogrammetry and Remote Sensing in Bethesda, Maryland. “That's a huge organizational challenge,” says Plasker, who is not alone in wondering if USGS can pull it off.

    A new direction

    USGS is an old hand at topography. Soon after its founding in 1879, pioneering cartographers fanned across a rugged landscape, mapping the horizon on portable drawing boards called plane tables. Over the years, the survey incorporated the latest mapmaking tools, such as aerial photography.

    By the late 1980s, when USGS finished mapping the entire country at an estimated cost of $1.6 billion and 33 million work hours, the survey's 55,000 “topo maps” had become standard tools for hikers, urban planners, and relief workers such as the Red Cross, among other consumers. As USGS's priorities shifted toward scientific research, however, its mapping program languished. As a result, while towns went boom and bust and landmarks such as airports, buildings, and parks spread and dwindled, the topo maps lagged further and further behind the landscape they represented. Today, the maps are only sporadically updated, and some are 57 years old.

    Surveying the scene.

    Early USGS mappers used plane tables and alidades to get the lay of the land.


    “We just don't have the money to maintain a robust revision program,” says William Flynn, chief of USGS's Mapping Partnership Office in Austin, Texas. In the early 1990s, USGS tried a foray into electronic mapping, with a project to collect basic geographic information for governments and land developers to share, but the computer networks then available weren't up to the job. “They've been wandering in the desert for a decade or so, trying stuff that didn't quite work,” says Donald Cooke, founder of Geographic Data Technology Inc., a map database developer in Lebanon, New Hampshire. “Now, somehow Barb Ryan has pulled it together.”

    Ryan, a 28-year USGS veteran, joined the survey's geography staff in 1994 and took over its mapping program in 2000. Eager to breathe life into the effort, Ryan put together a National Map team, which began brainstorming ways to combine existing digital data on elevations and hydrography at USGS with the high-resolution maps emerging at local levels across the country. “Most of the local communities maintain current Geographic Information Systems (GIS) data sets at very high resolution for their own purposes, anyway,” says Flynn, head of Texas's National Map pilot. “Someone simply needs to develop the partnerships with those communities and take on the technical challenges of integrating the variety of data and formats into a standard set of products for distributing over the Web.”

    Ryan envisions the National Map as a seamless, continuously updated map with seven layers of geospatial data for every U.S. county. The data include topographic features from aerial photos and satellite images; surface elevations; locations of water bodies, transportation, major buildings, and public land boundaries; road names; and land-cover types, such as open water or high-density residential.

    To build and maintain this ambitious map, Ryan says, USGS will partner with federal, state, and local agencies that either already collect such data or would like to have them. “It's a national map, and that requires governments and agencies and private sectors to come to the table,” Ryan says. Those at the table will reap the benefits of a grand geographic template that could be overlaid with more specialized information, from floodplains to census demographics to disease and fire patterns. “This will be a platform of data that can be used creatively,” Ryan adds. Private companies, too, could contribute National Map data and then adapt them to sell more specialized maps, Cooke says.

    To kick-start the National Map, USGS has launched eight pilot projects, including a Landsat satellite project and local efforts in Delaware, California-Nevada, Missouri, Pennsylvania, Texas, Utah, and Washington-Idaho. Those pilots, some completing their first year this fall, hint at both the problems and promises that lie ahead.

    A mixed bag

    The sheer amount of labor will be the biggest hurdle, predicts Vicki Lukas, head of the Lake Tahoe pilot along the California-Nevada border. “We thought that in a well-known and highly studied environment like Lake Tahoe, there would be a wealth of core data,” Lukas says. “We were surprised to find a lack of any kind of regional data sets.”

    Change in direction.

    USGS hopes its classic paper topo maps (top) will pale beside the data richness of the online data sets it is assembling at pilot projects in Colorado (bottom) and seven other test sites.


    Instead, Lukas says, the team found a “mixed bag” of geography: excellent GIS measures in some agencies, poor ones in others, and, in one county, unusual computer software that required painstaking translation. That took more time and work than anticipated. “There's a lot of data there, but the integration is a huge effort,” says Lukas. William Schenck, a scientist with the Delaware Geological Survey and a member of the Delaware pilot team, agrees: “Even for our group, with considerable technical expertise, getting the framework data to an accurate scale is a lot of work.” Storing, communicating, and automatically updating nationwide digital geographic data pose daunting problems of their own.

    Ryan estimates that delivering the full-scale National Map in 10 years would require $150 million a year—roughly twice the current budget. She hopes that successful pilot projects will build support for funding a nationwide map.

    Regional digital maps from some pilots are already proving their usefulness. For instance, Tricia York, an environmental scientist with the Tahoe Regional Planning Agency, says that area's pilot is already improving efforts to track environmental changes in the Lake Tahoe watershed. “The geospatial layers in the map are far more accurate than what we had before, and that allows us to make more sound scientific decisions,” York says.

    National security also stands to benefit from the map-in-progress, Ryan says. Long before “homeland security” entered the public lexicon, USGS had teamed up with the National Imagery and Mapping Agency in Bethesda, Maryland, to start updating maps of 120 key urban areas vulnerable to terrorist attack. Since 11 September 2001, the survey has brought that effort under the umbrella of the National Map project and increased the number of cities to 133. “Whether we're talking about a natural disaster or a terrorist disaster, the need for information is the same,” Ryan notes. In addition to printing up-to-date paper maps, USGS will develop a digital database of the cities, including critical infrastructure such as power and water utilities. Although the basic geography will be open to the public, some data layers will likely be classified.

    The next milestone for the National Map is a report from a National Academy of Sciences committee, due out in the spring. Ryan hopes that the panel will endorse the work to date and affirm the need for additional funding. Meanwhile, other countries have started considering electronic national maps of their own. The United Kingdom, Canada, and Australia are publicly debating the need for similar digital geographic databases. Should their plans go forward, a few neighborhoods of the global village could soon become familiar territory.


    Nations Look for an Edge in Claiming Continental Shelves

    Ocean mappers are getting funding from governments eager to divide up 5% of the sea floor. Some of the claims have already sparked controversy

    Centuries ago, James Cook and other explorers set sail to lay claim to vast and valuable new territories. Modern ocean scientists are embarking on similar voyages of discovery, but this time the prize is under the surface: a swath of sea floor twice the size of Australia, with natural treasure worth trillions of dollars.

    Under a little-known provision of international law, the United States, Russia, and dozens of other nations are preparing to divvy up some 15 million square kilometers of continental shelf, the submarine slope between the shoreline and the ocean bottom. The area represents about 5% of the total sea floor and contains valuable energy, mineral, and biological resources. New United Nations (U.N.) rules could allow some countries to expand their marine territories by 50% or more. But before governments can plant their flags, researchers must pinpoint where continents end and the abyss begins—and settle technical disagreements over how best to draw the boundary.

    “This is research that has a potentially huge [economic and scientific] payoff,” says Larry Mayer, a geoscientist at the University of New Hampshire, Durham, who is helping the U.S. government prepare a possible claim to 750,000 square kilometers of the Atlantic, Pacific, and Arctic oceans. Already, he says, the global mapping push has produced surprising new views of the dark, cold world of the continental edge.

    Shelf claims, however, are likely to be dogged by scientific and legal questions. The governing international treaty includes vague and sometimes contradictory language, leaving geoscientists struggling to apply the rules. By staking a largely secret claim to nearly half of the Arctic sea floor, Russia has also raised questions about scientific transparency and how the U.N. will handle technical disputes. “Decisions will rest on interpretation, … and there are going to be disagreements,” predicts David Monahan, a marine geologist at the University of New Brunswick in Fredericton and head of Canada's ocean mapping program. Poor nations, meanwhile, worry about the cost of surveys needed to meet an initial 2009 deadline.

    Long shelf life

    The current surge of interest in the shelf grows out of the 1982 U.N. Convention on the Law of the Sea, which gave each of 151 coastal nations control of an exclusive economic zone. In many regions, the zone—which stretches 200 nautical miles offshore—easily covers the continental shelf. In places it doesn't, and the convention allows a nation to claim more territory if it can prove that the addition is a “natural prolongation” of its land mass. Originally, the first claims were supposed to be submitted by 2004, but it took the U.N. years to set up a review panel and write technical guidelines. When the new rules were approved in 1999, the U.N. reset the clock, with the first applications now due in 2009. Countries that don't apply lose their rights to the extra territory.

    Pushing the limits.

    Up to 60 nations could extend their current exclusive economic zones (blue) out to the limits of the continental shelf (red) as they begin to implement the 1982 Law of the Sea convention.


    Analysts estimate that up to 60 nations could benefit from the provision, known as Article 76, with some cashing in bigtime. New Zealand officials, for instance, hope to add 2 million square kilometers, a 50% rise, and India could gain 1 million square kilometers. The United States could add territory off its northeast and northwest coasts and in Arctic waters off Alaska, holding an estimated $1.3 trillion in resources.

    Before nailing down the extra territory, however, nations must meet complex criteria, which require them to document the depth and shape of the sea floor and the thickness of bottom sediments. To do that, some nations—including Australia, India, Brazil, and New Zealand—have already launched mapping programs, using sound produced by ship-towed seismic arrays to probe sea-floor geology and multibeam sonars to draw detailed three-dimensional maps of the ocean bottom.

    The eye-popping maps have produced plenty of finds. Off New Zealand, for instance, researchers have documented chains of towering volcanoes and spectacular canyons. Off North America, they've been surprised by the number of giant undersea landslides. “It's really shocking,” says Mayer. “We might have known there was a canyon here or a slump there, but now we're seeing them everywhere.”

    Using the new maps, countries might be able to justify extending their boundaries up to 350 nautical miles offshore if there is an obvious shelf. Under more complicated scenarios, however, governments will need to show that submerged ridges are part of their continental crust, or lay claim to piles of sediment that have slid off the continental margin.

    Such criteria can be a nightmare to apply, say Mayer and other researchers. One challenge is pinning down a key baseline called the “foot of the slope”—the point at which the descending continental slope touches down on the relatively flat ocean bottom. But the line isn't easy to define on jumbled, real-world sea floors. Even in relatively clear topography, opinions can vary. “Presented with the same data, two researchers can end up kilometers apart,” says Ron Macnab, a Canadian geophysicist who has spent years studying the issue.

    Similar uncertainty can blur measurements of sediment layers, home to oil and gas deposits. In general, the convention envisioned national borders petering out at the thin seaward edges of sediment deposits. But the sound pulses used to probe marine sediments can be notoriously difficult to interpret. If a layer of molten rock invades an older sedimentary complex, for instance, researchers might detect a kind of “false bottom” and underestimate sediment thickness. “The margin of error can be huge,” says Macnab. Further confusing matters, sediments don't necessarily become progressively thinner as they extend offshore. In fact, they can thicken farther from land due to currents or geological quirks.

    Ruckus over ridges

    The convention's most controversial provisions, however, deal with submarine ridges, say geoscientists. Governments can claim the sea floor along these underwater mountains if they are continental appendages. But Russia's attempt to claim one ridge has sparked a heated debate.

    Last December, Russia became the first nation to submit a sea-floor claim to the U.N.'s Commission on the Limits of the Continental Shelf. The panel of 21 experts reviews shelf claims, and although it doesn't have the authority to reject them, it can recommend changes. The idea, say international law experts, is for the panel to give claims a scientific “seal of approval” that is accepted by all nations.

    In its pathbreaking claim, Russia filed a plan that calls for adding nearly 1 million square kilometers to its Arctic territory. Although the claim's details are confidential—Russia was required to publicly release only a map showing basic boundaries—experts say it is based on the idea that the Alpha-Mendeleev Ridge System, a huge undersea formation that bisects the Arctic, is an extension of the nation's land mass. But the United States and some other nations disagree. “The submission has major flaws,” U.S. officials wrote in a letter to the U.N., citing “mounting geologic and geophysical evidence” that the ridge was instead “formed on oceanic crust.” The ridge's telltale magnetic signature, for instance, can't be found on Russian land, the letter noted, suggesting that the ridge isn't connected. If true, that would make the formation part of a common area open to all nations.

    An added dimension.

    A 3D view of the sea floor provides a clearer view of the foot of the continental slope (red line) than do older methods (orange line). Geoscientist Larry Mayer (bottom) is helping to develop these new sea-floor mapping tools.


    A seven-member review committee appointed by the commission apparently agreed with the critics and in July recommended that Russia reconsider its Arctic claim. “It pretty well crashed,” says one informed source, who requested anonymity. Russian officials could not be reached for comment, but reportedly they are studying the recommendation.

    Outside experts say that the commission did the right thing, but some question its reliance on secrecy. “The problem is that other nations can't learn from Russia's experience or independently evaluate the data upon which it based its claim,” says one geoscientist. “So we're somewhat in the dark about what the commission considers to be a credible submission.” Commission head Peter Croker, Ireland's petroleum resources chief, says he's sympathetic but that the commission's “hands are tied” by U.N. confidentiality rules. Still, he and others hope the body will find ways to release more information about future reviews and come up with a mechanism for resolving potential disputes.

    The uncertainty hasn't stopped Mayer and others from figuring out how much new data their governments will need to draw defensible maps. In an analysis published this summer, Mayer and two colleagues concluded that the United States already has most of what it needs to make a case for extensions off New England and Alaska, but not in the Arctic. Filling the gaps would take several years and cost up to $22 million, they estimate, and several agencies have already drawn up plans to start work as early as next year.

    The United States will have 10 years to apply once it formally signs on to the convention. The odds of that happening improved with the retirement of Senator Jesse Helms (R-NC), a leading treaty opponent, whose successor takes office 3 January. Several government advisory panels have urged the Bush Administration to sign the treaty, and White House officials have signaled that they'll consider the idea. Observers say the Senate would be likely to ratify that decision.

    Current signatories, however, face a tighter 2009 deadline, and poorer nations are pushing the U.N. to beef up trust funds to pay for training and mapping so that they won't be left behind. Croker and other experts say poorer nations might be able to use publicly available mapping data to file a preliminary claim and then fill in gaps once they can afford to do their own surveys.

    In the meantime, marine geoscientists are enjoying the view of the shelf produced by the latest land rush. Croker says the maps will lead not only to new political boundaries but also to novel ideas about how continents form, erode, and move. “We're just astonished,” he says, “by our ignorance about continental margins.”

    CANADA

    New Research Chairs Mean Brain Gain for Universities

    1. Wayne Kondro*
    1. Wayne Kondro writes from Ottawa.

    A national program to hire 2000 senior scientists and rising stars is luring top foreign talent to Canada—including some expatriates

    OTTAWA—When Ze'ev Seltzer got an offer last year to join a group at the University of Toronto, the pain researcher at Hebrew University in Jerusalem jumped at the chance. Canada, he says, is an emerging “superpower in pain research.”

    Seltzer isn't alone in making the trek to The Great White North. Last month, the government announced that 35% of the latest hires under the Canadian Research Chairs (CRC) program—some 43 of 123 scientists—were lured from abroad (see graphic). That's a much higher percentage than in the previous six rounds, when only 85 of 623 appointees came from outside Canada. The shift suggests that the government's $585 million program is finally meeting one of its major goals: to help reverse a brain drain that had been sapping Canadian science (Science, 22 October 1999, p. 651).

    The program, launched 3 years ago, is expected to fund 2000 research chairs over 5 to 7 years at 61 Canadian universities. The chairs are divided into two tiers, one for senior scientists and a second for rising stars. The hirings must be linked to an institution's strategic plan, which includes targeting areas for special emphasis. “Research is all about critical mass,” says Alan Bernstein, president of the Canadian Institutes of Health Research (CIHR), from which CRC winners draw operating grants. “It's all about having [people] down the hall you can talk to.”

    As a Tier 1 chair in comparative pain genetics, the 54-year-old Seltzer will earn $910,000 over 7 years and will be freed from his teaching duties. Like all chair recipients, Seltzer also is entitled to a sizable grant from the Canada Foundation for Innovation (CFI) and the provincial government to help set up his lab. The attractive package helped convince his wife to abandon a thriving architectural design career and make a new home in Toronto with their 11-year-old daughter. Now Seltzer is channeling more than 2 decades of work on phantom pain, the sensation people still feel from a lost tooth or limb, into a hunt for a gene that predisposes some people to various types of chronic pain.

    The program is also luring expatriates: Some 16 of the 43 recent appointees are heading back north of the 49th parallel from jobs in the United States. Among them is biochemist David Granville of the Scripps Research Institute in San Diego, California. As a Tier 2 chair in cardiovascular biochemistry at the University of British Columbia in Vancouver, the 32-year-old Granville receives a 5-year, $325,000 salary. Although that's a bit less than his current salary, Granville says that the chance to work in a lab with all the equipment needed for his research sold him on the move. “I think I'll be better equipped” to explore the pathways that regulate cell death, he says, thanks to infrastructure and research funding from CFI, CIHR, the Heart and Stroke Foundation of Canada, the Michael Smith Foundation for Health Research, and various provincial bodies.

    Northern flow.

    Canada's research chairs program is attracting more foreign scientists such as Israel's Ze'ev Seltzer.


    That's also the case for University of Budapest mathematician Karoly Bezdek, currently on leave at Cornell University. His Tier 1 chair will allow him to help establish a center for computational discrete geometry at the University of Calgary in Alberta—one equipped with high-performance computers and a bevy of graduate students.

    The CRC program might be rearranging the intellectual map in certain fields. In August, Stephen Saideman left Texas Tech University to accept a Tier 2 chair in international security and ethnic conflict at McGill University in Montreal. The program is helping make Montreal a “magnet” for researchers and graduate students in international relations, he says. “The heaviest hitters in the field, professors at Berkeley and Stanford, have sent me e-mails pushing their graduate students. I never had that at Texas Tech,” says Saideman, 36, who studies the social, economic, and political fallout caused by politicians who whip up ethnic fervor during election campaigns.

    CRC officials say they aren't surprised that the chairs program is finally attracting large numbers of scientists from abroad. Universities initially used the new positions “to retain their stars,” says CRC executive director René Durocher. Once that goal was reached, university recruiters felt free to roam the world, homing in on designated fields. “The idea is to get them to grow a critical mass, as opposed to growing in all sectors,” says Marc Renaud, president of the Social Sciences and Humanities Research Council and chair of the CRC steering committee. “The more you showcase yourself as the best in an area, the more likely you are to attract students and funds.”

    Other institutions have gone a step further. In addition to concentrating on strategic areas such as bioinformatics, renaissance studies, and the social determinants of health, McGill officials also pledged to recruit all of the university's 171 appointees from outside. “That's a real challenge,” admits provost and vice principal (academic) Luc Vinet. “But as the chairs program gets better and better known, recruiting abroad gets easier.”

    As the CRC program nears its midpoint, officials are more convinced than ever that it's helping Canada climb back into the upper scientific echelon. “All of a sudden, Canada is getting pretty hot, and we're not used to that,” says CFI president David Strangway, recounting a visit to Australia in which science officials there told him they “are losing people to Canada because of the things going on there.”

    The final step, Strangway says, is a proposal before the cabinet to hike the number of graduate students in universities. If that program takes off, he says, “then we will never, ever look back.”