News this Week

Science  22 Feb 2002:
Vol. 295, Issue 5559, pp. 1438
  1. SCIENTIFIC COMMUNITY

    Universities Review Policies for Onsite Classified Research

    1. David Malakoff

    Boston—U.S. academic researchers are again debating the wisdom of doing secret science on campus. But some Bush Administration officials hope that the answers will be different from the ones most universities came up with a generation ago.

    Last week, the Massachusetts Institute of Technology (MIT) launched a review of its policy toward classified research, which is now banned from the Cambridge campus. It's the latest major U.S. research university to do so in the wake of the 11 September terrorist assaults and subsequent anthrax mail attacks. And officials at Duke University in Durham, North Carolina, disclosed this month that they have rejected a grant that would have required the Pentagon to approve the release of any results.

    Government officials—including White House science adviser John Marburger—have suggested that homeland security may require academic scientists to withhold the fruits of some research, such as the genetic sequences of potential bioweapons or the recipes for toxic chemicals. Such a step would clash with policies at most major research institutions, however, which state that such restrictions are detrimental to teaching and scholarship.

    Many of the policies were adopted during the Vietnam War, and the issue flared up again during the defense buildup of the 1980s. In contrast, some observers have noted that universities have fewer qualms about similar restrictions imposed by industrial sponsors.

    At Duke, officials say that military funding agencies wanted to attach new requirements after 11 September to at least three grants, including one involving engineering research. In two instances, according to vice provost James Siedow, the school “pushed back” and convinced the agencies to drop the rules. “But in one, we had to tell the [researcher] that he couldn't accept the grant under our policy,” Siedow told a National Academy of Sciences public policy panel earlier this month in Washington, D.C. Duke received $210 million in federal research funding in 1999, ranking 18th in the country.

    At MIT, the 10th largest recipient of federal science funds, administrators “have refused every single proposed” grant restriction since 11 September, says aeronautics engineer Sheila Widnall. But on 13 February officials announced that Widnall, a former secretary of the Air Force, will lead a six-member faculty committee that will review the school's “policies on access to and disclosure of scientific information,” including how it handles proprietary research sponsored by industry. The committee's report is due at a May faculty meeting.

    The University of Washington, Seattle—which does a small amount of classified research as part of its $385 million allocation from the federal government—is mulling a similar study, says Alvin Kwiram, the school's vice provost for research. “We want to think about how the current state of war might affect our relationship with the government,” he says.

    Fresh look.

    Sheila Widnall is leading MIT's review of classified research rules, which now permit such work at off-campus affiliate Lincoln Lab.

    CREDITS: (TOP TO BOTTOM) REPRINTED WITH PERMISSION OF MIT LINCOLN LABORATORY; JOE MARQUETTE/AP

    Like Washington, MIT currently bars classified research from most of its campus but allows it at the affiliated Lincoln Laboratory in Lexington, Massachusetts. Other major research institutions have similar arrangements. They include Johns Hopkins University in Baltimore, Maryland, the top-ranked recipient of federal research funding, with $778 million. These off-campus facilities, with fenced-off classified research centers, typically bar students and foreign scientists.

    At least one science administrator says that a researcher's quest for knowledge should include the right to conduct secret research. “Academic freedom demands that if tenured faculty want to do classified research, they should be able to do it,” Larry Druffel said at a recent military science convention in Charleston. Druffel is president of the South Carolina Research Authority, a consortium of research parks in the state.

    Widnall believes that the strings attached to such research—including publication delays or restrictions that could hinder the careers of graduate students and postdocs—make such a policy unwise. Academic researchers who want to do classified research, she argues, “can go work for Boeing or Wright-Patterson” Air Force Base in Dayton, Ohio.

    Other university officials worry that their faculty members may already be under pressure to accept restrictions imposed by government funders, despite campus policies to the contrary. The Pentagon currently imposes no limits on basic research that it funds and seeks review only of applied science projects. But “it's a policy with nuance,” says John Sommerer, an administrator at Johns Hopkins University's Applied Physics Laboratory in Laurel, Maryland. “As a practical matter, some investigators are urged by [military] sponsors to delay or withhold basic research results, … [and] universities are kidding themselves if they think all faculty will adhere to an academic policy that is not in their best interests.”

    Marburger says he is glad to see such issues raised in public, although he admits that he is not an impartial observer. The nation needs talented academic scientists to fight the war against terrorism, he says, even if that means they might sometimes have to remain tight-lipped about their work. And he says he would favor rules that no longer “stigmatize” classified research.

    As a former university president, however (he headed the State University of New York, Stony Brook, for 14 years), Marburger also upholds the principle of autonomy. “It's important for universities to straighten this out campus by campus,” he says.

  2. CLIMATE CHANGE

    More Science and a Carrot, Not a Stick

    1. Richard A. Kerr

    President George W. Bush last week delivered on his promise to face up to the threat of global climate change. But the new policy, which includes a slight bump in climate-related research, seems unlikely to alter entrenched views on the intensely politicized subject. Speaking at the Silver Spring, Maryland, headquarters of the National Oceanic and Atmospheric Administration (NOAA), Bush outlined a go-slow, entirely voluntary alternative to the reduction of greenhouse gas emissions required by the Kyoto Protocol. Whether the approach will ever net any significant emission reductions is unclear, but Representative Sherwood Boehlert (R-NY), chair of the House Science Committee, pronounced himself satisfied with its tone if not its substance. “The statement shifts the debate once and for all from whether to limit carbon dioxide emissions to how much to limit them,” says Boehlert, who has criticized past environmental positions of his party's standard-bearer.

    Please help.

    Restraint of greenhouse gas emissions will be entirely voluntary.

    CREDIT: J. SCOTT APPLEWHITE/AP

    Bush's speech also highlighted two science initiatives, totaling $80 million, that are part of his 2003 budget request submitted this month to Congress. The Climate Change Technology Initiative would pump $40 million through the Department of Energy into as-yet-unidentified research and development, presumably including hot areas such as sequestration of carbon dioxide, with the goal of reducing greenhouse gas emissions. The $40 million Climate Change Research Initiative (CCRI) would complement research under the continuing $1.7 billion U.S. Global Change Research Program (USGCRP), which Bush's father began more than a decade ago. Presidential science adviser John Marburger last week told Boehlert's panel that the money would focus on finding science-driven answers to issues “of more immediate value to policy-makers” than what the global change program is addressing.

    The president's new strategy aims for an 18% reduction by 2012 in “greenhouse gas intensity,” the amount of emissions per unit of gross domestic product. The more efficiently Americans use fossil fuels and the more they use renewable energy, the more greenhouse gas intensity will decline. Bush hopes to entice businesses by providing $4.6 billion over 5 years in tax credits for the use of renewable energy sources. He would also encourage more efficient use of fossil fuels by enhancing the existing registry of emission reductions and giving credits to businesses that show absolute reductions in emissions. These credits would become valuable if U.S. emissions were ever directly regulated; in the meantime, they would remain meritorious but useless.

    Right direction.

    The president's goal is to accelerate the decline in energy consumed per dollar of economic output.

    SOURCE: DOE

    The prospects for attaining Bush's goal “depend a lot on what investors think,” says economist Raymond Kopp of the think tank Resources for the Future in Washington, D.C. Businesses are more likely to respond voluntarily, he says, if they believe that future regulation will punish slackers.

    But Kopp and others note that the goal, even if attained, is not as ambitious as it might sound. Greenhouse gas intensity has been declining for many decades, including a 17% drop in the most recent decade. A government forecast of a 14% decline in the next decade would leave only 4% to come from the voluntary incentives. The total is less than 20% of the reduction in U.S. emissions spelled out under the Kyoto Protocol, which the U.S. has rejected and which in itself would not have detectably reduced global warming.
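
    The arithmetic behind that criticism can be laid out in a few lines. In the sketch below, only the 18% goal and the 14% forecast come from the article; the emissions and GDP figures are hypothetical placeholders meant to show why intensity falls even when emissions do not.

        # Greenhouse gas intensity is emissions divided by economic output.
        # Only the 18% goal and 14% forecast are from the article; the
        # emissions and GDP numbers below are hypothetical illustrations.

        def intensity(emissions_mmt, gdp_trillion_usd):
            """Emissions per unit of gross domestic product."""
            return emissions_mmt / gdp_trillion_usd

        base = intensity(7000, 10.0)     # hypothetical base year
        later = intensity(7000, 12.0)    # flat emissions, 20% larger economy a decade later
        print(f"Intensity decline with no emissions cut: {1 - later / base:.0%}")  # ~17%

        goal, forecast = 0.18, 0.14      # figures quoted above
        print(f"Share left to the voluntary incentives: {goal - forecast:.0%}")    # 4%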

    The research initiatives are designed to help the government decide whether any regulation may be required, Bush explained. The $40 million CCRI would explore carbon cycling—which controls how much carbon dioxide remains in the atmosphere—and aerosols, including soot, that can either mask or enhance greenhouse warming. It would also bolster global climate observations in developing countries. NOAA would receive $18 million under the initiative, including $5 million to establish a climate modeling center at its Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. That might strengthen U.S. climate modeling now lacking in focus (Science, 5 February 1999, p. 766). The National Science Foundation would get $15 million, including $5 million to explore how society can cope with a changing climate.

    Scientists welcome the research initiative, despite its modest size next to the $1.7 billion USGCRP, which is scheduled for a $44 million boost. “It's not very much money compared to USGCRP,” agrees climatologist James Hansen of NASA's Goddard Institute for Space Studies in New York City. “But it is a signal to the agencies that this is the direction they should be going with the dollars they already have.” Scientists, he says, were already getting the message as they scrambled to help refine the priorities Bush set last June.

  3. ANIMAL WELFARE

    Senate Says No to New Rodent Rules

    1. David Malakoff

    Biomedical research groups have won a major victory in a long-running battle over U.S. government regulation of laboratory mice and rats. But the war isn't over.

    The U.S. Senate last week voted to bar the U.S. Department of Agriculture (USDA) from setting new rules on how scientists use and care for millions of research rodents. “A rodent could do a lot worse than live out its life-span in research facilities,” Senator Jesse Helms (R-NC) said as he successfully introduced an amendment to a major farm bill. Helms said that the new language will keep biomedical research from becoming entangled by “regulatory shenanigans” promoted by the “so-called animal-rights crowd.”

    Animal-rights groups have vowed to strip the new rule from any final version of the bill, which still must be reconciled with a House version that lacks the lab-animal language. “It's a setback, but we are not rolling over on this one,” says Nancy Blaney of the Working Group to Preserve the Animal Welfare Act, a coalition of animal-rights groups.

    Objection.

    Sen. Jesse Helms says that mice and rats won't benefit from “regulatory shenanigans.”

    CREDIT: GRANT HALVERSON/AP

    The controversy stems from a 30-year-old USDA policy that exempts mice, rats, and birds—which account for 95% of all experimental animals—from regulation under the Animal Welfare Act (AWA). Two years ago, after several court battles, USDA agreed to draft caging and care rules. The deal outraged biomedical groups, which argued that Congress never intended for AWA to cover laboratory animals. They also charged that USDA regulation would duplicate existing government and voluntary rules and drain millions of dollars from research accounts. The groups convinced Congress to delay the rules once, but last year lawmakers told USDA to begin writing the regulations (Science, 23 November 2001, p. 1637).

    Animal-rights groups plan to blanket negotiators on the final bill with appeals to drop the new language, which says that lab rats and mice aren't covered by AWA's definition of “animal.” “Lawmakers will be hearing from us. … This is making a huge change in the law without adequate debate,” says Blaney. But Frankie Trull of the National Association for Biomedical Research, which lobbied for the ban, says the new law “restates what has been agency policy for decades.”

    Many Washington policy watchers, meanwhile, are smiling at the sight of a research establishment often accused of liberalism joining forces with an archconservative and frequent opponent. Says one lobbyist: “I'm sure some scientists had to hold their noses when they learned that Jesse Helms was going to be their savior.”

  4. AIDS RESEARCH

    Longtime Rivalry Ends in Collaboration

    1. Jon Cohen

    AIDS researchers Robert Gallo and Luc Montagnier, who fought a long and bitter battle over credit for the discovery of HIV and the resultant blood test, this week announced plans to collaborate on developing AIDS vaccines for Africa and other impoverished regions. “A whole lot of people say, 'Why can't you guys collaborate, why don't you work together to try to help solve the problem?'” says Gallo, who heads the Institute of Human Virology at the University of Maryland, Baltimore. “It will stop a lot of that.” Montagnier, who recently retired from France's Pasteur Institute and now heads the World Foundation for AIDS Research and Prevention—an organization he helped form under the auspices of UNESCO—cites another reason: “If we join our efforts, it will be more credible for fund-raising. … We have some names that can help.”

    Montagnier approached Gallo a few years ago about setting up a collaboration. Gallo says he became intrigued in part because Montagnier's foundation has begun to develop testing sites in Côte d'Ivoire and Cameroon; a collaboration might speed the testing of Gallo's vaccines. The two also plan to merge their vaccine approaches. Gallo's team focuses on stitching various HIV genes into Salmonella, as well as studying a version of HIV's surface protein that they believe can stimulate potent anti-HIV antibodies. Montagnier has emphasized making vaccines from pieces of HIV's proteins gag, tat, and nef. Gallo says much of the joint work will be done in the lab of the University of Rome's Vittorio Colizzi, who has already been working with Gallo's lab and has had a separate project with Montagnier's foundation.

    Rapprochement.

    Robert Gallo (left) and Luc Montagnier sign collaboration agreement.

    CREDIT: ROBERT GALLO

    Gallo, 64, and Montagnier, 69, did collaborate before their famous falling-out. In 1983, when the cause of AIDS remained a mystery, Montagnier published a paper in Science, with Gallo's help, that implicated HIV as the cause. But it was not until 1984, when Gallo's lab published four back-to-back papers, also in Science (4 May 1984, pp. 497-508), that persuasive evidence linked HIV to the disease. Montagnier and his team felt badly slighted and charged that Gallo inappropriately hogged credit for the discovery. And when analyses proved that the blood test developed by Gallo's lab relied on a sample of HIV supplied by Montagnier, the question then became: Did Gallo's lab deliberately use the French virus without crediting the group, or was it an innocent contamination? A U.S. investigation cleared Gallo of wrongdoing, and Montagnier himself says he does not believe theft occurred. “This is settled now,” he says.

    The new collaboration with his longtime rival comes at an opportune moment for Gallo. This month, Chicago Tribune reporter John Crewdson published a highly critical book about Gallo's role in the discovery of HIV—Science Fictions: A Scientific Mystery, a Massive Cover-Up, and the Dark Legacy of Robert Gallo. “The timing was not calculated,” says Gallo, who dismisses speculation that the rapprochement is pure public relations. “They can say that,” he says, “but there's substance to the collaboration.”

    Montagnier also dismisses speculation that the two want to impress the Nobel Prize Committee, which notoriously shies away from researchers embroiled in controversies. “If the prize comes, it will come too late,” says Montagnier. “I would have preferred it could have come earlier, and then I think it could have given us more influence to do something in Africa.”

    Several AIDS researchers who know both Gallo and Montagnier are perplexed by the collaboration, as the two scientists clash not only in style but over substance. Montagnier, for example, contends that HIV relies on cofactors to cause disease, an idea that Gallo soundly rejects. Still, none wanted to comment publicly on the new effort. And both Gallo and Montagnier dismiss the idea that their relationship might devolve into another high-profile tempest. “We're wiser, more mature,” says Montagnier.

  5. MICROBIAL GENOMICS

    TIGR Begins Assault on the Anthrax Genome

    1. Martin Enserink

    Until recently, microbiologists were elated when the genome of their favorite bug was sequenced. Now, one genome is just not enough: The emerging gold standard is to produce multiple genomes of one species and compare them. Riding the bioterrorism wave, The Institute for Genomic Research (TIGR) in Rockville, Maryland, plans on taking anthrax to this next level. This year, the institute may sequence the genomes of as many as 20 different Bacillus anthracis strains from around the world, says TIGR director Claire Fraser—three times more than have been sequenced for any other species.

    Having a wide range of anthrax genome sequences could help investigators nab future bioterrorists and aid in designing drugs and vaccines. But the plan for the vast project was hatched last summer, well before fears of bioterrorism exploded. Charting genetic diversity across a large number of strains is fascinating in its own right, says Fraser: “It's something that we've wanted to do for a very long time, and it has nothing to do with the biodefense issue.”

    The strains will be selected by anthrax geneticist Paul Keim of Northern Arizona University in Flagstaff, who is involved in the criminal investigation of last fall's attacks (Science, 30 November 2001, p. 1810). TIGR and the funder, the National Institute of Allergy and Infectious Diseases, will review the project after the first four genomes are complete, says Fraser, to see how useful the information turns out to be. At the current price of about $150,000 per genome sequenced to eightfold coverage, the project could cost $3 million.

    Circled.

    The genome of the Ames strain is almost finished, but many others will follow.

    CREDIT: (DIAGRAM) TIMOTHY READ/TIGR; (INSET)MICHAEL ABBEY/PHOTO RESEARCHERS INC.

    TIGR has already produced the sequence of two B. anthracis genomes. One, a lab strain called Ames, has been in the works for several years, and the last gaps should be filled within weeks, TIGR's Timothy Read reported at a meeting.* In addition, the institute has determined the draft genome sequence of what is now known as the Florida strain: the anthrax that killed photo editor Robert Stevens of American Media Inc. in Boca Raton last October. Although that microbe, too, belongs to the Ames strain, TIGR says subtle differences set it apart from the first one—differences that may help identify the perpetrators of the attacks.

    Not long ago funders scoffed at the idea of comparing the genome sequences of multiple strains. Indeed, geneticist Frederick Blattner of the University of Wisconsin, Madison, recalls that funding agencies twice turned down his proposal to sequence a second Escherichia coli genome a few years ago, arguing that it would be a waste of money. When its genome was finally sequenced, that second bug—the O157:H7 strain, infamous for causing deadly food-borne outbreaks—turned out to have a million more base pairs than the first strain sequenced and almost 1400 new genes.

    Those differences have given researchers countless clues to understanding both microbes, says Fraser. By now, the genomes of five other strains of E. coli are being sequenced or have been finished; other pathogens to get such thorough treatment include Staphylococcus aureus and Chlamydia pneumoniae, with five strains each. The anthrax project would dwarf those efforts.

    Keim, who will also prepare the DNA for the sequencing effort, says he has come up with a list of candidate strains that best represents anthrax's phylogenetic diversity. Comparing the genomes should reveal why some strains are more virulent than others, or why some are better at surviving in the soil, says Martin Hugh-Jones of Louisiana State University, Baton Rouge. “I think you'll see what the really good genes are, and which ones are just coasting along,” he says.

    With existing tools, however, researchers have so far found very few genetic differences among strains. For instance, using his standard DNA fingerprinting system, which looks at 15 different markers called VNTRs, Keim has been unable to discriminate between the microbes used in the attacks and other representatives of the Ames strain. At the meeting, Keim reported an advance that may help federal investigators home in on the bioterrorists who sent the anthrax letters last fall. With a new marker discovered in his lab last year and dubbed Homomeric-1 (HM1), Keim says he's able to tell apart five different Ames strains, four collected from research laboratories and one from a goat that died in Texas in 1997.

    At the HM1 locus, B. anthracis has between 12 and 35 copies of adenine, one of DNA's building blocks, and the number varies for all five isolates. If the strain used in the mail attacks matches one of the strains obtained from laboratories, it could tell investigators where to focus their attention. But Keim, following FBI orders, declines to say which four labs the strains came from, or whether he had checked the Florida isolate for the same marker.

    *Second Conference on Microbial Genomes, Las Vegas, Nevada, 10-13 February.

  6. CLONING

    Carbon-Copy Clone Is the Real Thing

    1. Constance Holden

    “While the cloning of companion animals is not yet possible, Advanced Cell Technology is currently able to store cells from your animal now.”

    —ACT Web site, 15 February 2002.

    ACT needs to update its Web site. Last week, scientists in Texas unveiled the first clone of a pet—a kitten named CC, short for Copy Cat (also Carbon Copy). The kitty is the fruit of a privately funded initiative, Operation CopyCat, started a year ago by Mark Westhusin and colleagues at Texas A&M University, College Station. It's actually part of a larger and much more difficult project that aims to clone a dog.

    The researchers, who report their feat in the 21 February issue of Nature, say cat cloning is just about as efficient (or inefficient) as duplicating mice, cows, sheep, goats, or pigs. Westhusin's team first attempted to use skin fibroblast cells, inserting their nuclei into enucleated cat eggs. Although 82 cloned embryos were implanted into seven surrogate mother cats, only one pregnancy resulted, and the fetus died. In their next try, the scientists created embryos using nuclei from the cumulus cells surrounding the ova of a calico research cat named Rainbow. They implanted five embryos in a surrogate mother—three from cumulus cells and two from the oral mucosa cells. This time, one of the embryos from a cumulus cell made it to term. That puts the success rate at one out of 87.

    Copied cat.

    Two-month-old CC seems normal so far.

    CREDIT: RICHARD OLSENIUS

    Born by cesarean section on 22 December 2001, CC is a lively, normal-looking feline, the researchers say. She's not an exact copy of her calico progenitor because these coat markings result partly from random events during development. “I'm not at all surprised at the success of the Texas A&M team; they're an excellent group of scientists,” says Robert Lanza, medical director of ACT. He says ACT is moving away from pet cloning and has licensed its technology to the Texas group and others.

    CC is the first creation to emerge from the Missyplicity Project. The project was launched 5 years ago by 81-year-old Arizona financier John Sperling, who wants to be able to replace his husky-border collie mix, Missy, when her time is up (Science, 4 September 1998, p. 1443). So far, Sperling has put $3.7 million into the Texas A&M group's cloning efforts through a new company called Genetic Savings and Clone, based in College Station, Texas, and Sausalito, California.

    Dog clones, however, are still a long way off, Westhusin cautions. “Assisted reproduction technology in cats has all been worked out,” he says. And “you can get them to come into heat when you want.” But dogs come into heat more rarely and unpredictably. Dogs also release immature eggs from their ovaries, and researchers have found it very difficult to get them to ripen in a test tube. The focus in this area, says Westhusin, is still “how do you get viable ova.”

    The public appears willing to wait for dog cloning—and probably to pay $20,000 or so for it. Lou Hawthorne, CEO of Genetic Savings and Clone—which stores tissue for possible future pet cloning and is gearing up to open its own lab—says the start-up was formed in response to public demand. “When we launched [it] 2 years ago, we got thousands of calls within the first 24 hours,” he says. The company hopes to be offering cat cloning on a “case-by-case basis” by the end of the year, according to spokesperson Ben Carlson.

    For now, at least, pet cloning is mainly of interest to sentimental animal lovers and not to serious dog and cat people. Currently, says Michael Brim, spokesperson for the Cat Fanciers' Association in Manasquan, New Jersey, a clone “wouldn't be registrable with us as a pedigreed cat” because of its irregular parentage. Cloning, says Brim, “would basically jump over all the genetic rules of breeding” and take all the sport out of cat fancying. Besides, the whole idea is to breed animals to approach a perfect ideal, so a clone would be a ready-made has-been.

  7. CANCER RESEARCH

    Obstacle for Promising Cancer Therapy

    1. Jean Marx

    Cancer cells are wily. Drug therapies may temporarily halt tumor growth, but all too often the agents lose their effectiveness as the cells' genetic versatility allows them to become resistant. Researchers hoped that so-called antiangiogenesis therapies, which are aimed at preventing the growth of the new blood vessels needed to nourish tumors rather than at the tumors themselves, might circumvent this problem. But recent work suggests that tumors may be able to get around angiogenesis inhibitors, too.

    The latest example comes from Joanne Yu, Robert Kerbel, and their colleagues at Sunnybrook and Women's College Health Sciences Centre in Toronto. They report on page 1526 that tumors in which the p53 tumor suppressor gene has been inactivated—which happens in about 50% of human cancers—are much less responsive to angiogenesis inhibitors than comparable tumors in which the gene is still functional. Researchers already knew that cancer cells can counteract the inhibitors by pouring out more of the factors that promote new blood vessel growth. But loss of the p53 gene apparently renders tumor cells better able to survive in the low-oxygen conditions present in tumors deprived of an ample blood supply.

    James Pluda—who has just left the National Cancer Institute, where he oversaw antiangiogenesis trials, for MedImmune Inc. in Gaithersburg, Maryland—describes the Kerbel team's experiments as “a very nice piece of work,” one that will help researchers decipher results from clinical trials of angiogenesis inhibitors. Already, some 40 agents are being tested worldwide against a wide range of cancers. Neither Pluda nor others expect the new results to preclude development of the inhibitors. But, Pluda notes, the findings “give us something to look at if patients whose cancers initially respond then progress.”

    Some 12 years ago, Kerbel proposed that therapies based on inhibiting new blood vessel growth might not be prone to the resistance problem. But hints to the contrary have appeared in the literature, particularly when angiogenesis inhibitors are given alone. Two years ago, for example, Kerbel and his colleagues found that treating human tumors growing in mice with single antiangiogenesis drugs caused them to shrink—but after a month or two they began growing again. Kerbel wanted to know, he recalls, “why were we getting these relapses?”

    A clue came last year when his team found that cells within a single tumor vary in their ability to withstand the low-oxygen (hypoxic) conditions that angiogenesis inhibitors create. Because work by other investigators had shown that p53 loss makes cells more resistant to hypoxia, Kerbel, Yu, and their colleagues decided to test whether that genetic change could account for the reduced susceptibility to angiogenesis inhibitors.

    Holding their breath.

    Cells without p53 (bottom) withstand hypoxic conditions in tumors better than do those with the gene (top), whose death throes are indicated by green staining.

    CREDIT: J. YU ET AL.

    They obtained two lines of human colon cancer cells from Bert Vogelstein's group at Johns Hopkins University School of Medicine in Baltimore, Maryland; the lines were identical except that in one, the p53 gene had been inactivated. The Sunnybrook workers then transplanted either the unaltered cells or the p53−/− cells into mice. The tumors produced by the unaltered cells “responded quite nicely” to a combination of two antiangiogenic drugs, Kerbel says. But those produced by the p53−/− cells took longer to shrink, and the response was shorter lived, even though the therapy had shown long-lasting effectiveness in previous animal tests.

    When the researchers then implanted equal mixtures of p53+/+ and p53−/− cells in single mice, the proportion of p53+/+ cells decreased dramatically after treatment with the angiogenesis inhibitors. This result also suggests that in natural tumors, which usually consist of genetically diverse cell mixtures, antiangiogenesis therapy might select for the growth of p53−/− cells.

    As the Sunnybrook team suspected, the p53−/− cells survived better because they are more tolerant of hypoxia. In mixed tumors, the p53+/+ cells tend to cluster around the oxygen-giving blood vessels, and those in the more hypoxic regions succumb to the cellular suicide known as apoptosis. In contrast, very few p53−/− cells died of apoptosis even in low-oxygen regions.

    Although Kerbel concedes that the new results are a “bit of a downer,” he maintains that “they don't negate the idea of exploiting antiangiogenesis therapy.” Indeed, as angiogenesis pioneer Judah Folkman of Children's Hospital Boston points out, although tumors may be able to reduce their reliance on the vascular supply, “this paper should not be misinterpreted to mean that tumors can grow under completely [oxygen-free] conditions.” Such complete oxygen deprivation might be achieved by combining angiogenesis inhibitors with other drugs that destroy existing blood vessels.

    Folkman and Kerbel outline additional strategies that might get around the problem of tumor resistance to antiangiogenesis therapy, such as upping the dose of the inhibitors or giving them with drugs that specifically target hypoxic cells. The trick to defeating cancer, this work shows once again, will be to outmaneuver the enemy.

  8. MICROBIOLOGY

    Weight of the World on Microbes' Shoulders

    1. Jennifer Couzin

    Bacteria withstand stress far more gracefully than the rest of us. Sizzle them to above 110°C, freeze them to below -10°C, douse them with salt or acid—and, if they had eyelashes, they'd barely bat any. Now a study takes stressful conditions to a new extreme, crushing microbes beneath the equivalent of a 160-kilometer column of water—and showing that, voilà, they survive. To some microbiologists this suggests that similar organisms might survive the high-pressure environments of other celestial bodies, like Jupiter's moon Europa.

    To engineer this pressure, geochemist Anurag Sharma, microbiologist James Scott, and their colleagues at the Carnegie Institution of Washington in Washington, D.C., took a 50-year-old tool used by physicists and applied it to microbe physiology for the first time. The high-pressure device, called a diamond anvil cell, is created by placing two chiseled diamonds on the ends of opposing cylinders and screwing them together. The Carnegie group layered a film of water and bacteria between the diamonds and began cranking. On page 1514, they report that molecular spectroscopy inside the anvil revealed metabolic activity in the two common bacterial species used, the gut microbe Escherichia coli and Shewanella oneidensis, which breaks down metals. Under 1.6 gigapascals, roughly 1% of the bacteria lived to tell the tale.
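
    As a rough consistency check on those numbers, the hydrostatic relation P = ρgh does put 1.6 gigapascals at roughly a 160-kilometer column of water; the density and gravitational acceleration below are standard textbook values, not figures from the paper.

        # Height of a water column exerting 1.6 GPa, from P = rho * g * h.
        # Density and g are standard assumed values, not from the paper.
        P = 1.6e9        # pascals (1.6 gigapascals)
        rho = 1000.0     # kilograms per cubic meter (water)
        g = 9.81         # meters per second squared

        h_km = P / (rho * g) / 1000.0
        print(f"Equivalent water column: {h_km:.0f} km")   # ~163 km, consistent with the ~160 km quoted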

    The findings “could expand the habitable zone to areas of great pressure,” says John Baross, a professor of oceanography and astrobiology at the University of Washington, Seattle.

    Superbugs.

    Microbes carve liquid veins (purple) in ice, where they survive under extraordinarily high pressure.

    CREDIT: A. SHARMA

    Studying microbes under high pressure is technically tricky, but in this case the clarity of diamonds allowed Sharma's team to gaze through the gems with a microscope. To monitor the bacteria's behavior, the group added a dye to the solution. The dye turns clear in cells capable of breaking down an organic compound called formate. By melding observations of the dye with spectroscopy data that revealed peaks and valleys indicative of formate metabolism, the team could determine how many bacteria survived. Test microbes killed with heat before being squished, in contrast, failed to signal life.

    The pressure was so great that the solution turned into a form of room-temperature ice known as ice-VI. Of roughly 1 million bacteria, 10,000 remained viable after 30 hours, consuming formate and creating liquid pockets within the once-solid ice. When the researchers turned their diamond anvil upside down, the bacteria hung upside down as well—suggesting that their tails were functional enough for the microbes to cling to a surface or swim.

    But whether motility and metabolism are enough to qualify the bacteria as viable is contentious. “My measure of a live cell would be one that can grow and divide,” says Art Yayanos, an oceanographer at the Scripps Institution of Oceanography in La Jolla, California. Although Scott points out that the bacteria elongated and displayed characteristics suggestive of early cell division, he agrees that his group has not determined whether the bugs can divide. “I'm certain that what we're seeing is survival,” he says. “But I don't know if the cells … will be able to replicate.”

    Bacteria don't often encounter squeezed-together diamonds, but plenty of people would like to know whether similar creatures live in other extraordinarily high-pressure environments. It's hard to tell how the new results might translate to life on Earth or elsewhere, however. The rate of bacterial survival was low in this study, says microbiologist Derek Lovley of the University of Massachusetts, Amherst, who wonders if many microbes could endure long term. Baross would like to test how microbes that are adapted to extreme environments—unlike E. coli—would fare.

    In addition, high pressure is normally accompanied by extremes in temperature—generally hot, but on certain icy planets, very cold—and these high-pressure survivors were kept at a comfortable room temperature. Still, by bumping microbe hardiness to a new level, the study expands the range of the search for extraterrestrial life. “When people think about setting up missions to look for life, they tend to think about looking for it on the surface,” says Scott. “You might want to look underneath.”

  9. MASS EXTINCTIONS

    No 'Darkness at Noon' to Do In the Dinosaurs?

    1. Richard A. Kerr

    Try as they might, geologists have yet to find clear signs that any day in the past half-billion years was as bad as that one 65 million years ago, when a mountain-size asteroid or comet slammed into the Yucatán Peninsula. Life suffered mightily, the dinosaurs disappeared, and mammals seized the day. The immediate cause of death has long been listed as starvation after the 100-million-megaton impact threw up a sun-shrouding pall of dust. Even a lesser impact's dust could threaten civilization, some warned.

    Now, a geologist is questioning whether that ancient impact produced that much dust after all; perhaps the “darkness at noon” dust scenario was more like a dim winter's day in Seattle. The impact still gets the blame, but other killing mechanisms—an obscuring acid haze, global fires and smoke, or a combination of mechanisms—may have done the dirty work. The finding, if it stands up, might help explain the seeming absence of other impact crises and reduce, at least marginally, the potential hazard to civilization if another massive body were to strike.

    The challenge to the dust scenario comes from the latest attempt to estimate the amount of the smallest bits of debris from the impact. To cut off photosynthesis and starve the dinosaurs, copious amounts of submicrometer particles would have to have floated in the atmosphere for months. But this fine dust can't be measured directly in the 3-millimeter-thick, global layer of impact debris because it would have weathered away to clay.

    Geologist Kevin Pope of Geo Eco Arc Research in Aquasco, Maryland, scoured the literature for reports of the size and abundance of larger, more rugged bits of impact debris—typically quartz grains averaging 50 micrometers in size—found in the global layer, which consists mostly of relatively large spherules condensed from the plume of vapor that rose from the impact. From these measurements, Pope tried to understand the dispersal of the dust cloud.

    Beginning of the dinosaurs' end.

    Global fires triggered by the impact rather than obscuring dust may have done in the great beasts.

    CREDIT: COPYRIGHT DON DAVIS

    In the February issue of Geology, Pope reports that the size of this larger debris dropped off sharply with increasing distance from the impact, as if it had fallen from wind-blown debris clouds rather than being blasted around the globe by the impact. Indeed, when Pope modeled debris dispersal by winds alone, the modeled drop-off in both size and mass of debris grains resembled that seen in the global layer, but only if the total amount of debris produced by the impact was relatively small.

    In addition, assuming that the impact debris had a distribution of particle sizes similar to that of volcanic ash, Pope concludes that less than 1% of the debris consisted of submicrometer particles. Therefore, the dust in the global layer “is two to three orders of magnitude less than that needed to shut down photosynthesis,” he writes.

    Researchers are generally cautious about setting aside dust as a killer. “This is a very complicated problem,” says atmospheric physicist Brian Toon of the University of Colorado, Boulder. “We're all inferring this. The relation between big and small particles is not obvious.”

    Planetary physicist Kevin Zahnle of NASA's Ames Research Center in Mountain View, California, tends to agree that dust was not the likely killer, but he's not persuaded by Pope's evidence. He, Toon, and others have estimated that 10-kilometer impactors would produce huge amounts of dust. But Zahnle acknowledges that if dust really can trigger major extinctions, there should have been many impact-triggered extinctions in the past few hundred million years, because there have been many impactors larger than the few-kilometer minimum for a global dust cloud. Yet, none besides the dinosaur killer has been proven, so Zahnle now leans toward global fire and its sun-blocking smoke. Such fires would come from vapor condensing into blazing-hot droplets that fall to the surface, radiating heat on the way down; only an impactor 10 kilometers in size or larger could throw up enough vapor to set the planet on fire.

    “Everyone has their own favorite mechanism,” says Zahnle. “We don't know the facts, so you operate from your intuition.”

    If dust really isn't to blame, then the environmental punch of larger impacts would be less than researchers have generally assumed, and encounters with smaller objects might be less disastrous. But, as Zahnle cautions, because the only data come from a single huge example, taking a lesson from the death of the dinosaurs is fraught with difficulty.

  10. ANALYTICAL CHEMISTRY

    New Test Could Speed Bioweapon Detection

    1. Robert F. Service

    Last fall's anthrax attacks in the United States exposed more than the potential dangers of terrorism by mail. They also showed that current schemes for detecting the deadly bacterium carry an unwelcome trade-off: They're either fast but prone to mistakes, or highly accurate but slow (Science, 9 November 2001, p. 1266).

    Much the same can be said for tests to detect other pathogens, including both potential biowarfare agents such as smallpox and botulism and more common threats such as the bacteria that cause strep throat and other infections. But a new way to detect specific DNA sequences offers hope for swift and accurate microbe detection.

    On page 1503, three researchers at Northwestern University in Evanston, Illinois, report creating simple electronic chips that can detect DNA from anthrax and other organisms in minutes. The chips appear to be vastly more sensitive than other high-speed techniques. And, unlike many such tests, they don't rely on the polymerase chain reaction. This procedure, commonly used to amplify snippets of DNA, can be tricky to carry out and sometimes introduces unwanted errors.

    The new test is “a very clever idea that would lend itself to very inexpensive [diagnostic] devices,” says Stephen Morse, a molecular biologist at Columbia University's Mailman School of Public Health and former program manager of the Advanced Diagnostics Program at the Defense Advanced Research Projects Agency. “It sounds like this technique has a lot of potential.”

    Golden gate.

    New technique detects target DNA (here, anthrax) by using it to link fixed “capture strands” with “probe strands” attached to current-carrying gold nanoparticles.

    ILLUSTRATION: A. STONEBRAKER

    The work grew out of earlier experiments, in which Northwestern University chemist Chad Mirkin and colleagues linked DNA to microscopic specks of metal, known as nanoparticles, to create chemical complexes that changed color in the presence of a target DNA strand (Science, 22 August 1997, p. 1036). But because it takes a fair amount of target DNA to produce the color change, Mirkin decided to look for a more sensitive test.

    Mirkin and group members So-Jung Park and T. Andrew Taton (who is now at the University of Minnesota, Twin Cities) devised a two-part scheme for first capturing their DNA-based target, then converting that DNA into a wire to carry an electrical current between two electrodes. The researchers started by placing a pair of electrodes 20 millionths of a meter apart atop a glass microscope slide. To the glass surface between the electrodes, they anchored numerous identical snippets of single-stranded DNA, each designed to bind to one end of complementary DNA from the target organism: the anthrax bacterium. The team then immersed the setup in a beaker containing the target DNA and waited a few minutes while the chip-bound DNA yanked the target strands out of solution, filling the space between the electrodes with a patchy lawn of anthrax DNA.

    To turn those DNA strands into a wire, Mirkin's group created a second set of single-stranded DNAs, called “probe” strands. One end of each probe was designed to bind to the free end of the target DNA strand; the other end toted a tiny gold particle. When the probes were added to the solution and found their targets, they towed the gold particles into position between the two electrodes. These gold particles act like steppingstones in a river to carry electrical current between the shores of the two electrodes, Mirkin says. The electrical DNA detector could spot anthrax DNA in concentrations of just 500 femtomolar, orders of magnitude more sensitive than current high-speed detection schemes.
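
    To give a sense of scale for that detection limit, the number of target strands in a small sample can be estimated directly; the 1-microliter sample volume below is an assumption for illustration, not a figure from the paper.

        # Number of target DNA strands at 500 femtomolar in a small sample.
        # The 1-microliter volume is an illustrative assumption.
        AVOGADRO = 6.022e23          # molecules per mole
        concentration = 500e-15      # mol per liter (500 femtomolar)
        volume_liters = 1e-6         # 1 microliter, hypothetical

        strands = concentration * AVOGADRO * volume_liters
        print(f"Target strands per microliter: {strands:.1e}")   # ~3e5 molecules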

    The test turned out to be highly selective as well. All current DNA hybridization techniques are plagued by mismatches in which DNA strands that differ from the target by just one or two nucleotide bases also bind to capture strands, threatening false-positive readings. Because mismatched DNA doesn't bind as tightly to its partner as perfectly matched pairs do, researchers typically dislodge mismatched strands by heating their samples. But that requires additional equipment to heat and cool the samples.

    Mirkin's team found that adding a little salt produces the same result. Adding a solution with the right amount of salt, the Northwestern researchers discovered, forced target strands with even a single mismatch to shake loose, leaving behind only the fully complementary DNA sequences they were seeking.

    “The salt work is a very nice development,” says Dan Feldheim, a chemist at North Carolina State University in Raleigh. Eliminating the need for heating and cooling elements, he says, should make future DNA-detection devices both small and cheap.

    Another potential advantage is versatility. Mirkin and colleagues can pack their electrical DNA detectors into arrays that look for different target DNAs simultaneously. Such multitasking could pave the way for hand-held readers that scan for a battery of different infectious agents. Mirkin is already associated with a company called Nanosphere that he says is likely to commercialize this work.

  11. ASTRONOMY

    Glimpsing the Post-Hubble Universe

    1. Andrew Lawler

    Next month's servicing mission marks NASA's next-to-last planned visit to the Hubble Space Telescope. That has kicked off a lively debate about what's next.

    The aging Hubble Space Telescope should get a new lease on life early next month after astronauts on the space shuttle carry out extensive repairs and upgrades. But astronomers are tempering their excitement, because NASA plans to turn off the highly successful instrument at the end of the decade after one more servicing mission.

    That may seem a long way off, but it's practically next week on the time scale needed to plan major observatories. The Next Generation Space Telescope (NGST) is slated to replace Hubble in 2009, but it will observe in the infrared range, leaving optical and ultraviolet astronomers without a major space-based instrument. Meanwhile, Earth-based telescopes are experimenting with new technologies that could give them an edge over their more expensive space brethren. To avoid being caught short, researchers this spring will start to tackle what scientific questions over the next 2 decades are best answered through space observations—and which technologies are needed to get the job done.

    The exercise will also tempt the community to reexamine whether it makes sense to halt operations of a popular and productive observatory. Previous efforts to extend Hubble's life were brushed off by former NASA Administrator Dan Goldin as coming from “Hubble huggers.” NASA officials say that servicing Hubble costs too much, and that the $1.3 billion NGST effort is a better use of scarce resources. But Goldin has been replaced by Sean O'Keefe, and Hubble enthusiasts are heartened by the changing of the guard. “Now there's new leadership and lots of interest in finding a way to keep it on line,” says Harry Ferguson of the Space Telescope Science Institute (STScI) in Baltimore, Maryland, which manages Hubble.

    It may be hard to convince NASA officials, however, who dismiss the idea as politically and technically impractical. “I'm a Hubble hugger in the ultimate sense,” jokes NASA space science chief Ed Weiler, who spent 17 years as the telescope's chief scientist. “I want to hug it in 2010” as a display in the Smithsonian National Air and Space Museum, he says.

    Room for service?

    After an embarrassing debut in 1990, when it was launched with a defective mirror, Hubble has been one of the most successful space science projects. Even now, demand for glass time continues to rise, with the biggest surges coming after each servicing mission. The first, in December 1993, fixed the mirror, and two subsequent missions provided huge leaps in Hubble's data-collecting abilities. During the 1999 servicing mission, for example, the shuttle crew replaced six dying or dead gyroscopes, critical for the telescope's pointing accuracy. Astronauts also installed a new main computer, an advanced data recorder, a transmitter, a better guidance sensor, and insulation.

    Next month's mission will add a new science instrument, a power switching station, and four new solar arrays that are smaller but will produce 30% more power than the current radiation- and debris-pitted sails. The crew also will install a new cooling system that will reactivate a dormant instrument (see sidebar, p. 1450). The result, say Hubble managers, will be virtually a new observatory. “It's been an excellent value for the money,” concludes Steve Beckwith, STScI's director.

    But those missions remain technically challenging, and they aren't cheap—$172 million for next month's effort. Planning them requires a standing army of technicians and engineers, plus exhaustive training for the astronauts who carry out the work. And then there's the $500 million price tag for a shuttle flight.

    These huge costs led to an agreement in the late 1990s among Congress, NASA, and the White House Office of Management and Budget to halt servicing missions after 2004 and to funnel money into NGST. “If you don't stop servicing Hubble, then NGST won't get started,” says Weiler, adding that Hubble's 15-year lifetime has already been extended to 20 years. By then, he says, even servicing missions won't be able to keep Hubble's technology up-to-date. “We could continue to do great science with Hubble,” says Weiler. “But I want to do outstanding science. Without unlimited funding, we've got to make tough choices.”

    Long-distance service.

    A 1993 servicing mission put Hubble back on track for a decade of spectacular science.

    CREDIT: MSFC/NASA

    Beckwith, however, says the choice isn't necessarily that stark. NASA could conduct a relatively inexpensive repair mission after 2004 that wouldn't entail costly instrument upgrades, he says, and technical advances could cut Hubble's $40 million annual operating costs in half by the end of the decade. Keeping Hubble alive a bit longer would also restore a 1-year overlap between NGST and Hubble that disappeared as the new telescope's launch date has slipped at least a year, he adds.

    But Anne Kinney, a senior NASA space science official, says that an overlap is not vital to the agency's research program. And even reduced annual operating costs—not to mention the cost of another servicing mission—would break NASA's limited budget, she and Weiler say. Turning off Hubble “is not a scientific question, it's a political one,” concedes Beckwith.

    Out in the cold

    The voice of researchers remains important, however, in shaping a post-Hubble future. Next month, infrared astronomers will gather in Maryland to discuss their scientific goals and the missions needed to reach them. Optical and ultraviolet scientists will hold a similar discussion in April at the University of Chicago. “We are thinking ahead more than a decade,” says Robert Kennicutt of the University of Arizona in Tucson, who is helping to organize the Chicago meeting. But scientists hoping to rally support for a fast-track payload will be disappointed, Kennicutt warns: “This is not an effort to shoehorn a new mission into NASA's planning cycle.”

    Coming up with a cohesive plan is particularly pressing for researchers who use the ultraviolet and optical wavelengths that will not be a part of NGST's portfolio. Although smaller missions are on the drawing board, NASA has no plans right now for a major space telescope to serve their needs.

    Meeting participants are expected to weigh the relative merits of exploring a range of scientific topics, including the nature of the intergalactic medium, the precise distances among galaxies, extrasolar planets, and the identification of fainter galaxies to understand galactic evolution. “It is real easy to define exciting problems,” says Kennicutt. “It's more difficult to ask how many will still be cutting edge in a decade.” Getting a firm grip on the scientific issues is vital, says Kinney. “Scientists tend to think in terms of facilities; from NASA's point of view, we need to know what science questions are important.”

    Researchers will also try to mesh their plans with the growing capabilities of ground-based telescopes. Those on the ground cannot compete with the cold and clear conditions of space when it comes to gathering infrared, x-ray, and ultraviolet wavelengths, and that fact is unlikely to change for a long time. But thanks to new technologies—such as adaptive optics—Earth-bound telescopes are starting to rival some of the best of Hubble's images in the optical realm. And ground observations are a lot less expensive.

    “In some parts of the visible spectrum, there's not necessarily an advantage to a space telescope,” Kennicutt says. But there is no consensus on when or whether those advances will equal or supersede space-based instruments. “There is a gray area when it comes to the visible, and [there is] quite a divergence of opinion,” he adds.

    For example, adaptive optics can monitor how the atmosphere distorts the image of a single bright star and compensate for the entire field by making minute adjustments to the mirror. The latest technique makes use of an artificial “star” created by a spot of laser light shot high into the sky. Recent pictures from the European Southern Observatory's Very Large Telescope atop Cerro Paranal, Chile, are even sharper than some Hubble images. But that clarity fades for wider fields.

    Although some astronomers say wide-field optical imaging from space will retain an advantage over ground-based mirrors, the University of Arizona's Roger Angel says that ground-based technologies are catching up at telescopes such as the W. M. Keck Observatory atop Mauna Kea in Hawaii. At Keck, a series of lasers shot into the atmosphere allow researchers to adjust their mirrors to compensate for atmospheric effects over wider fields. Developing accurate lasers has proved problematic, but they are improving, says Angel. In fact, some ground-based astronomers soon expect to be able to capture images of extrasolar planets (Science, 25 January, p. 616). “The space-based guys always need to be looking over their shoulders,” he adds.

    Technology bridge

    The outcome of this horse race between ground-based adaptive optics and space-based technologies will play a key role in NASA's decision on which missions to fund. “We need a hard-nosed discussion of the technology for the future,” says Kinney. Agency managers are pondering a 2004 initiative to come up with better and lighter mirrors, detectors, coatings, and other technologies. So far, such long-term vision has suffered from near-term budget troubles. A NASA proposal to spend some $5 million in 2003 on solar sails and lightweight optics was shot down by White House budget officials to cover cost overruns in other programs. But NASA officials say they are confident that O'Keefe will prove amenable to spending money on long-term technologies for space telescopes.

    Telescopes of every wavelength need bigger and lighter optics. But the details differ. Ultraviolet instruments, for example, require greater precision in their optics because of the shorter wavelengths; mirror contamination, which absorbs those shorter wavelengths, also poses a major threat. Infrared astronomy demands low temperatures, to reduce the amount of contaminating heat and light. And x-ray astronomy requires higher precision and multiple mirrors; the current use of cone-shaped mirrors is limited because increasing their size does not rapidly increase light collection. Harley Thronson, NASA's chief astronomy technologist, says that better detectors are essential for all wavelengths, pegging total research and development costs at $30 million to $40 million a year.
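
    The x-ray limitation has a simple geometric root. X-rays reflect only at grazing incidence, so a cone- or cylinder-like mirror shell of radius R and length L presents an on-axis collecting area of only about

        \[
        A \approx 2\pi R L \sin\theta,
        \]

    where the graze angle θ is typically a fraction of a degree. Enlarging the shell therefore adds far less collecting area than it would for a normal-incidence mirror, which is why x-ray telescopes nest many thin shells. (This scaling is standard grazing-incidence optics, offered as background rather than something reported from the meeting.)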

    Meanwhile, NASA managers say they want to keep close tabs on advances in ground-based astronomy. The 2004 initiative, adds Thronson, might include collaboration with the National Science Foundation, which oversees ground-based telescopes.

    Such cooperation would no doubt please the White House, which pushed unsuccessfully last year to combine the two agencies' largely separate efforts under one roof. Coordinating technologies could be a first step toward determining which missions are best met by space- or ground-based instruments. “Hubble has been king of the hill for so long, it's hard to think of a different direction,” says Angel. With Hubble's demise in sight, however, the time may be ripe for a new approach.

    HUBBLE'S HITS

    Comet Shoemaker-Levy 9: Watched the comet slam into Jupiter and studied its plume and wake. (1994)

    HST JUPITER IMAGING TEAM

    Protoplanetary dust disks: Showed that they are common around young stars but often quickly lose their gas. (1993)

    C. R. O'DELL/RICE UNIVERSITY/NASA

    Globular clusters: Showed that those in our galaxy are all roughly the same age. (1991)

    47 Tucanae: Vain search for Jupiter-sized planets in this cluster showed that such planets are much rarer in globular clusters than in the solar neighborhood. (2000)

    R. GILLILAND/STSCI/NASA

    Quasars: Confirmed that they reside in host galaxies, many of them colliding. (1997)

    J. BAHCALL, M. DISNEY/STSCI/NASA

    Supermassive black holes: Discovered that they dwell in the cores of most galaxies. (1997)

    K. GEBHARDT, T. LAUER/STSCI/NASA

    Gamma ray bursts: Helped show that they reside in galaxies that are forming stars at high rates. (1997)

    K. SAHU ET AL./STSCI/NASA

    Hubble Constant: Measured the expansion rate of the universe with an uncertainty of only about 10%. (2001)

    Hubble Deep Field: Deepest-ever optical/ultraviolet/infrared image of the universe showed galaxies when the universe was less than a billion years old. (1996)

    R. WILLIAMS/STSCI/NASA

    High-redshift supernovae: Most-distant exploding stars ever observed gave further evidence of an accelerating universe. (1998)

    A. RIESS/STSCI/NASA
  12. ASTRONOMY

    NGST Aims for Big Advance by Going Back in Time

    1. Andrew Lawler*
    1. With reporting by Daniel Clery.

    The Hubble Space Telescope is a hard act to follow. But the designers of the Next Generation Space Telescope (NGST) expect the $1.3 billion instrument to win loud applause by greatly extending Hubble's vision. It should be able to peer back in time to the first generations of stars and galaxies, and its sharp eyesight should spot individual stars in nearby galaxies. Their confidence remains unshaken after several years of daunting technical challenges, budget threats, and sometimes difficult negotiations among the U.S., European, and Canadian partners.

    Scientists began thinking about NGST in the 1990s as a vehicle for probing the early universe—primarily at infrared wavelengths, which are absorbed by Earth's atmosphere. They are giving it a 6-meter mirror that will provide more than six times the viewing area of Hubble's 2.4-meter mirror. On the other hand, its designers won't have a chance to fix any flaws after its scheduled launch in late 2009 because its location will put it beyond the reach of the space shuttle. Hovering beyond even the faintest wisps of the atmosphere, NGST will peer back in time and “see” the universe when it was between 1 million and a few billion years old. The National Academy of Sciences' most recent decadal panel on astronomy makes it the field's top priority (Science, 26 May 2000, p. 1310), and, although NGST's capabilities were scaled back substantially last year to trim costs, another academy review concluded last fall that the revised blueprint remains “a sound approach.”
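
    The quoted gain in viewing area follows directly from the ratio of mirror diameters; this is a back-of-the-envelope check that ignores the central obstruction and the gaps between mirror segments:

        \[
        \frac{A_{\mathrm{NGST}}}{A_{\mathrm{Hubble}}}
          = \left(\frac{D_{\mathrm{NGST}}}{D_{\mathrm{Hubble}}}\right)^{2}
          = \left(\frac{6.0\ \mathrm{m}}{2.4\ \mathrm{m}}\right)^{2}
          \approx 6.3
        \]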

    Time traveler.

    The Next Generation Space Telescope hopes to glimpse the early universe.

    CREDIT: ESA

    NASA will contribute about $1 billion for the mirror, instruments, spacecraft systems and integration, and launch. European contributions will total some $250 million to build the spacecraft platform and 50% of the three instruments. Canada will chip in $50 million for half of one instrument. “We hope to have the details worked out by the fall,” says Sergio Volonté, the European Space Agency's (ESA's) coordinator for astronomy and fundamental physics. In exchange, European astronomers will receive at least 15% of NGST's observing time, and Canada will get 5%. Although use of the telescope will be governed by peer review, European researchers now receive a similar percentage of time on Hubble despite having made a far smaller contribution to its cost.
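
    Measured against the roughly $1.3 billion total, those contributions line up broadly with the observing-time shares; the percentages below are a back-of-the-envelope comparison, not an official accounting:

        \[
        \frac{250}{1300} \approx 19\% \ \text{(ESA)}, \qquad
        \frac{50}{1300} \approx 4\% \ \text{(Canada)},
        \]

    versus at least 15% and 5% of NGST observing time, respectively.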

    NASA's Anne Kinney says that U.S. negotiators had wanted to launch NGST aboard Europe's Ariane rocket and do the bulk of the spacecraft work in the United States. But under ESA bookkeeping practices, France—which developed Ariane—would have received sole credit for the contribution. So the ESA-built platform will be shipped to the United States, integrated with other components, and launched aboard a U.S. Atlas rocket. The parties must still tackle less pressing details such as sharing communications and ground facilities. “Most of the issues are resolved,” adds Kinney.

    This spring NASA will choose a U.S. prime contractor for NGST. And proposals for the spacecraft's instruments are due 5 March.

  13. ASTRONOMY

    A Hot New Camera--and a Chilly Revival

    1. Robert Irion

    Next week's service call to the Hubble Space Telescope should double the pleasure of astronomers. Not only will astronauts install a spiffy new camera, but they will resurrect an older instrument that sputtered to a premature shutdown.

    Slated for rescue is the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS). Installed in February 1997, NICMOS peered through dust at objects otherwise hidden from Hubble's view. But an accidental contact in the cooling system caused the camera's solid-nitrogen refrigerant to boil away, shutting NICMOS down by January 1999.

    The daring fix looks and acts like the minirefrigerator in a college dorm room. “Refilling nitrogen in orbit isn't easy, so we figured out how to cool the camera mechanically,” says project scientist Ed Cheng of NASA's Goddard Space Flight Center in Greenbelt, Maryland. The 90-kilogram cryocooler—built by Creare Inc. of Hanover, New Hampshire—will circulate compressed neon gas that expands within the existing NICMOS plumbing, driven by half-centimeter turbines whirling 7000 times per second. The vibration-free system should chill NICMOS to a frigid 70 kelvin for many years, Cheng says.

    Sharp arcs.

    Large detectors in Hubble's new Advanced Camera for Surveys will capture stunning views of gravitationally distorted galaxies, this simulation predicts.

    CREDIT: ACS TEAM

    Expect hotter results from the Advanced Camera for Surveys (ACS), which astronauts will slide into the slot occupied by Hubble's last original instrument, the Faint Object Camera. Pictures from ACS will cover twice as much area on the sky as those from Hubble's workhorse, the Wide Field and Planetary Camera 2. Moreover, the camera's imaging chips are three to five times as efficient and its vision twice as sharp. Those gains will have a dramatic impact, says lead scientist Holland Ford of Johns Hopkins University in Baltimore, Maryland: “After 1 or 2 years in orbit, ACS will have detected more faint stars and galaxies than all of Hubble's previous instruments combined.”
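
    Ford's prediction is easier to credit with a rough figure of merit in hand. For surveys of faint sources, discovery speed scales roughly with field area times detector throughput, so the quoted gains multiply; the estimate below is illustrative, not a number from the ACS team:

        \[
        \text{survey speed gain} \sim 2 \times (3\ \text{to}\ 5) \approx 6\ \text{to}\ 10
        \]

    times that of the Wide Field and Planetary Camera 2.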

    Astronomers will use ACS to survey wide patches of the distant universe. Such studies should write the book on how clusters of galaxies have evolved, Ford says. ACS images of the eerie distortions of space caused by massive clusters, called gravitational lenses, may reveal magnified images of the first galactic building blocks less than a billion years after the big bang. Further, surveys of the outermost solar system should expose a host of faint icy objects beyond Neptune and Pluto, within the poorly understood Kuiper belt. ACS also boasts a high-resolution camera for even sharper images of small patches, such as the cores of galaxies and dusty disks around young stars, and a detector for spotting ultraviolet light from hot stars, aurorae on Jupiter and Saturn, and other energetic objects. A small disk will block nearly all light from bright central sources for certain studies.

    “Everybody and his brother wants to use this camera,” says astronomer Richard Ellis of the California Institute of Technology in Pasadena. The competition is already fierce even by Hubble's stingy standards. In the first round of proposals for observing time on ACS, just one of every 20 requests made the cut.

  14. NEUROSCIENCE

    How Neurons Know That It's C-c-c-c-cold Outside

    1. Caroline Seydel*
    1. Caroline Seydel is a freelance science writer in Los Angeles.

    A newfound mint receptor and a cold-sensitive balance of ion movements can each send the chilling message

    Mouthwash and chewing gum makers advertise that their minty products feel and taste cool. It's not just a metaphor. To a cold-sensitive neuron, menthol, the active ingredient in mint, might just as well be ice. Until recently, however, researchers didn't have a good grasp of how nerve cells transmit cold sensations. Now a spate of papers published this month offers two answers. Sometimes specialized menthol sensors do the job; other times neurons choreograph the movement of ions in response to cold in the absence of a specific receptor. Researchers suspect that the two mechanisms operate in concert, either in separate populations of cells or together in the same neurons.

    Fifty years ago, researchers discovered that some nerve endings react to both cold temperatures and menthol. That meant that menthol could be used to identify cold-sensitive neurons, says neuropharmacologist David Julius of the University of California, San Francisco. These neurons should be mirror images of neurons that are sensitive to both hot temperatures and capsaicin, the chemical that gives chili peppers their sting. In 1997 Julius identified a receptor that registers both types of heat sensations, and since then he's turned his attention from the oven to the freezer. “Is there a bona fide menthol receptor?” he wondered. If so, “what does it look like, and is it [also] a cold receptor?”

    Last year work by Gordon Reid's group at the University of Bucharest, Romania, pointed to the existence of such a receptor. The team observed a flow of ions into certain neurons when the cells were stimulated by cold and found that exposure to menthol increased that flow—exactly the behavior one would expect if a cold- and menthol-sensitive receptor were at work. That receptor would likely be a channel in the cell membrane that allows ions to flow into the cell and activate the neuron when it gets cold. (Because they looked only at the electrical characteristics of the current, the researchers did not identify any specific neural receptors.) They were a bit perplexed, however, as the new findings seemed to contradict the team's earlier work suggesting that menthol stimulated neurons not by allowing ions to flow into the cell but by preventing ions from leaving the cell.

    The new observations, by three independent research groups, support both mechanisms. Two teams cloned proteins that appear to be the long-sought cold receptor. Under cold temperatures, the proteins shuttle positive ions into the cells. The third team reports that cooling blocks a protein channel through which positive ions leak back out of the cell, trapping extra charge inside the neuron. “They're both going on,” Reid says of the two mechanisms, but “the relative importance of the two is still open.”

    Cool customers.

    Neurons expressing the newfound receptor relax at room temperature (top) but fill with calcium ions at 15°C (bottom). Color changes from blue to green to red as the ion concentration inside the cell increases.

    CREDIT: PEIER ET AL., CELL, PUBLISHED ONLINE 11 FEBRUARY 2002

    In search of a channel responsive to cold, Julius and his colleagues isolated menthol-sensitive neurons from the faces of rats. Even in the face, which is particularly sensitive to cold, only about 10% to 15% of neurons responded to the cooling compound. After verifying that cold temperatures also stimulated the cells, the researchers inserted various genes expressed by those cells into other cells that don't normally register cold or menthol. One gene opened up a new world of sensations for the cells, making them respond to low temperatures as well as the minty chemical, the group reports online 11 February in Nature. The gene encodes a receptor that they named, straightforwardly enough, CMR1, for cold and menthol receptor. The team found that the receptor is present primarily in small-diameter neurons, which typically play a role in sensing pain.

    A second group, led by Ardem Patapoutian of the Scripps Research Institute in La Jolla, California, and Stuart Bevan of the Novartis Institute for Medical Sciences in London, approached the problem from the opposite direction. They searched DNA databases for sequences similar to the recently identified heat and capsaicin receptor and pulled out a mouse gene they called TRPM8. When transferred into cultured cells, the gene conferred cold and menthol sensitivity, the group reports in the 11 February online edition of Cell.

    Although the two groups didn't realize it until a few days before their papers were published, TRPM8 and CMR1 are the same receptor. Neither group yet knows exactly how cold affects the receptor, but the “most straightforward” explanation, Patapoutian says, is that “colder temperatures change the conformation of the channel,” allowing positive ions to flood into the cell. A similar adjustment could occur when minty-cool menthol binds to the receptor.

    But “by no means can this [receptor] by itself explain perception of cold,” Patapoutian says. “Other mechanisms and channels will be involved, given that humans can detect differences as little as 1°C.”

    Indeed, neurons with designated cold receptors aren't the only ones to feel a chill. Félix Viana and colleagues at Miguel Hernández University in San Juan de Alicante, Spain, propose that cold sensitivity results from an interaction among channels that hold potassium ions inside the cell and those that let them out—a process that has no need of a specific cold receptor. The team isolated a population of cold-sensitive neurons from the faces of mice. These neurons, when exposed to cold or menthol, close a so-called “leak channel” that normally allows positive potassium ions to trickle out of the cell, the researchers found. Closing the channel holds positive ions inside the cell, exciting the neuron, they report online on 11 February in Nature Neuroscience. These observations confirm similar findings in nonfacial neurons reported by Reid's group in January 2001.
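
    A minimal sketch of why closing a potassium leak channel excites a neuron, using the standard Goldman-Hodgkin-Katz voltage equation: the ion concentrations are textbook-typical mammalian values and the relative permeabilities are hypothetical, so none of the numbers come from the papers discussed here.

        import math

        def ghk_voltage(p_k, p_na, p_cl, T=310.15):
            # Goldman-Hodgkin-Katz voltage equation; returns millivolts.
            # Concentrations (mM) are textbook-typical mammalian values, for illustration only.
            R, F = 8.314, 96485.0            # gas constant, Faraday constant
            K_o, K_i = 5.0, 140.0            # extracellular / intracellular K+
            Na_o, Na_i = 145.0, 12.0         # Na+
            Cl_o, Cl_i = 110.0, 10.0         # Cl- (anion: inside/outside terms swap in the ratio)
            numerator = p_k * K_o + p_na * Na_o + p_cl * Cl_i
            denominator = p_k * K_i + p_na * Na_i + p_cl * Cl_o
            return 1000.0 * (R * T / F) * math.log(numerator / denominator)

        # At rest the K+ leak dominates (relative permeabilities are hypothetical).
        v_rest = ghk_voltage(p_k=1.0, p_na=0.05, p_cl=0.45)

        # Cold or menthol closes part of the K+ leak: cut p_k, leave the rest alone.
        v_cold = ghk_voltage(p_k=0.3, p_na=0.05, p_cl=0.45)

        print(f"resting potential:  {v_rest:.0f} mV")   # about -65 mV
        print(f"after leak closure: {v_cold:.0f} mV")   # about -52 mV, i.e., depolarized and more excitable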

    Viana's group found another difference between cold-sensitive and cold-insensitive sensory cells that are otherwise quite similar. The latter seem to have a “braking” mechanism that is absent from cold-sensitive cells. This brake, called a voltage-gated channel, slows neural excitation by shuttling potassium ions out of the neuron, despite the closure of the leak channel. Cells' sensitivity to cold, therefore, rests not on the presence or absence of one particular channel but rather on “the unique combination of channels [the cells] express,” Viana says.

    Neurons aren't necessarily committed to a life of sensitivity or obliviousness to cold. Viana's team found that blocking the voltage-gated potassium channel in neurons normally impervious to cold made about 60% of those cells cold-sensitive. This mechanism could explain how nerve injury sometimes causes painful sensitivity to cold, the researchers suggest. Previous reports indicate that nerve injury disrupts certain voltage-gated potassium channels. If a neuron contains too few of these channels, the researchers suggest, it could become hypersensitive to cold.

    “It looks like you have a population [of cells] that can become responsive [to cold] in the absence of CMR1,” says neuroscientist Michael Gold of the University of Maryland, Baltimore. “My gut feeling is that in the end, it's going to turn out to be probably a combination of both,” with specialized receptors working in concert with a balance of ion channels to let people know that, baby, it's cold outside.

  15. ASTRONOMY

    The Tumultuous Teens of Supernova 1987A

    1. Robert Irion

    With the explosion long faded, the most spectacular star in centuries is treating astronomers to a rip-roaring new light show

    When Supernova 1987A burst onto the cover of Time 15 years ago this week, astrophysicist Stan Woosley of the University of California, Santa Cruz, told the magazine: “It's like Christmas. We've been waiting for this for 383 years.” Today, the supernova is the gift that keeps on giving. Although the explosion has dimmed, space around it now rages with energy. This maelstrom promises the closest look yet at how nature churns fresh elements into the cosmos.

    The rebirth started a few years ago, when the supernova's blast wave hit part of a ragged ring of gas shed by the bloated star before it collapsed and self-destructed. Recently, that fierce impact has spread from a single “hot spot” to a dozen, and more are coming. “It's all going to become one solid hot spot,” says astrophysicist Richard McCray of the University of Colorado, Boulder. “It will become 100 to 1000 times brighter in the next decade.”

    That's not bright enough to make 1987A visible to the unaided eye. However, the fireworks should reach beyond the inner ring to light up a puzzling cloud of gas and dust, also cast off by the star long ago. The shape and motions of this cocoon may reveal whether the doomed star had a companion. The shock wave itself will start to echo throughout the supernova's debris, reflecting like a sonic boom in a canyon. That chaos will disperse the star's pristine elements far into space: carbon, oxygen, iron, and heavier substances flash-forged by the explosion.

    It's all part of 1987A's transformation into a supernova remnant like the famed Crab Nebula, whose explosive birth Chinese astronomers noted 948 years ago. This time, astronomers have a ringside seat with modern telescopes at their disposal. “We've never had a good record of how a supernova feeds heavy elements into the interstellar medium to make future stars and planets,” says astronomer Arlin Crotts of Columbia University in New York City. “Supernova 1987A will lay it out for us in exquisite detail.”

    Tuning up.

    Radio waves from Supernova 1987A keep getting brighter as a shock wave boosts electrons to near the speed of light.

    CREDIT: RICHARD MANCHESTER ET AL./CSIRO AUSTRALIA TELESCOPE NATIONAL FACILITY

    Coasting toward a collision

    For many years after its detonation, which astronomers first saw on 23 February 1987, the supernova was in what Crotts calls a “coasting phase.” Its blast wave—the leading edge of matter ejected by the explosion—raced at about 1/20th the speed of light through a cavity of mostly empty space about 2 light-years wide. Meanwhile, light from the explosion itself, fueled by the radioactive decay of unstable isotopes, faded inexorably. That glow is now just one 10-millionth as bright as it was at the supernova's peak.

    Still, things revved up during the apparent calm of the coasting phase. As the shock slammed past leftover gas in the cavity, it boosted a growing horde of electrons close to the speed of light. Those electrons, whirling along magnetic field lines, began to emit strong radio waves. “The acceleration was in first gear for a while, then second, but now we're in third gear,” says radio astronomer Bryan Gaensler of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts. “It's already brighter in radio than it was 1 week after the explosion.” The powerful blast has also scorched the sparse atoms in the round cavity and triggered high-energy x-rays, visible to both NASA's Chandra X-ray Observatory and the European Space Agency's XMM-Newton satellite.

    Monitoring of the radio emission by the Australia Telescope Compact Array, a network of six radio dishes in New South Wales, has revealed a distinct mass imbalance in the cavity. Most energy streams from the eastern and western sides, near the plane of the former star's equator. That jibes with a scenario in which the bloated star lost much of its outer atmosphere along its rotating midsection, rather than from its poles, before it collapsed and exploded. By the time the shock wave reached the edge of the remnant in the past few years, the extra gas near the cavity's equator had slowed it down to about 3000 kilometers per second. That concussion—still moving at 1% of the speed of light—is sparking the newest fireworks. The shock plows into the innermost fringes of a ring of dense gas that the star emitted some 20,000 years ago, perhaps in a single puff like a hobbit blowing a smoke ring with his pipe-weed. The milder but still violent collision heats atoms enough to produce low-energy x-rays. And as the gas cools in the wake of the shock, the atoms shine in visible light that the Hubble Space Telescope can catch.
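
    The speeds quoted above hang together; as a quick check, one-twentieth of light speed is about 15,000 kilometers per second, so the gas near the ring has braked the shock roughly fivefold, and

        \[
        \frac{3000\ \mathrm{km/s}}{3\times10^{5}\ \mathrm{km/s}} = 0.01 = 1\%.
        \]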

    Hubble sees 12 hot spots and counting. Indeed, the optical flares now span much of the ring. Few gaps larger than 45 angular degrees remain, according to a paper submitted to The Astrophysical Journal by Columbia graduate student Ben Sugerman and his colleagues. The team's analysis suggests that some spots flicker on time scales as short as 1 month. “We think the shock is hitting disconnected clouds or breaks in the matter,” says Crotts. Closely spaced observations with Hubble and with adaptive-optics systems on the ground will help astronomers use the shock to probe the structure of the inner remnant, he notes.

    As the blast wave strikes bigger arcs of the dense ring, the barriers will propel intense “reverse shocks” into the matter ejected by the supernova. Initially, that matter will consist mostly of the dead star's outermost layers. But within 5 to 15 years, the shock's echoes will sweep across elements created deep within the star, according to models by McCray and others. McCray expects that radioactive “shrapnel” of nickel-56 shot far out into the blast from the star's core, rapidly decaying into cobalt and then iron. The resulting heat created foamlike pockets of low-density iron, he believes, within the rest of the higher density ejecta. Infrared data from the Kuiper Airborne Observatory suggest that such foaming action did indeed occur: About 1% of the mass in the remnant's interior appears to fill half of the volume, McCray says.

    Blue supergiant or deadly merger?

    McCray is eager to witness another consequence of the shock wave's plowing into the entire ring. “It's clear that the ring is only the inner skin of a much larger volume of material that has remained invisible,” McCray says. “The ring will be a new light bulb. As it heats up, it will gradually ionize the rest of the structure so that we can see it in x-rays. It will blossom like a flower in the next decade.”

    This flowering will yield more than pretty pictures. By studying its form and the motions of gas and dust in the nebula—which may contain 10 times the mass of our sun—astronomers should unearth the history of the star that blew up. McCray hopes this “interstellar archaeology” will help resolve an ongoing debate about Supernova 1987A's origins.

    For years, astrophysicists maintained that the star ended life as a lone “blue supergiant,” a smaller and hotter cousin to the red supergiants believed to spawn most supernovae. According to this picture, slow-moving billows of matter from an earlier red supergiant phase interacted with intense winds from the final blue phase to create the inner ring of gas. However, other researchers proposed a rather different idea: A smaller companion star merged with the giant star 20,000 years before it exploded, spewing out a flattened disk of dust. Astronomers do see binary systems inside other stellar remnants, so it's cosmically feasible.

    Ring of fire.

    A dozen hot spots (arrows) have flared as the blast from Supernova 1987A plows into knobs of gas; soon, the entire ring of gas, shed by the doomed star 20,000 years earlier, will blaze.

    CREDIT: B. SUGERMAN ET AL./COLUMBIA UNIVERSITY AND SPACE TELESCOPE SCIENCE INSTITUTE.

    Neither scenario, however, explains a pair of faint, larger rings that Hubble sees above and below the inner dense ring, forming a rough outline of an hourglass. “The evidence for either idea is all circumstantial,” McCray says. “There is no proof of anything, and models have failed to create those structures.” When the distant cloud starts to glow beyond the faint rings, astronomers hope to retrace the behavior of the star—or stars—that cast the gas and dust into space.

    Supernova 1987A hides still other mysteries. Most notably, astrophysicists don't know whether a neutron star or a black hole now lurks at its core. Radio astronomers periodically check for pulses of radiation from a whirling neutron star, but so far the remnant is silent. A few researchers speculate that some of the innermost matter fell back into the heart of the blast, adding enough mass to tip the scales toward a black hole. However, a thick shroud of debris at the remnant's center may hide the mystery object for decades. Unless they spot a telltale blip from the core, few astronomers plan to devote much time to that quest for now.

    Instead, the shock wave and its violent impact are today's supernova growth industry. “This will keep astronomers busy for centuries,” says Stephen Lawrence of Hofstra University in Hempstead, New York. By that time, an even closer supernova may bare its secrets to the orbiting telescopes in Earth's future.

  16. ECOLOGY

    Disciplines Team Up to Take the Pulse of Tampa Bay

    1. Susan Ladika
    1. Susan Ladika is a science writer in Tampa, Florida.

    A team of more than 60 scientists is probing the health of Tampa Bay, reconstructing its ancient environment, and developing a better basis for resource planning. The project may be a pilot for other studies around the gulf

    St. Petersburg, Florida: Like ecological SWAT teams, researchers in motorboats have been skittering across Tampa Bay in the past few months. Their mission: examine sea-grass beds, measure water salinity, take core samples, and conduct a host of other studies under the largest multidisciplinary science project ever orchestrated by the U.S. Geological Survey (USGS).

    During the next 5 years, Tampa Bay will be among the most intensively studied coastal areas in the United States. Scientists will be drawing up maps of the sea floor, charting habitats, identifying the sources and quality of groundwater seeping into the bay, and reconstructing the region's ancient environment. The goal is to determine human impacts on the bay, a 1000-square-kilometer expanse that is one of the largest estuaries along the Gulf of Mexico. Such information should help managers “make scientifically sound decisions about resources in the bay,” says Kim Yates, the USGS project leader.

    All this activity may be just a warm-up: USGS hopes to extend the Tampa Bay project to other gulf estuaries, where urbanization comes head to head with recreational demands and the livelihood of commercial fishers. “We're trying to bring together information that decision-makers—federal, state, or local—need in order to make decisions on multiple uses of the land that we live on,” says Bonnie McGregor, eastern regional director of USGS.

    The project has benefited from friends in high places. USGS's plans to launch the project in 2001 were dealt a temporary setback when the Administration decided it wasn't a high enough priority to include initial funding in the president's 2001 budget request. Enter Representative C. W. Bill Young, a powerful Republican who chairs the House Appropriations Committee. Young, whose congressional district includes part of the Tampa Bay area, was instrumental in getting Congress to add $1 million to the 2001 budget and $2 million in 2002 for pilot studies. The White House has now climbed aboard: The 2003 budget proposal unveiled earlier this month would elevate the project to “fully operational” status and requests about the same level of federal funding as in 2002. Local agencies are kicking in an additional $5 million a year to the effort, which now involves more than 60 scientists from 13 agencies and universities.

    Bay watch.

    Two major cities, farming, sinkholes, and wetlands such as Terra Ceia Aquatic Preserve (top) all play a role in the health of Tampa Bay.

    CREDITS: USGS

    Scientists have already begun taking bathymetric soundings—something that hadn't been done in much of Tampa Bay for a half-century. This information has been combined with land data to create digital elevation models for the bay and its surroundings. Using charts from the 19th and early 20th centuries, researchers plan to develop historic digital elevation maps as well. That will allow them to gauge changes over time from processes such as erosion and shifts in rainfall patterns. Elevation changes influence the mix of seawater and fresh water in wetlands that fringe the bay and alter the circulation of sediments and contaminants in bay waters.
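
    Once the historic charts and the new soundings are gridded onto a common datum, the change analysis itself amounts to differencing the surfaces. A minimal sketch with made-up numbers, not Tampa Bay data:

        import numpy as np

        # Two gridded surfaces on the same grid, in meters relative to a common datum.
        # Values are placeholders for illustration, not Tampa Bay measurements.
        dem_1900s = np.array([[-2.0, -3.5, -4.0],
                              [-1.5, -3.0, -4.5],
                              [-1.0, -2.5, -5.0]])
        dem_today = np.array([[-2.3, -3.4, -4.6],
                              [-1.4, -3.2, -4.9],
                              [-1.2, -2.5, -5.3]])

        change = dem_today - dem_1900s        # negative values = deepening or erosion
        cell_area = 250.0 * 250.0             # hypothetical 250-meter grid spacing, m^2
        net_volume_change = change.sum() * cell_area

        print("elevation change (m):")
        print(change)
        print(f"net volume change: {net_volume_change:,.0f} m^3")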

    Researchers are also tracking the peculiar groundwater flow that characterizes the bay area. Aerial photographs from the 1920s reveal countless sinkholes that make the topography look like Swiss cheese. Landfills, power plants, golf courses, and housing developments have been built atop many sinkholes, which act as conduits to groundwater supplies. No one is quite sure what settles into them and eventually flows into the bay, says Lisa Robbins, the USGS project facilitator. Various techniques—including seismic profiling and light detection and ranging, using NASA aircraft—are being used to locate previously undetected sinkholes. Scientists then monitor isotopes of radium, strontium, and oxygen—which are present at different levels in groundwater, surface water, and seawater—to determine the sources and mix of water at various points around the bay.
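
    To see how such tracer measurements translate into source fractions, here is a minimal three-endmember mixing sketch. The endmember and sample values are hypothetical placeholders rather than USGS data, and a real analysis must also handle measurement uncertainty and tracers that are not perfectly conservative:

        import numpy as np

        # Hypothetical endmember tracer values (rows: two tracers, arbitrary units;
        # columns: groundwater, surface water, seawater). Not USGS data.
        endmembers = np.array([
            [8.0, 2.0, 0.5],   # tracer 1, e.g., a radium activity
            [0.1, 0.3, 9.0],   # tracer 2, e.g., a strontium or oxygen isotope proxy
        ])

        sample = np.array([3.5, 2.9])   # hypothetical bay-water measurement

        # Solve endmembers @ fractions = sample, subject to sum(fractions) = 1.
        A = np.vstack([endmembers, np.ones(3)])
        b = np.append(sample, 1.0)
        fractions, *_ = np.linalg.lstsq(A, b, rcond=None)

        for name, f in zip(["groundwater", "surface water", "seawater"], fractions):
            print(f"{name}: {f:.2f}")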

    To monitor changes in wetlands, researchers are comparing maps from the late 19th and early 20th centuries with recent aerial photographs and satellite images. They are also using sediment cores to probe back in time, tracking changes in sea level, climate, biomass, and geochemistry.

    One vexing observation now under intense study is why sea grass no longer grows in several areas near St. Petersburg where beds once flourished. “It's sort of a mystery,” says Holly Greening, senior scientist for the Tampa Bay Estuary Program. The grasses are vital to a healthy habitat, serving as nurseries for fish and invertebrates and as a food source for animals such as manatees and sea turtles. USGS is identifying where groundwater might be entering the site, looking for anything that may be affecting the grasses. Besides probing the die-off, the program hopes to restore 800 hectares of grasses.

    Insights into particular topics—such as the conditions that favor sea-grass growth—should help researchers understand the ecosystem as a whole. Nutrient levels, changes in elevation, and alterations in water circulation, for instance, all can have a tremendous impact on sea grasses and wetlands. “We can begin to understand the relationships between what are normally individual research focus topics,” says Yates.

    Data gathered under the project will be freely available at gulfsci.usgs.gov. Researchers working on similar projects will be watching closely. “We are following a lot of what [USGS] is doing as a template,” says Ernest Estevez, who is coordinating a similar study in nearby Charlotte Harbor, funded in part by the local Mote Scientific Foundation.

  17. SEISMOLOGY

    Working Up to the Next Big One

    1. Richard A. Kerr

    The chatter of faults breaking in moderate earthquakes may give warning of the larger quake to come in Southern California

    Earthquakes speak to one another. But what are they saying? That's an important question in Southern California, where scientists are trying to understand whether a resurgence of lesser earthquakes there heralds the big one or is meaningless babble.

    A pair of geophysicists has found that each of California's nine large quakes of the past 50 years was preceded by a rising chorus of regional seismicity. The finding prompts speculation that some major quakes could be anticipated years ahead. But in a field with a history of overreaching, researchers are probably more excited that the analysis is grounded in basic geophysics.

    “It's a significant step, a much more rational approach than anything done previously on seismicity patterns,” says fault mechanics specialist James Rice of Harvard University. “It's such a breath of fresh air, compared to what preceded it.”

    Attempts to read the meaning of regional seismicity patterns began modestly. In 1980, seismologist William Ellsworth of the U.S. Geological Survey in Menlo Park, California, noted an abundance of moderate quakes in the decades before the great 1906 San Francisco earthquake and their absence in the 50 years after the quake. The heightened regional seismicity before large quakes and quiescence after them became known as the seismic cycle.

    Russian seismologists led by Volodya Keilis-Borok, of the Institute of Earthquake Prediction Theory and Mathematical Geophysics in Moscow, then taught a computer algorithm to recognize patterns of unusual seismic activity that they thought appeared late in the seismic cycle over regions hundreds of kilometers across (Science, 15 March 1991, p. 1314). But the highly empirical—some would say mysterious—method failed to win over most U.S. seismologists.

    The seismic cycle started to make more sense, however, after seismologists began listening to conversations among earthquakes (Science, 16 February 1996, p. 910). In general, when a fault ruptures under stress and slips in an earthquake, stress levels increase beyond the tips of the rupture and decrease in broad areas on either side of the fault. The changes are greatest at the fault and decline with distance. Thus, one fault can relieve a distant one of some stress and delay its rupturing, if the second fault falls in the “stress shadow” around the first. Seismologist Lynn Sykes of the Lamont-Doherty Earth Observatory in Palisades, New York, showed, for example, how the great earthquake of 1857 cast an equally great stress shadow over much of Southern California. That stress shadow induced a broad, century-long seismic quiescence, at least among moderate and large quakes.

    Up, up, and away.

    A rising curve of seismic activity pointed toward the Northridge quake (arrow).

    CREDITS: (TOP TO BOTTOM) D. C. PIZAC/AP; SOURCE: D. BOWMAN AND G. KING, GEOPHYS. RES. LETT.

    Turning that theory into a more detailed forecast has proven difficult. Now, geophysicists David Bowman of California State University, Fullerton, and Geoffrey King of the Institute of Earth Physics in Paris have refined the search for seismicity patterns and applied their method to all nine large (magnitude 6.5 and greater) earthquakes in California in the past 50 years. They reasoned that the clearest signals would come from listening to seismicity changes within the original stress shadow of a fault's most recent large quake, a region they could calculate by running the quake in “reverse” in a computer and observing the stress changes.

    When Bowman and King analyzed California seismic records, they found that all nine large earthquakes were preceded by not just heightened seismic activity but increasing activity that, in hindsight at least, accelerated toward the large quake. In eight of the nine cases, the rate of acceleration seemed to point toward rupture of the fault within a year or less of the actual time of the quake. The “hindcast” for the time of the ninth quake, the Northridge quake in 1994, was off by 18 months.
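
    Hindcasts of this kind are commonly framed as fits of cumulative seismic release to a power-law "time-to-failure" curve, with the failure time read off as a fit parameter. The sketch below applies that generic formulation to synthetic data; it is an assumption about the general approach, not a reproduction of Bowman and King's analysis:

        import numpy as np
        from scipy.optimize import curve_fit

        def accel_release(t, A, B, t_f, m):
            # Power-law time-to-failure form often used for accelerating seismic
            # release: the cumulative release bends upward as t approaches t_f.
            return A - B * np.maximum(t_f - t, 1e-6) ** m

        # Synthetic cumulative-release data accelerating toward a failure time of
        # 50.0 (arbitrary units); all values are made up for illustration.
        rng = np.random.default_rng(0)
        t_obs = np.linspace(0.0, 45.0, 60)
        y_obs = accel_release(t_obs, A=10.0, B=1.0, t_f=50.0, m=0.3)
        y_obs = y_obs + rng.normal(0.0, 0.03, t_obs.size)

        # Fit the curve and read off the implied failure time t_f.
        p0 = [9.0, 1.2, 55.0, 0.4]
        popt, _ = curve_fit(accel_release, t_obs, y_obs, p0=p0, maxfev=20000)
        print(f"hindcast failure time: {popt[2]:.1f}  (true value used above: 50.0)")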

    Seismologists insist on a plausible physical mechanism before they'll accept a forecasting method. Bowman and King think they have found one in the shrinking of the stress shadow. It is deepest and “darkest” at its center, so as the steady grinding of tectonic plates slowly adds stress across the region, the first faults to return to stress levels near the breaking point are on the fringes of the stress shadow. Renewed activity begins there and extends inward as the shadow shrinks. The accelerating intensity of seismicity reflects the increasing area subjected to fault-rupturing stresses as the quake-dampening shadow shrinks. As the shadow edge approaches the main fault, which has remained quiet through the seismic cycle, it triggers foreshocks and then the main shock.

    Researchers are generally relieved to see the new work. “This is the right approach, one based on the physics of stress transfer,” says seismologist Bernard Minster of the Scripps Institution of Oceanography in La Jolla, California. “If confirmed, it is a major step toward understanding the physics of earthquakes.” Seismologist Stefan Wiemer, of the Swiss Federal Institute of Technology Zurich, is “more skeptical. It comes from my experience with earthquake prediction research. The next logical step is … predictions of the next earthquake. That's when things often fall apart.”

    In fact, Bowman and King have applied their method to the two long segments of the San Andreas fault that broke in the great 1906 and 1857 quakes. They found no seismic acceleration in the San Francisco Bay region but did find it in Southern California. The fault seems to be building toward another rupture there, but, as King notes wryly, “prediction is particularly difficult when it concerns the future.”

    The curve-fitting that the researchers did to hindcast past earthquakes so accurately simply predicts that the main event in Southern California will begin shortly after the seismic record ends. And their approach depends on knowing which fault segment is liable to fail next, knowledge that has been lacking in most recent large quakes. So the next steps must be more geologic studies of faults and their history of rupture.

  18. AMERICAN ASSOCIATION FOR THE ADVANCEMENT OF SCIENCE ANNUAL MEETING

    Human Gene Count on the Rise

    1. Ben Shouse*
    1. Ben Shouse, a former Science intern, is an intern at The Nation.

    Boston: The annual meeting of the American Association for the Advancement of Science (publisher of Science), held from 15 to 19 February, included symposia across the scientific disciplines. According to one presentation, the number of human genes may actually be much closer to the early prediction of 70,000 genes than to the much smaller number predicted when the draft sequence was published last year. This story and the following one are a sampling from the early sessions; more coverage will be published next week.

    When the draft human genome sequence was completed last year, a computer analysis suggested that the number of genes was shockingly small. Now, an experimental approach suggests that the number may actually be much closer to the early prediction of 70,000 genes, according to a presentation on 16 February.

    The all-day session started innocently enough. Eric Lander of the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts, a leader of the Human Genome Project, told a standing-room-only audience that his best guess was still that there are about 32,000 human genes. That's based on the use of specialized computer software to identify genelike sequences in DNA.

    ILLUSTRATION: C. SLAYDEN

    Later in the day, Victor Velculescu mounted a small rebellion by raising the gene count. He and his colleagues at Johns Hopkins University in Baltimore, Maryland, have gone back to the lab to look for genes that the computer programs may have missed. Their technique, called serial analysis of gene expression (SAGE), works by tracking RNA molecules back to their DNA sources. After isolating RNA from various human tissues, the researchers copy it into DNA, from which they cut out a kind of genetic bar code of 10 to 20 base pairs. The vast majority of these tags are unique to a single gene. The tags can then be compared to the human genome to find out if they match up with genes discovered by the computer algorithms. Velculescu said that only roughly half of the tags match the genes identified earlier—evidence, he says, that the human inventory of genes had been underestimated by about half.
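
    A minimal sketch of the tag-matching idea, assuming the classic SAGE anchoring-enzyme site (CATG) and ten-base tags. The sequences are toy examples; a real pipeline works genome-wide and copes with sequencing errors, ambiguous tags, and strandedness:

        def sage_tag(cdna, anchor="CATG", tag_len=10):
            # Return the short "bar code" immediately downstream of the 3'-most
            # anchoring-enzyme site, as in classic SAGE.
            pos = cdna.rfind(anchor)
            if pos == -1:
                return None
            tag = cdna[pos + len(anchor):pos + len(anchor) + tag_len]
            return tag if len(tag) == tag_len else None

        # Toy data: cDNAs copied from expressed RNAs, and the gene sequences that a
        # prediction program flagged. All sequences are invented for illustration.
        cdnas = [
            "GGTACCATGAAGTCCGGTTAACC",
            "TTCATGCCCGGGAAATTTCCGTA",
            "AAACATGTTTGGGCCCAATTGCA",
        ]
        predicted_genes = [
            "CCCATGAAGTCCGGTTAAGGTT",   # contains the tag from the first cDNA
            "GGGTTTAAACCCGGGTTTAAA",    # contains neither of the other tags
        ]

        tags = [t for t in (sage_tag(c) for c in cdnas) if t]
        matched = sum(any(t in gene for gene in predicted_genes) for t in tags)
        print(f"{matched} of {len(tags)} tags match predicted genes")
        # If only about half of the real tags match, the predicted gene catalog is
        # presumably missing a comparable number of genes.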

    The reason for the disparity may be that the standard computer programs were largely developed for the genomes of simple (prokaryotic) organisms, not for the more complex sequences found in the genomes of humans and other eukaryotes. “We're still not very good at predicting genes in eukaryotes,” said Claire Fraser of The Institute for Genomic Research in Rockville, Maryland. It's entirely possible that there could be more than 32,000 genes, and SAGE is an important approach to finding them, she says: “You absolutely have to go back into the lab and get away from the computer terminal.”

  19. AMERICAN ASSOCIATION FOR THE ADVANCEMENT OF SCIENCE ANNUAL MEETING

    A Whale of a Chain Reaction

    1. Jay Withgott*
    1. Jay Withgott is a science writer in San Francisco.

    Boston: The annual meeting of the American Association for the Advancement of Science (publisher of Science), held from 15 to 19 February, included symposia across the scientific disciplines. At one presentation, a researcher suggested that whaling decades ago may have triggered a cascade of ecological events that ultimately wiped out kelp forests off Alaska's coast. This story and the previous one are a sampling from the early sessions; more coverage will be published next week.

    Whaling decades ago may have triggered a cascade of ecological events that ultimately wiped out kelp forests off Alaska's coast, a researcher suggested here on 15 February. The idea—which runs counter to conventional wisdom—suggests that overharvesting of top marine predators can have far-reaching impacts throughout an ecosystem.

    Researchers have been struggling for years to explain why kelp forests have crashed along Alaska's Aleutian archipelago. Many researchers believe that overfishing or climate change drove down fish and sea lion populations, forcing orcas—the region's top predator—to prey on less desirable sea otters instead. Fewer sea otters, in turn, led to a boom in sea urchins, spiny invertebrates that then overgrazed the region's lush kelp forests, leaving behind empty “urchin barrens” (Science, 16 October 1998, p. 473).

    Alaskan collapse.

    Orcas turned to smaller prey when other whales were decimated by whaling after World War II.

    CREDIT: KENNAN WARD/CORBIS

    But James Estes of the University of California, Santa Cruz, has put a new twist on the story. Orcas, he suggests, turned to seals and sea lions only after their original prey, the great whales, were decimated by the whaling industry after World War II. The decline of the great whales forced orcas to “fish down the food chain,” shifting to smaller and smaller prey as human whalers claimed the larger animals. Estes calculates that current declines in Steller's sea lions could be explained by as few as 18 orcas eating only that species, or by a mere 1% shift in diet among all orcas.

    “As this line of reasoning has developed, I've found it more and more compelling,” said Elliott Norse of the Marine Conservation Biology Institute in Redmond, Washington. But not all scientists are satisfied with the hypothesis. Vincent Gallucci of the University of Washington, Seattle, suggests that a growing population of sharks precipitated the kelp decline by competing with sea lions for food or preying on pups.
