News this Week

Science  27 Feb 1998:
Vol. 279, Issue 5355, pp. 1294


    Microsoft Researches Its Future

    1. Dana Mackenzie
    1. Dana Mackenzie is a mathematics and science writer in Santa Cruz, California.

    The software giant is betting $200 million a year that research in esoteric areas of physics and mathematics will yield breakthroughs in personal computing

    Seattle—Boot up a copy of Microsoft's Office 97, and an animated cat—or, if you prefer, a paper clip or a cartoon of Albert Einstein—strolls across your screen and settles down. Before long, the cat is kibitzing: “It looks like you're writing a letter. Would you like help?” it asks, in a cartoon-style balloon. The cute critter may seem like just another gimmick, but it's one of the first commercial products to come out of Microsoft's growing research department—and a sign of change in the way the world's largest personal-computing software company prepares for the future.

    In the past, Microsoft has been slow to catch on to the hottest trends in personal computing. The first operating system with windows and icons was invented at Xerox Palo Alto Research Center (PARC) in California a decade before Microsoft released its first version of Windows—and these were standard features of Apple computers long before Microsoft caught up. Netscape enjoyed a head start of at least 2 years over Microsoft's Internet Explorer in the well-publicized “browser wars.” Although Microsoft has been remarkably successful at what chair Bill Gates calls “embracing and extending” the ideas of others, it would like to be first off the blocks when the Next Great Thing comes along. To improve its chances, it is assembling a star-studded research department in the mold of such famous corporate research laboratories as IBM, Xerox PARC, and Bell Laboratories.

    “Microsoft Research is unique in that it is the first software-centered lab set up by a software company,” says Nathan Myhrvold, the chief technology officer of Microsoft. Founded in 1991, the laboratory already rates seventh among all academic and corporate research labs in computer science, according to an informal survey of 200 computer scientists conducted by Business Week. Myhrvold estimates the division's budget at about 2% of Microsoft's revenues, or about $200 million. It has grown to include 250 scientists in over 20 research groups, and this year it added an $80 million branch in Cambridge, England (Science, 20 June 1997, p. 1783). Its growth has placed Microsoft on the short list of companies to recognize that research “makes both business sense and research sense,” says John Seely Brown, director of the 250-member Xerox PARC. “It is a wonderful event, and I highly applaud it.”

    Observers credit the sandy-haired, multitalented Myhrvold for getting Microsoft Research off to a flying start. “Nathan's a wonderfully charismatic and brilliant guy,” says Brown. Myhrvold has authentic whiz-kid credentials: After earning a Ph.D. in physics from Princeton University at age 23, he studied briefly under Stephen Hawking at Cambridge University before launching a software company called Dynamical Systems. The company was bought out by Microsoft in 1986; 5 years later, Myhrvold himself proposed the new research division to Bill Gates.

    Although Microsoft's high pressure and burnout rate are legendary, the atmosphere at Microsoft Research is low-key, academic, and collaborative. Small, oddly shaped rooms and open doors make it as easy to have a conversation in the doorway or hallway as in the office. Ties and business cards are out; Dilbert cartoons and whiteboards are in. Within broad limits, scientists are free to work on any project they want to. According to physicist Jennifer Chayes of the Theory Group, who was formerly at the University of California, Los Angeles, “I felt unquestionably more pressure to do applied research for my grant [at UCLA] than I do here. … Here, if I work on an application it's because I'm really excited about it.”

    The combination of benevolent oversight and a long time frame has led Microsoft Research into areas of basic science that seem far removed from the personal computing world of today—quantum field theory, decision theory, and statistical physics. The ultimate goal is a computer (or operating system) that will be as easy to interact with as a human—just by talking. It will not require special words, special commands, or even special input devices, such as a keyboard or mouse. This is what Brown calls “radical research”—more cross-disciplinary and technology-oriented than “basic” research but more visionary than “applied” research. Although the humanlike computer, for example, may sound like science fiction, Myhrvold is banking on technology's version of the trickle-down theory: If the company works toward that distant goal, he says, then shorter term benefits will drop out.

    Take that cat—or Office Assistant, as it is formally known. Behind that reassuring figure, says developer Eric Horvitz of the Decision Theory Group, lies a Bayesian inference engine that “generates a probability distribution over the problem a user might be having before a query is input.” Thus, Office Assistant can evaluate the probability that a user is writing a letter (a probability that goes way up when it encounters the words “Dear John”) and anticipate that the user might want to know about Office 97's letter formats. The cartoon figure, for its part, comes from research by the User Interface Group on how humans interpret gestures and expressions—research that will be needed by that far-future computer to make sense of the vagaries of human communication.
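    Horvitz's description can be made concrete with a toy Bayesian update. The sketch below is not Microsoft's engine; the goals, priors, and likelihood numbers are invented purely for illustration:

    ```python
    # Toy Bayesian goal inference, in the spirit of the Office Assistant's
    # engine. All probabilities here are invented for illustration.

    def posterior(prior, likelihoods, evidence):
        """Update P(goal) given one piece of evidence via Bayes' rule."""
        unnorm = {g: prior[g] * likelihoods[g].get(evidence, 0.01)
                  for g in prior}
        total = sum(unnorm.values())
        return {g: p / total for g, p in unnorm.items()}

    # Hypothetical prior over what the user is doing.
    prior = {"writing_letter": 0.2, "making_table": 0.3, "other": 0.5}

    # Hypothetical likelihoods P(evidence | goal).
    likelihoods = {
        "writing_letter": {"typed_dear_john": 0.9},
        "making_table":   {"typed_dear_john": 0.01},
        "other":          {"typed_dear_john": 0.05},
    }

    post = posterior(prior, likelihoods, "typed_dear_john")
    # After "Dear John", the letter-writing hypothesis dominates.
    ```

    Seeing “Dear John” multiplies the letter-writing prior by a large likelihood ratio, which is exactly why the Assistant's confidence “goes way up.”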

    By some measures, Microsoft Research is just getting started. For example, the 199 patents it received in 1997 are dwarfed by the 1724 patents awarded to industry leader IBM. But the company has already made its mark in one way: its ability to snare eye-catching talent. The “look who's at Microsoft now” club includes Michael Freedman, formerly of UC San Diego, who is a MacArthur Fellow and a winner of the Fields Medal (considered the mathematical equivalent of the Nobel Prize) for his work in topology. Other stellar recruits include James Blinn, a MacArthur Fellow for his work at the California Institute of Technology in educational animation, and Gary Starkweather, the inventor of the laser printer, snared from Xerox PARC. Myhrvold explains the quality and diversity of the staff this way: “We're willing to believe in a discipline and get a collection of people together that wouldn't have happened otherwise. We can create a hell of an interesting environment.”

    When Myhrvold decided that the area of Bayesian statistics sounded promising, for example, he says he “set out to track down the best guys in the field.” David Heckerman, a decision theorist, recalls what happened then. “My dissertation [on Bayesian networks] won the Association for Computing Machinery doctoral dissertation award. It caught Nathan's eye.” At first, Heckerman says, he told Myhrvold, “There's no way you're going to get me up here, but I have this company [Knowledge Industries] with Eric [Horvitz] and Jack [Breese], and they might want to join.” Several visits later, Myhrvold persuaded Heckerman to come—and Horvitz and Breese as well. They now form one-third of Microsoft's Decision Theory Group, which created Office Assistant.

    Another expanding group is the Theory Group, which recently brought Freedman aboard. Freedman says that Chayes, the co-founder of the group, came up with the name “Theory” as a euphemism for “Mathematics.” But “Theory” may be the only term broad enough to encompass the group's interests. Freedman started out as a topologist, but his career took a detour into the theory of computational complexity by way of knot theory. Chayes and her husband Christian Borgs, on the other hand, are statistical physicists. Yet all three of them are bringing their insights to bear on solving “NP-hard problems,” a hard nut at the center of computer science that bears on challenges from scheduling airlines to cryptography (see sidebar).

    From concept to product

    Such cross-disciplinary stimulation helps attract researchers to Microsoft—as does the company's ability to make young researchers into millionaires through stock options. Researchers there say they are also exhilarated by the speed at which a new idea can be transformed into software. As Myhrvold says, “If someone has a new idea, we can take that idea and put it in the hands of 100 million people.”

    One example is Comic Chat—an idea born out of the graduate dissertation of David Kurlander of the User Interface Group. Kurlander proposed that the history of an Internet chat session could be portrayed in a comic strip, making it easy for newcomers to the session to see what had gone on before their arrival. The idea went from concept to product in just 9 months and is now a regular feature of Microsoft's Internet Explorer.

    Ideas reach Microsoft's product division through official channels—a Technology Applications office—and direct contacts with researchers. Indeed, Kurlander was so committed to Comic Chat that he had himself transferred to a product group to see it through. Myhrvold adds that the proximity of research and product divisions on the same campus has helped produce a culture that encourages researchers to ponder the possible commercial applications of their work, no matter how abstract it may seem.

    Yet researchers at Microsoft say they face few restrictions on their freedom to publish results. Unlike many other companies, including IBM and Xerox, Microsoft has no review process for papers and no intellectual property department. Researchers are simply expected to file their own patents when necessary. The benefits of this nonpolicy, in Myhrvold's view, are the automatic quality control that comes from peer review and the freedom to exchange ideas with outside researchers. The Theory Group, for example, plans to host regular visitors from academia for periods of anywhere from 1 day to 12 months. And Microsoft researchers often serve as de facto advisers for graduate students at the University of Washington. “People worry, ‘That means we're going to lose some ideas,’” says Myhrvold. “Well, I've found that people who are too afraid of losing ideas are people that don't have very many.”

    Research leaders elsewhere might not be that sanguine. But Microsoft does not seem to be alone in its strategy of keeping the research focused on a problem relevant to the business—such as creating a computer with a completely intuitive interface—while letting people take any approach to it they want. Andrea Califano, the manager of computational biology at IBM Research, says the atmosphere there has changed “quite significantly” since the company's financial crisis in the early '90s, from a pseudoacademic environment where publishing papers was the only requirement to a more technology-driven model. Still, Califano says, “at least 50% to 60% of the work that we do would be very basic science.” Michael Garey, director of mathematical research at Lucent Technologies Bell Laboratories, complains about the “misperception” that his company now does only applied research. “It hasn't gotten any less fundamental. We think in terms of building an intellectual foundation for a technological area we see as important to the company down the road.”

    Now that Microsoft is a convert to that notion, Myhrvold says it's time for other technology companies to take a longer view as well. “Most have lots of what I call ‘r&D’—little r, big D. Or even no r, in the sense that they do no pure research.”


    Solving 'Hard' Problems—or Dodging Them

    1. Dana Mackenzie

    Theoretical mathematics and physics may seem far removed from the challenge of building better software. But in an example of Microsoft Research's willingness to gamble on basic research (see main text), one of its newest research groups is looking to those disciplines for approaches to some of computer science's toughest problems.

    Computer scientists have found that certain types of problems, called NP-hard, are especially intractable, because the time it takes to solve the most difficult examples of an NP-hard problem seems to grow exponentially as the amount of input data increases. These problems range from the utterly pure (telling mathematical knots apart) to the highly practical (scheduling airline flights so that two planes don't need the same runway at the same time). Not only are all NP-hard problems equally refractory, but they are actually equivalent: An algorithm to solve one of them in subexponential time could be converted into an algorithm to solve any of them.

    Because of this equivalence, researchers can work on any NP-hard problem they want to. Michael Freedman, a topologist who is the newest member of the Theory Group, studies the Jones polynomial—a tool for telling knots apart. As in any NP-hard problem, it takes exponentially longer to compute the Jones polynomial of a knot as the number of overpasses and underpasses in the knot increases. But physicist Ed Witten of the Institute for Advanced Study in Princeton, New Jersey, showed in 1988 that the Jones polynomial could be directly measured for knots in a topological quantum field theory—one of a group of theories describing the microscopic structure of the vacuum. Thus, in some as yet unimaginable “quantum field computer”—a device that would, in effect, compute with the quantum vacuum—Freedman believes it may be possible to “enter the den of the exponential dragon” and slay it.

    His colleagues, statistical physicists Jennifer Chayes and Christian Borgs, are instead looking for ways to detour around the dragon's lair. Their favorite NP-hard problem, called three-satisfiability or 3-SAT, is a type of logical problem similar to the popular mystery game “Clue,” but with clues written by an incompetent author. The goal is not necessarily to find the murderer(s) but to determine whether any of the suspects could possibly satisfy all the clues. More suspects make it more likely that someone fits the description of the murderer; if there are too many clues, it is likely that no one does. At either extreme, the problem is easy to solve. In between, when the ratio of clues to suspects is about 4.2, a “phase transition” occurs, and the two possibilities are about equally likely (Science, 27 May 1994, pp. 1249 and 1297). At that point, determining whether the murderer can be identified rises to a peak of difficulty.
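    The phase transition is easy to see in a small numerical experiment. The brute-force sketch below (fine only for tiny instances, since it enumerates all 2^n assignments—the exponential cost described above) generates random 3-SAT formulas at a chosen clause-to-variable ratio and measures the fraction that are satisfiable:

    ```python
    import itertools
    import random

    def random_3sat(n_vars, n_clauses, rng):
        """Random 3-SAT formula: each clause picks 3 distinct variables,
        each negated with probability 1/2. Literal v means variable v is
        true, -v means it is false (variables numbered 1..n_vars)."""
        clauses = []
        for _ in range(n_clauses):
            vs = rng.sample(range(1, n_vars + 1), 3)
            clauses.append([v if rng.random() < 0.5 else -v for v in vs])
        return clauses

    def satisfiable(n_vars, clauses):
        """Brute force: try all 2^n truth assignments."""
        for bits in itertools.product([False, True], repeat=n_vars):
            assign = dict(enumerate(bits, start=1))
            if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True
        return False

    def sat_fraction(n_vars, ratio, trials=100, seed=0):
        """Fraction of random formulas satisfiable at a clause/variable ratio."""
        rng = random.Random(seed)
        n_clauses = round(ratio * n_vars)
        return sum(satisfiable(n_vars, random_3sat(n_vars, n_clauses, rng))
                   for _ in range(trials)) / trials
    ```

    Sweeping the ratio from about 2 toward 8 for small n shows the satisfiable fraction falling from near 1 to near 0, with the crossover near the ~4.2 threshold the article cites.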

    Chayes and Borgs are now trying to determine how to identify similar phase transitions in other NP-hard problems—such as scheduling airplanes. It might then be possible to predict where the critical regimes are and avoid them, and abstruse physics might have a very practical payoff for software design.


    Reports Call for New Super-Accelerator

    1. David Kestenbaum

    Physicists in the United States have been understandably timid about asking for a major new accelerator. The debacle of the Superconducting Super Collider (SSC), which Congress canceled in 1993 when it was already under construction, is still fresh in their minds. And it took years of negotiation to arrange a consolation prize: U.S. participation in what will be the highest energy collider ever built, the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland. But the message was clear in two U.S. reports on the state of particle physics released last week: Another collider will be needed if physicists are to assemble a complete picture of the particles and forces that constitute the world.

    Plans for some kind of successor or companion to the LHC—which hurls protons against protons—have been in the works since the mid-1980s. But in a once-a-decade review of the field prepared by the National Research Council (NRC) and a draft report to the Department of Energy's (DOE's) influential High Energy Physics Advisory Panel (HEPAP), physicists have made their most public plea yet for a new machine. “The LHC is set, so the stage is now open for the next [machine],” says Columbia University physicist William Willis, a member of the HEPAP panel.

    On the wish list are three very different devices: a scaled-up LHC, called a Very Large Hadron Collider; a 30- to 50-kilometer-long Next Linear Collider (NLC) that would smash electrons together; or an even more fanciful device that would collide muons—the electron's short-lived, heavy brothers (Science, 9 January, p. 169). Physicists have been exploring all three possibilities, and some portions of the NLC have even been bench-tested. Any of the three options will cost over a billion dollars and take a decade or more to plan and build.

    But such a behemoth will be essential for moving beyond the existing picture of the subatomic world, says University of Chicago physicist Bruce Winstein, who chaired the NRC committee. The missing piece of the decades-old theory called the Standard Model is an account of how particles get mass. Most physicists think a hypothetical particle called the Higgs boson works behind the scenes to confer mass. Although physicists expect the Higgs to tumble out of the collisions in the LHC, sightings are expected to be rare at the energies the accelerator can achieve. And occasional glimpses won't be enough, because the Higgs is expected to lead the way beyond the Standard Model to an even more fundamental theory of particles and forces.

    Finding out which, if any, of several candidate theories is right will take a more detailed investigation of the Higgs—and any other particles that turn up—than will be possible at the LHC, the reports say. The electron and muon colliders would give physicists a cleaner environment for studying the new particles, as these collisions produce less debris than do the proton collisions of the LHC. And a supersized LHC would generate higher energy collisions, allowing physicists to search for still more particles.

    But the specter of the failed SSC clearly darkens the pages of both reports. The SSC cost rose billions of dollars over initial estimates, to nearly $12 billion just before its demise, and the project came up short in attracting funding from other countries. The SSC “cannot happen again,” says Cornell physicist Persis Drell, who helped draft the NRC report. “If nothing else, that is branded on our foreheads.”

    The key to success, physicists hope, will be an affordable price tag and a global effort. Building even the NLC (the furthest along of the three options) will be “damn difficult and very, very expensive,” says Donald Shapero, director of the NRC board on physics and astronomy. “This is going to be an international game from now on. No one country is going to be able to contemplate doing it alone.”

    Yet the NLC camp is already divided, members of the NRC committee note. While physicists at the Stanford Linear Accelerator Center (SLAC) and Japan's KEK laboratory have been working in close concert on a design that uses conventional radio-frequency cavities to accelerate the electrons, Germany's DESY lab is hard at work on a plan that uses superconducting technology. That's a bad omen for future collaboration, some say. “Each lab gets committed to its own technology,” explains Michael Riordan, assistant to the director at SLAC. “You have this kind of technological inertia.”

    Overcoming that inertia, and bringing all the labs together, may take the creation of a “world HEPAP,” says Peter Rosen, DOE's associate director for high-energy and nuclear physics. It may have to come soon. Already, DESY director Bjorn Wiik has made the rounds at towns near Hamburg to build support for an underground tunnel that could house the NLC. Wiik says he is simply preparing the ground for DESY's proposal. But many say that raising the issue of the NLC's site at this early stage is risky, because each lab is likely to want the new accelerator in its own backyard. “I have to say, quite frankly, it's not good for international collaboration,” says Rosen.


    Northern Europe Tops in High School

    1. Gretchen Vogel

    The Winter Olympics are over, but this week many of the countries at the top of the medals chart could claim another victory in worldwide competition: Their high school students were the top performers in the latest results from the Third International Mathematics and Science Study (TIMSS). Unfortunately for the United States, its solid standing in Nagano was not replicated on TIMSS. Instead, U.S. high school seniors performed near the bottom in general science literacy, were second to last in advanced mathematics, and brought up the rear in advanced physics. The results “debunk the myth that our best and brightest are still the best in the world,” says Larry Suter of the U.S. National Science Foundation's education directorate. “There is no evidence here that any of that is true.”

    The new results are the third in a series of international assessments of student performance in mathematics and science. The first showed Singapore, Japan, Korea, and the Czech Republic at the top of the heap among seventh- and eighth-grade students (Science, 22 November 1996, p. 1296). The second test, for third- and fourth-graders, featured the same countries, plus strong showings by Hong Kong in mathematics and the United States in science (Science, 13 June 1997, p. 1642). Asian countries did not participate in the latest assessment, however, citing the intense pressure on their students in the senior year to prepare for college entrance exams. “We naturally were disappointed,” says TIMSS international study director Albert Beaton of Boston College. “My guess is they would have done very well.”

    Northern highlights.

    Scandinavia dominates general literacy test, with the U.S. well below average and Asia not participating.


    The latest report includes the results of three tests. An assessment of general mathematics and science literacy—given to students from both academic and vocational tracks—included questions on basic algebra, proportionality, estimation, life science, physical science, and earth science. A second test assessed students in advanced mathematics courses [those in precalculus, calculus, or advanced placement (AP) calculus in the United States]. And a third assessed students taking advanced physics (either physics or AP physics in the United States). The Netherlands and Sweden scored highest on the general literacy exam, while France and the Russian Federation outpaced 14 other countries on the advanced mathematics test. Norway, Sweden, and the Russian Federation had the highest scores among the 16 countries whose students took the advanced physics exam.

    Although educators praise the latest findings, they provide few simple answers about why some countries do better than others. As with the other exams, top scores did not correlate directly with any of the factors commonly associated with student performance. Students in high- and low-scoring countries spent about the same amount of time in math and science classes, had similar amounts of homework, and watched about the same amount of television. “The study just doesn't have a lot of new insights into why,” Suter says.

    The new test also found that, once again, boys scored better than girls did. The differences were not consistent in all countries, however, ranging from 17 points in the United States to 57 points in Norway. That result “will make an uproar in this country,” says Svein Lie, a science education professor at Oslo University and head of Norway's TIMSS project.

    Pressed for possible factors contributing to his country's high ranking, Lie notes that Scandinavian students typically start school a year later than most of the rest of the world and are a year older when they graduate. Barbara Wennerholm of the National Agency for Education in Stockholm, Sweden, points to the homogeneity of the Swedish system, in which students on vocational tracks have the same science teachers as those on academic tracks.

    For U.S. officials, the results reinforce the unhappy lessons of the earlier tests. A recent analysis of the elementary and middle-school results shows that U.S. students decline in almost all subject areas between the fourth and eighth grades. As a result, says William Schmidt of Michigan State University in East Lansing, “you have to do remarkable work at the high school level to make up for that.”


    Astronomers See a Cosmic Antigravity Force at Work

    1. James Glanz

    Seemingly in defiance of common sense, space itself appears to be permeated by a repulsive force that is counteracting gravity on large scales. That, at least, is the reluctant conclusion of an international team of astronomers who have used the brightness of distant exploding stars called supernovae to gauge how cosmic expansion has changed over time. Gravity should have gradually slowed that outward rush. But as team member Alexei Filippenko of the University of California, Berkeley, announced at a meeting near Los Angeles last week,* the dimness of the supernovae—pointing to unexpectedly great distances—implies that cosmic expansion has actually sped up in the billions of years since the stars exploded.

    “My own reaction is somewhere between amazement and horror,” says Brian Schmidt of the Mount Stromlo and Siding Spring Observatory in Australia, who leads the group, called the High-z Supernova Search Team. “Amazement, because I just did not expect this result, and horror in knowing that [it] will likely be disbelieved by a majority of astronomers—who, like myself, are extremely skeptical of the unexpected.” But after intense efforts to account for the dimness with prosaic effects such as dust in the cosmos or some intrinsic dimness of those remote explosions, says Schmidt, the team concluded with a statistical confidence of between 98.7% and 99.99% that cosmic expansion is receiving an antigravity boost.

    Astronomers expressed caution over what would be a momentous turn of events, saying there could be still-undiscovered differences between galaxies now and billions of years ago—and hence in the brightness of the supernovae they host. “Even the most conservative explanations for the results are quite amazing,” says Rocky Kolb, a cosmologist at the University of Chicago who attended Filippenko's talk. A cosmic repulsion “would be such a fundamental result that I think everyone should reserve judgment.” No one, however, is arguing with the data themselves: Just last month, an independent team presented data from another set of distant supernovae that suggested, more tentatively, an acceleration of roughly the same amount (Science, 30 January, p. 651). “This is what the observations are telling us,” says Filippenko.

    The discovery of an accelerating universe would have a major impact on the reigning theory of how the big bang got started. In the simplest version of this theory, known as inflation, the universe contains just enough matter to make it geometrically “flat”—a mass density that would also slow the cosmic expansion to a halt, given infinite time. Earlier supernova results and other measures have shown no sign of the gravitational brake that so much mass would apply (Science, 31 October 1997, p. 799). But because both matter and energy can curve space-time, a mysterious background energy—which Albert Einstein named the cosmological constant, or lambda—might make up the deficit and flatten the universe again. This background energy would push rather than pull, speeding up the cosmic expansion over time.

    Physicists do not have a good explanation for the source of the energy; it could somehow be related to the fleeting “virtual” particles that quantum theory says wink in and out of existence in empty space. But some cosmologists have been drawn to the concept, in part because it would be compatible with more refined versions of inflation that would not require a radical overhaul of the theory. So astronomers have gone looking for lambda by trying to detect its influence on cosmic expansion.

    The High-z team probes the expansion with distant supernovae. The team electronically subtracts images of the same regions of the sky, taken weeks apart, to find new supernovae of a class called type Ia, thought to occur when a white dwarf star rips so much material from a nearby companion star that the dwarf suddenly explodes like a giant thermonuclear bomb. They then record the gradual brightening and fading of each one with the Hubble Space Telescope or with ground-based instruments.

    Although type Ia's do not all reach exactly the same peak brightness, the variation can be corrected for: Those that fade more quickly are less luminous. The correction allows type Ia's to serve as approximate “standard candles,” whose apparent brightness is a measure of their distance. The team compares those distances to the “redshifts” of the light—a measure of how fast cosmic expansion is sweeping the supernovae outward—to gauge the expansion when they exploded, as much as halfway back to the big bang.
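    The standard-candle arithmetic is just the inverse-square law written in astronomers' magnitudes. A minimal sketch with illustrative numbers (the apparent magnitudes here are invented; M ≈ -19.3 is a commonly quoted peak absolute magnitude for type Ia's, not a figure from the article):

    ```python
    M_PEAK = -19.3  # approximate absolute magnitude of a type Ia at peak

    def luminosity_distance_mpc(apparent_mag, absolute_mag=M_PEAK):
        """Invert the distance modulus m - M = 5 * log10(d / 10 pc)."""
        d_parsecs = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
        return d_parsecs / 1e6  # parsecs -> megaparsecs

    # A supernova ~0.25 mag fainter than expected works out to be
    # ~12% farther away than expected:
    d_expected = luminosity_distance_mpc(24.0)
    d_observed = luminosity_distance_mpc(24.25)  # 0.25 mag fainter
    excess = d_observed / d_expected - 1
    ```

    A quarter of a magnitude of unexplained dimming thus translates into roughly 12% of extra distance—the size of the anomaly under discussion here.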

    What the team found left many of its members “stunned,” says Berkeley's Adam Riess, lead author on the paper being prepared on the results. The 14 distant type Ia's in the study turned out to be, on average, 10% to 15% farther away than they would be even in a low-density universe, in which the expansion would have slowed very little. “Not only don't we see the universe slowing down; we see it speeding up,” says Riess. If the universe is indeed flat, then the results imply that it contains roughly twice as much energy in the cosmological constant as in matter.
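    In the bookkeeping of a flat universe, that two-to-one ratio pins down the density parameters (a back-of-the-envelope restatement, not figures quoted by the team):

    ```latex
    \Omega_m + \Omega_\Lambda = 1, \qquad
    \Omega_\Lambda \approx 2\,\Omega_m
    \quad\Longrightarrow\quad
    \Omega_m \approx \tfrac{1}{3}, \qquad \Omega_\Lambda \approx \tfrac{2}{3}
    ```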

    That conclusion survived detailed corrections for any dust that might be veiling the supernovae and making them look more remote than they are. It also survived another test, which Riess and Filippenko did in tandem with the other supernova team, led by Saul Perlmutter of Lawrence Berkeley National Laboratory in California. To see whether the distant supernovae behave the same way as closer ones, they compared how the spectral fingerprints of distant and nearby supernovae change during the explosion. No significant differences turned up, Riess says.

    Group members stress that their findings still need careful scrutiny by the astronomical community. “To be honest, I'm very excited about this result,” says group member Robert Kirshner of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts. But for all the group's vigilance, he says, it's still conceivable that some “sneaky little effect” is mimicking the acceleration. “It's a remarkable result,” says Marc Davis of Berkeley, who is not part of either group, but he agrees, “it's clearly going to take some time to digest this.”

    That certainly goes for theorists. Kolb, for one, says that the universe is starting to look like a cosmic version of the Marx brothers movie Monkey Business, in which more and more people show up in a ship's stateroom, leading to chaos. The universe already contains visible and dark matter as well as radiation; now there's a mysterious new guest. “It's crazy,” he says. “Who needs all this stuff in the universe?”

    Martin Rees of the University of Cambridge, Britain's Astronomer Royal, sees it differently. Like Kepler, who was troubled that planetary orbits were ellipses and not perfect circles, theorists who long for a simpler universe might just be missing “the really big picture,” he says. Newton's theory of gravity ultimately made sense of elliptical orbits, and some missing concept may ultimately make sense of a seemingly baroque universe. Cosmic simplicity's aesthetic lure, says Rees, “may seem, in retrospect, as shallowly motivated as was Kepler's infatuation with circles.”

    * Third International Symposium on Sources and Detection of Dark Matter in the Universe, 18–20 February, Marina del Rey, California.


    Controversial Trial Offers Hopeful Result

    1. Eliot Marshall

    A bitter, yearlong ethical dispute over the use of placebos in anti-HIV drug trials in poor countries moved into a new phase last week. A trio of U.S. and international health organizations announced that a U.S.-funded trial in Thailand has demonstrated that a brief, relatively inexpensive course of drugs given during the final weeks of pregnancy can lower the transmission of HIV from mothers to their newborn infants. Plans are now under way to make the cheap therapy available to thousands of HIV-infected women in the developing world, says Joseph Saba, spokesperson for the Joint United Nations Program on HIV/AIDS (UNAIDS).

    Saba calls the results a “statistically significant” victory for AIDS research that was hastened by the fact that the therapy was tested against a placebo. This increased the statistical power of the study and gave health authorities confidence to recommend that the therapy be widely used and that other placebo-controlled trials be modified, he says. But critics, who condemned this and similar trials last April (Science, 16 May 1997, p. 1022), continue to maintain that the use of placebos was unnecessary and unethical.

    The Thai study, whose main findings were released on 18 February, offered pregnant women the antiretroviral drug AZT orally for a brief period (4 weeks) before they went into labor to reduce the amount of virus they passed on to their children. The drug was already known to be effective in reducing HIV transmission when given in a more complex and expensive regimen involving intravenous injections and postnatal therapy for the child (Science, 4 August 1995, p. 624). The short regimen costs $80 or less—one-tenth the cost of standard treatment. To make sure the study yielded clear-cut results, researchers gave half the women in the trial AZT and the other half, a sugar pill.


    Last week, a special analytical panel took a look at the preliminary data from 392 patients and concluded that short-term AZT therapy was working spectacularly well. According to a statement issued by UNAIDS and other sponsors, HIV transmission declined from a background rate of 18.6% in the placebo group to 9.2% in the test group—a 51% reduction. (In contrast, the original European and U.S. tests of the longer-term, more complex therapy reduced transmission by 70%.) The “results show that a simplified AZT regimen can be well tolerated and is effective in significantly lowering perinatal transmission from HIV-infected women who are not breast-feeding,” the UNAIDS joint statement concluded. Helene Gayle, who heads the division of the U.S. Centers for Disease Control and Prevention (CDC) that sponsored this trial, says these results argue for “extending this therapy throughout the developing world.” Gayle and others caution, however, that it's not clear how well it will work for children who breast-feed for half a year or more—as in Africa—and may be exposed to HIV in milk.
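    The quoted 51% figure follows directly from the two transmission rates in the UNAIDS statement; a quick sketch of the arithmetic:

```python
# Relative reduction in perinatal HIV transmission, using the
# preliminary Thai trial figures quoted above.
placebo_rate = 0.186   # transmission rate in the placebo group
azt_rate = 0.092       # transmission rate in the short-course AZT group

relative_reduction = (placebo_rate - azt_rate) / placebo_rate
print(f"{relative_reduction:.0%}")  # -> 51%
```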

    Because short-term AZT therapy worked so well, the research sponsors moved rapidly last week to offer it to all patients in ongoing trials. AZT trials sponsored by CDC and the French government in Côte d'Ivoire, designed to enroll 1900 women, are now being revised to exclude the use of placebos. Another trial sponsored by the National Institutes of Health (NIH) in Ethiopia is being revised to omit placebos, as are trials sponsored by UNAIDS in South Africa, Tanzania, and Uganda. Researchers who collaborated on these trials were planning to meet at NIH this week to discuss how to restructure the protocols.

    Without doing a placebo study, Saba says, investigators could not have gotten these decisive results so quickly. Observed transmission rates “have a huge range,” he says, from 15% to 44%. Trial designers were concerned that the effect of short-term AZT therapy might be lost if no placebo were included.

    Critics are not convinced. Sidney Wolfe, medical affairs chief of the Ralph Nader group Public Citizen in Washington, D.C., says he has unearthed data on a subset of women in the original U.S.-European study known as the “076 trial” that show that short-term therapy was effective. Wolfe claims the data were available as early as February 1994. A “disgraceful loss of life would have been avoided,” Wolfe claims, if there had been no use of placebos.

    Saba insists, however, that last week's results are the “first reliable data” on the value of short-term AZT therapy. And Lynne Mofenson, an NIH official who helps coordinate the perinatal HIV studies, says researchers had always planned to examine early data from one trial and, if warranted, drop placebos.

    Saba and other organizers of the trials are now using the early data from Thailand to argue for increased funding of anti-HIV therapy. They aim to bring together government officials, health workers, and pharmaceutical executives for a meeting in Geneva in late March to consider how to make short-term AZT therapy available in developing countries.


    Bringing Order to Amorphous Silicon

    1. Robert F. Service

    San Jose, California–Amorphous silicon is the semiconductor of choice for large-area, low-cost electronics such as solar cells and the displays in laptop computers because it can easily be laid down from a vapor over large areas. But its disordered arrangement of atoms cuts into the performance of these devices. To do better, manufacturers would have to turn to single-crystal silicon (the material used for high-end computer chips), but it is prohibitively expensive to put down as a thin film over a large area. Now a team of researchers from Columbia University in New York City may have found the perfect compromise.

    At the Photonics West meeting here last month, materials scientists James Im, Robert Sposili, and Mark Crowder reported that they have come up with a laser technique that allows them to reliably create islands of crystalline silicon in a thin amorphous silicon film. Their colleagues, Paul Carey and Patrick Smith at Lawrence Livermore National Laboratory in California, then crafted transistors over these crystalline regions and found that they performed as well as devices made on conventional crystalline silicon wafers. The new technique “is definitely of interest,” says Jim Boyce, a silicon device expert at the Xerox Palo Alto Research Center in California, adding: “This may well be a scheme that provides the path to single-crystal silicon” for large-area electronics.

    Fine point.

    Melting amorphous silicon through a chevron-shaped mask is key to creating single crystals.


    Im and his colleagues are not the first to try crystallizing amorphous films with laser light, which can break bonds in an amorphous material, allowing the atoms to rearrange themselves into a crystalline form. Using powerful ultraviolet beams, other groups have managed to create crystalline silicon, but the results have been inconsistent and impossible to control. Instead of reconnecting into a continuous crystalline lattice, the atoms often formed an array of tiny crystallites or some other amorphous jumble.

    Two key innovations enabled Im and his colleagues to produce large islands of crystalline silicon reliably. The first was to start small, melting a region just a few micrometers across. The second was to shine their laser light in bursts through a chevron-shaped slit in a metal template, or “mask.” The bursts melt the amorphous silicon film just in this region, which cools and begins to solidify almost instantly. The chevron shape is important because as the silicon resolidifies, it preferentially forms elongated crystalline grains. The grains grow from the outer edges of the chevron into the melted region in the middle, orienting themselves perpendicularly to the edges. This produces an unusual result just below the peak of the chevron: Here the initial grain boundaries growing up from the bottom edge of the chevron's two arms diverge, leaving a diamond-shaped single-crystal region in the middle.

    To expand this region, the researchers simply move the mask up about a micrometer and hit the silicon film with another laser pulse over an area that overlaps the first chevron. The light melts the new amorphous silicon and part of the crystalline silicon produced by the previous pulse. As this new melted region cools and solidifies, its structure follows that of the crystalline area produced earlier. By simply repeating the process over and over, the researchers could grow crystalline regions 100 micrometers square, which they then patterned with working high-performance transistors. By shining laser pulses through masks bearing thousands of chevron-shaped slits at once, Im and his colleagues can grow thousands of crystallites in parallel, patterning a region the size of a display screen in less than 5 minutes, he says.

    “Melting and resolidifying materials is one of the oldest technologies around,” says Im. But in this case it may bring new order to future high-tech electronics.


    New NIH Grants for Clinical Research

    1. Eliot Marshall

    Research that involves human subjects makes heavy demands on hospital staffs, requires complex ethics reviews, and burns up lots of money. And insurers don't want to pay for it, says William Crowley Jr., director of clinical research at Massachusetts General Hospital in Boston. As a result, he says, young doctors “have been getting killed” by the adverse economics of this field. Last week, National Institutes of Health director Harold Varmus offered some relief: He unveiled three new NIH grant programs designed specifically for clinical researchers.

    Speaking at the Howard Hughes Medical Institute in Chevy Chase, Maryland, on 23 February, Varmus sketched out details of these new awards for the first time. Each is meant to provide assistance at a different level of the system:

    ▪ For young M.D.-investigators going into clinical research, NIH is creating a new category of “career development awards” that will provide up to $75,000 per year to cover 75% of the salary of M.D.s who have finished their specialty training and have committed to a research career. These grants, known as K-23s, will also provide $20,000 to $50,000 per year for research support. NIH hopes to make 80 of these awards in 1999, and the total would level off at about 400 after 5 years.

    ▪ NIH is also planning to provide new grants (K-24s) to “particularly talented early- and midcareer investigators doing clinical research” who want to train new investigators. The 5-year grants will provide $62,500 to support 50% of the investigator's salary and $25,000 annually for research assistance. They will be renewable once. Varmus said the goal is to make 50 to 80 awards in 1999, leveling off in 5 years at a total of 250 to 400 grants.

    ▪ To help institutions develop curricula for clinical researchers, Varmus said, NIH will establish a new category of 5-year renewable grants, each worth $200,000. He said NIH hopes to award 20 of these grants.

    Clinicians such as Crowley and neurologist Guy McKhann of Johns Hopkins University, who have been appealing for help from NIH for some time, said they welcomed the new initiatives. And William Kelley, dean of the University of Pennsylvania School of Medicine, expressed approval from the audience at the Hughes meeting. “Harold,” he said, “that's great news.”


    Unusual Cells May Help Treat Parkinson's Disease

    1. Marcia Barinaga

    On each side of your neck, nestled up against your carotid artery, is a helpful little organ called the carotid body. Its job is to sense how much oxygen is in your blood and signal your brain to step up your breathing if the level drops too low. New research suggests that the cells of the carotid body may one day prove useful in yet another way: as brain grafts for treating Parkinson's, the degenerative brain disease that strikes 1% to 2% of people over age 65.

    In the February issue of Neuron, a team headed by José López-Barneo of the University of Seville in Spain reports that in rats, carotid-body cells transplanted from the animals' necks into their brains reverse the symptoms of experimentally caused Parkinson's disease. Neuroscientist Arnon Rosenthal, who works on therapies for Parkinson's at Genentech Inc. in South San Francisco, describes the result as “quite intriguing, quite promising.” Carotid-body cells will have to pass many more tests before researchers can even consider trying them in patients. But Rosenthal says they may do a better job of correcting the defect of Parkinson's disease than do the fetal-brain cells sometimes used as a treatment—and they raise fewer ethical questions.

    The root cause of the disease is the death of neurons in a part of the midbrain called the substantia nigra. These neurons normally send connections to an area called the striatum, where they release the neurotransmitter dopamine. The loss of this dopamine causes the movement problems characteristic of the disease. The goal of transplants is to make up for the loss by putting dopamine-releasing cells directly into the striatum. And the cells from a patient's own carotid bodies “may produce … up to 45 times more dopamine” than fetal neurons do, says Rosenthal.

    What's more, he adds, the carotid-body cells “survive much better as a transplant. They even thrive in low oxygen, which is exactly what you want.” That's because brain tissue, especially when disturbed by surgery, can be quite oxygen-poor, which may explain the low survival rate of fetal and other types of grafts.

    López-Barneo had been interested mainly in how carotid-body cells sense oxygen, but he recalls that colleagues kept pointing out that these cells might make great candidates for grafting into the brains of Parkinson's patients. So he and his team set out to test the idea. They turned to a standard rat model used for screening potential Parkinson's therapies, in which researchers kill substantia nigra neurons on one side of the rats' brains. This causes several symptoms, including a movement imbalance that makes the rats turn in circles. Researchers can test a therapy by seeing whether it corrects those symptoms in the rats. And López-Barneo's team found that transplants of glomus cells—the dopamine-producing cells that make up 80% of the carotid body—appear to work.

    The researchers implanted chunks of carotid bodies containing about 800 glomus cells into the striatum on the damaged side of the rats' brains. The transplanted cells fared well. Three months after the surgery, 30% to 60% of the glomus cells had survived and were making dopamine, and new neural connections were visible in the striatum.

    The rats' symptoms also improved over the course of 3 months, although some abnormalities, such as the turning in circles, reversed faster than others did. López-Barneo suggests that for some improvements, it's enough for the transplanted cells to secrete dopamine into the striatum. Other forms of recovery may require the new neural connections, which had not formed yet in rats examined 1 month after the surgery.

    It is not yet clear, though, which cells are producing the new connections. “In several cases, you can see fibers coming out of the glomus cells,” López-Barneo says. But he thinks the glomus cells can't be the source of all the new fibers. Instead, the transplant may be producing some growth factor that encourages the remaining substantia nigra neurons to sprout new extensions. If so, says Rosenthal, “that would be a major bonus.” Indeed, he and others are trying to find ways to use growth factors to treat Parkinson's, in hopes that re-creating lost neural links will provide better symptom relief than grafts do. But, Rosenthal says, “a major problem with growth factor therapy is the delivery.”

    If the glomus cells make both dopamine and growth factors, Rosenthal adds, “you get two therapeutic approaches in one.” López-Barneo's group is investigating which cells are putting out the new extensions and checking to see if glomus cells really do make growth factors.

    Despite the encouraging early results, researchers caution that success in rats is only the first hurdle—and a relatively low one—for a potential Parkinson's treatment. Fetal-graft pioneer Anders Bjorklund of the University of Lund in Sweden notes that a few hundred fetal cells—about the same as the number of glomus cells used in these experiments—can also reverse Parkinsonian symptoms in rats, but it takes 200,000 to 300,000 surviving transplanted fetal neurons to treat human patients effectively. “The question,” he says, “is if the human carotid body can provide that many cells.”

    Together, they might. Each human carotid body contains roughly 100,000 glomus cells, says López-Barneo. People can live with both carotid bodies removed, as long as they don't exercise or go to high altitudes. But the cells' high dopamine output means that one carotid body—which can be spared without any ill effects—should do the job, he argues, provided that glomus cells from elderly Parkinson's patients are as dopamine-rich and resilient as the cells tested in the experiment. There is no reason to believe that won't be the case, he notes, but his lab is now repeating its experiment with cells from very old rats.
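    López-Barneo's back-of-the-envelope case can be made explicit. Using the figures in the article—roughly 100,000 glomus cells per carotid body, 200,000 to 300,000 surviving fetal neurons needed per patient, and Rosenthal's up-to-45-fold dopamine figure—and assuming (purely for illustration; the study makes no such claim) that grafts can be compared by total dopamine output:

```python
# Rough dopamine-equivalence sketch. The 45x ratio is Rosenthal's upper
# estimate, and equating grafts by total dopamine output is an
# illustrative assumption, not a result from the study.
glomus_per_carotid_body = 100_000
dopamine_ratio = 45              # glomus cell vs. fetal neuron, upper bound
fetal_neurons_needed = 300_000   # upper end of the human requirement

dopamine_equivalent = glomus_per_carotid_body * dopamine_ratio
print(dopamine_equivalent)                           # -> 4500000
print(dopamine_equivalent >= fetal_neurons_needed)   # -> True
```

    On this crude accounting, a single carotid body's output would exceed the fetal-graft benchmark many times over—which is the intuition behind López-Barneo's argument that one carotid body per patient could suffice.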

    Even if the initial promise of the carotid-body transplants isn't borne out, researchers may find other ways to exploit the ability of these unusual cells to thrive in the brain. Rosenthal suggests, for example, that they might be engineered to enhance their growth-factor output and then placed in the brain where the factors are needed. So take a moment to appreciate your carotid bodies: You never know how they might come in handy.


    Ocean Scientists Find Life, Warmth in the Seas

    1. Robert Irion
    1. Robert Irion is a writer in Santa Cruz, California.

    San Diego–About 2000 marine scientists gathered here at the 1998 Ocean Sciences Meeting from 9 to 13 February, during a relatively sunny respite from the El Niño-driven storms that have battered California. At the biennial gathering, hosted by the American Geophysical Union and the American Society of Limnology and Oceanography, El Niño for once took a back seat to other explorations of the state of the world's oceans and the life they contain.

    Life Among the Whale Bones

    Life on the floor of the deep sea is harsh. Sunlight is a distant memory, and there isn't much to eat—except for teeming bacteria at the oases where hot, sulfide-laden water spurts from the seabed, or cold oil and gas seep from sediments. But every so often, a huge source of food thumps to the ocean bottom: a whale carcass, blubbery manna from on high.

    At the meeting, oceanographer Craig Smith and his colleagues at the University of Hawaii, Honolulu, presented research showing that an unexpected variety of marine organisms crowd in for these feasts. The skeleton of a single whale can support more species than are found at the richest hot-vent field, they found. Some of the animals apparently have evolved to live specifically on whale skeletons. “It's wonderfully exciting to discover yet another major community living in the deep sea,” says marine biodiversity expert Robert Hessler of the Scripps Institution of Oceanography in La Jolla, California.

    Working in kilometer-deep waters southwest of Los Angeles, Smith's team members have intensively studied two of the dozen or so known whale skeletons on the sea floor. They also have deliberately sunk three whale carcasses to similar depths and repeatedly visited them via submersible. The team found that the first stage in creating a whale-skeleton habitat happens fast: Hagfish, crabs, and perhaps an occasional shark reduce each body to bones within about 4 months, rather than the years researchers had expected.

    Once the scavengers scurry away, bacteria take over. Whale bones are rich in oils, providing buoyancy while the whales live and a greasy bounty when they die. Slice open a bone, says Smith's graduate student, Amy Baco, and you'll see a substance “like a thick, white fat.” On the sea floor, anaerobic bacteria decompose this material and emit hydrogen sulfide and other compounds, which diffuse outward through the bone. Another set of bacteria live off the sulfides, coating the bones in thick mats. These chemosynthetic bacteria in turn support a host of worms, mollusks, crustaceans, and other animals. Such communities can thrive for years—the first one, discovered in 1987, was still going strong in 1995, Smith found.

    Smith and Baco were most startled by how many species can swarm a single skeleton. When they hauled up five vertebral bones from one whale off southern California, Baco counted 5098 animals from 178 species, even though the bone surface area totaled just 0.83 square meter. Among them were 10 species—limpets, worms, and other critters—that seem to live only on whale skeletons and apparently evolved in this habitat after large whales first appeared 40 million years ago, says Smith: “This is, by far, the most diverse deep-sea habitat on a hard surface yet discovered.” In contrast, the most fertile known hydrothermal-vent field supports 121 species, and a single hydrocarbon seep might sport 36 species at most.

    All these communities may be connected. At least 15 of the whale-bone species are also native to the other sulfide-rich habitats, says Smith. He feels that this bolsters his contention, first offered 8 years ago in Nature, that whale skeletons may serve as “steppingstones” for the dispersal of marine animals that depend on chemosynthesis. Otherwise, drifting larvae might not find these scattered, ephemeral habitats—especially some of the vent systems, which flicker on and off in decades or less.

    Vent specialists applaud the finds but remain cool to the steppingstone scenario. Taxonomic analysis to date shows whale bones and hot vents share just eight species, perhaps too few to support that hypothesis, says biological oceanographer Lauren Mullineaux of the Woods Hole Oceanographic Institution in Massachusetts. And vents typically lie at depths greater than 1500 meters, Mullineaux says, whereas many whales—with the notable exception of sperm whales—live and die in the shallower, biologically richer waters along the edges of continental shelves. “We're also finding more and more vents every time we turn around,” Mullineaux adds, so larvae may be able to hop among vents without steppingstones. Whether the whale skeletons are way stations or worlds of their own, she says, they show that dark, deep islands of life may be more common than anyone has imagined.

    Sounding Out Pacific Warming

    Sometimes the most clever theoretical ideas are the toughest to put into practice. By simply measuring the travel time of pulses of underwater sound, oceanographers have proposed, one could take the temperature of an entire ocean to watch how it responds to global warming. But for years this elegant scheme—now called the Acoustic Thermometry of Ocean Climate (ATOC) project—has endured delay because of concerns about whether the sounds would harm marine life, as well as doubts that it would work in reality. Now data presented at the meeting, from an extended trial run of the experiment, help clear away these scientific questions.

    The new results show that temperature readings of the Pacific Ocean are even more precise than were projected, and marine mammals apparently aren't bothered by the sounds. But just as the project seems poised to fulfill its promise, another obstacle looms. Thanks to the rocky politics of funding long-term projects and a post-Cold War decline in ocean-acoustics studies, organizers aren't sure that ATOC will win the funding it needs.

    Conceived 20 years ago by oceanographers Carl Wunsch of the Massachusetts Institute of Technology and Walter Munk of the Scripps Institution of Oceanography in La Jolla, California, acoustic thermometry exploits some basic physics of the oceans. Sound travels slightly faster in warmer water. And when a sound pulse travels through the deep ocean, distinct layers of salinity, temperature, and other physical factors combine to trap most of the energy within a specific channel of water, where it persists for thousands of kilometers. Sound therefore looks ideal for spotting long-term temperature trends in entire oceans. However, concerns about sound-transmitting properties at depth led some to question whether the idea would work in practice.

    The answer, according to a report at the meeting by project director Peter Worcester of the Scripps Institution of Oceanography, is a resounding “yes.” Data from the first 15 months, just submitted for publication, show that 195-decibel broadcasts from a source off the central California coast are picked up surprisingly well by arrays of sensitive hydrophones as remote as Christmas Island, 5000 kilometers away. ATOC scientists can pinpoint variations as small as 20 milliseconds in the hourlong travel time of the pulses. That's accurate enough to deduce the average ocean temperature along the sound wave's path to within 0.006 degree Celsius. “That's far better than we expected,” Worcester says.
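    The quoted precision can be sanity-checked with textbook numbers. Assuming a nominal sound speed of about 1500 m/s and a sensitivity of roughly 4–5 m/s per degree Celsius (standard oceanographic values, not figures from the article), a 20-millisecond shift in an hourlong travel time corresponds to a path-averaged temperature change of a few thousandths of a degree—the same order as the 0.006 degree quoted:

```python
# Order-of-magnitude check on ATOC's temperature sensitivity.
# The sound-speed constants below are textbook assumptions, not article values.
c = 1500.0           # nominal sound speed in seawater, m/s
dc_dT = 4.6          # approx. change in sound speed per degree C, m/s
travel_time = 3600.0 # hourlong path, s
dt = 0.020           # smallest resolvable travel-time shift, s

# A fractional change in travel time maps to an equal and opposite
# fractional change in average sound speed: dt/t ~ -dc/c
dc = c * dt / travel_time
dT = dc / dc_dT
print(f"{dT * 1000:.1f} millidegrees C")  # a few millidegrees, consistent with 0.006 C
```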

    The acoustic patterns already reveal expected seasonal temperature cycles of about 2 degrees Celsius in the upper ocean. The new data “show that we could expect to see subtle, long-term climate changes in the ocean [with ATOC] long before any other means,” adds Wunsch. But it's too soon to discern the mark of human-induced warming, which modelers project will amount to only a few thousandths of a degree per year in the deep Pacific. Teasing that out of background noise will require at least a decade of observations.

    Meanwhile, ATOC's marine mammalogists have their own good news. Environmental groups delayed the project's launch for most of 1994 and 1995 by protesting that its long bass rumbles—mistakenly described in some media accounts as “blasts”—would deafen whales and seals. So sharp-eyed biologists have been monitoring whale and elephant seal behavior near the California sound source, 900 meters underwater and about 100 kilometers southwest of San Francisco. They've flown over it some 35 times during both “on” and “off” periods and haven't seen marked changes in the animals' swimming distributions, says bioacoustician Christopher Clark of Cornell University.

    Researchers also released deep-diving elephant seals beyond the transmitter and used satellite tags to track their paths back to shore; the animals didn't swerve to avoid the sounds. “Biologically, [the source] is totally meaningless,” says Clark. However, the verdict isn't in as to whether vocal whales will temporarily fall silent when exposed to the noise, as happens with explosions used for seismic studies. Clark's team is now recording vocalizations of several kinds of whales to see whether ATOC silences their songs.

    Although the scientific doubts are largely cleared up, the project's original $40 million—a swords-to-plowshares grant from an environmental program at the Department of Defense—will dry up at the end of 1998. And prospects for a large-scale project, which Worcester says would require $5 million to $10 million a year, seem grim. “Everyone insists this is important research, but no one government agency is willing to stick with monitoring the oceans for 10 years,” says Wunsch.

    But at least one federal source proffers some hope. “I've seen some of their data, and it's really nice,” says Jeff Simmen, program manager for ocean acoustics at the Office of Naval Research in Arlington, Virginia. Despite the long-term investment needed, he says the project “may get sufficient funding to continue,” perhaps from a consortium of agencies, if scientists can clearly demonstrate ATOC's value.

    RNA Can't Take the Heat

    Many biochemists believe that the molecule most likely to have powered Earth's earliest life-forms was RNA. Only RNA seems capable of carrying out both of life's crucial functions: storing genetic information and catalyzing its own replication. But the molecule is notoriously fragile in warm conditions, and most scientists picture the early Earth as a steamy place. Research presented in an origin-of-life session at the meeting fuels those doubts by showing that RNA's chemical building blocks simply fall apart within days to years at temperatures near boiling—and may not thrive even in cooler environs.

    Researchers already knew that RNA's chemical backbone is fragile, even at room temperature, but they speculated that other, alternative backbones might have held the molecule together in the early world. Now graduate student Matthew Levy, who works with origin-of-life specialist Stanley Miller at the University of California, San Diego, has shown that the information-coding units of RNA, its four nucleobases—adenine (A), cytosine (C), guanine (G), and uracil (U)—fare almost as badly. He found that at 100 degrees Celsius, half of each batch of nucleobases he tested would degrade in periods that ranged from 19 days (for C) to 12 years (for U). “We suggest that this makes an RNA-driven origin of life at 100 degrees very unlikely,” Levy says.

    The situation improves at the freezing point of water, where the half-lives for A, G, and U all exceed 600,000 years. But cytosine remains a weak link: It decomposes within 17,000 years. That may be too short for C to play a role in primeval life-sustaining reactions, as most biochemists believe that the first biological processes required high concentrations of the proper ingredients for a million years or so. (RNA and other frail molecules become far more stable once they reach the chemically sheltered womb of a cell.)
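    The implications of those half-lives can be illustrated with ordinary exponential decay, using the figures above (cytosine: 19 days at 100 degrees Celsius, 17,000 years at 0 degrees):

```python
# Fraction of a nucleobase pool surviving after time t,
# given its measured half-life (simple exponential decay).
def fraction_remaining(t, half_life):
    return 0.5 ** (t / half_life)

# Cytosine at 100 C (19-day half-life): essentially gone within a year.
print(fraction_remaining(365, 19))          # ~1.6e-6

# Cytosine at 0 C (17,000-year half-life): still vanishes long before
# the ~1 million years most biochemists think prebiotic chemistry needed.
print(fraction_remaining(1_000_000, 17_000))
```

    In both cases the surviving fraction is negligible, which is why Levy and Miller single out cytosine as the weak link.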

    Levy and Miller argue that RNA's high-temperature infirmity implies either that it wasn't the first genetic material—and no one has yet proposed a viable alternative—or that the young Earth was a chilly place, perhaps layered by ice. But as noted by atmospheric chemist James Kasting of Pennsylvania State University in University Park in another talk at the meeting, most geochemical models still point to an ovenlike early Earth, blanketed in greenhouse gases—with “oodles of carbon dioxide”—for several hundred million years. “Then the whole planet is close to 80 or 90 degrees [Celsius],” he says—and there would be no “cold refugia” to harbor the sorts of frigid reactions that Miller envisions.

    Worse, the early Earth weathered frequent comet and meteorite impacts, some of which could have heated the oceans to full boil and wiped out all nucleobases. Still, Kasting acknowledges, “no one has ruled out a cold Earth, because there are no data.”

    Biochemist Gerald Joyce of The Scripps Research Institute in La Jolla says it's more likely that some molecule other than RNA was the first carrier of genetic information. Joyce still thinks an “RNA World” preceded the DNA-protein paradigm of modern organisms—but that world probably evolved well after life's very beginnings. “You have to build straw man upon straw man to get to the point where RNA is a viable first biomolecule,” he says. But if another self-replicating compound set the stage for RNA, it thus far has eluded the best efforts of researchers to find it.


    Sea Floor Records Reveal Interglacial Climate Cycles

    1. Richard A. Kerr

    Climate scientists are avid historians, looking to the past for the key to understanding the future. But researchers trying to mine the past for clues to how our present climate is likely to change as greenhouse gases warm the world have faced a dearth of material. Some of the most detailed climate records available up to now, preserved in the annual layers of the Greenland Ice Sheet, chronicle an alien climate—that of the last ice age, extending back 100,000 years. Detailed records from even earlier times, when Earth experienced warm interglacial periods like our own, have been scarce.

    Now, however, researchers studying deep deposits of muck on the sea floor have read a detailed history of climate that extends nearly 2 million years into the past and covers multiple interglacial periods. As one group reports in this issue of Science, the record shows that climate varies on regular cycles lasting from 1200 to 6000 years, in glacial and interglacial periods alike. It's a finding that offers a mixed message of reassurance and warning about the future of our own climate.

    Researchers had already caught a glimpse of these cycles in the ice-core records, which showed that glacial climate was subject to violent temperature swings of up to 10 degrees Celsius, from extreme cold to relative warmth and back again, in a few thousand years. They had also seen signs of subdued climate oscillations during the past 10,000 years, but that was too short a time to know whether the violent swings of the ice age are the climatic norm. As reported on page 1335 by paleoceanographers Delia Oppo and Jerry F. McManus of the Woods Hole Oceanographic Institution and James Cullen of Salem State College, all in Massachusetts, the longer record now shows that during warm interglacials, the swings were much more subdued, suggesting that the climate roller coaster of glacial times won't appear anytime soon.

    “We're showing that there's variability marching through the record,” says McManus, “but it's amplified in glacial times.” On the other hand, the cause of these millennial-scale cycles remains unknown, so no one can say how they will react as the strengthening greenhouse pushes the climate system toward ever greater warmth.

    McManus and his colleagues found their detailed climate records in piles of sediment called drifts, rapidly accumulating deposits formed where bottom currents drop their sediment loads. Marine sediments have long been a source of climate data, but most of the deposits studied in the past were laid down at a rate of 1 to 3 centimeters per thousand years, giving a time resolution no better than a few thousand years. In drifts, however, sediment accumulates at 10 or more centimeters per thousand years.

    For example, in the summer of 1995, the international Ocean Drilling Program's drill ship JOIDES Resolution went to Feni Drift in the North Atlantic off Ireland, where bottom currents from the north drop fine-grained sediment in the lee of Rockall Plateau. This sediment dilutes the coarser material falling from the sea surface, including the skeletons of small organisms called forams. These preserve a record of surface temperature in the ratio of oxygen isotopes they contain. The extra sediment stretches out the foram record so that it can be read in finer detail. Although cores from most sites yield a data point every 1000 years, those from Feni Drift give a climate reading every 300 years. “The drift sites allow you to resolve time so beautifully that you can really capture tremendous detail,” says McManus.
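
    The resolution figures above follow from simple division: the time spanned by each sample is the sampling interval divided by the accumulation rate. A minimal sketch of that arithmetic (the 3-centimeter sampling interval is a hypothetical choice, not a figure from the studies):

```python
# Temporal resolution of a sediment core: years of history spanned by one
# sample, given the spacing between samples and the accumulation rate.
# The 3 cm sampling interval below is an assumed, illustrative value.

def resolution_years(sample_spacing_cm, accumulation_cm_per_kyr):
    """Years of climate history spanned by one sample."""
    return sample_spacing_cm / accumulation_cm_per_kyr * 1000

# Typical open-ocean deposit: ~1 cm per thousand years.
print(resolution_years(3, 1))   # 3000.0 years per sample
# Drift deposit such as Feni Drift: ~10 cm per thousand years.
print(resolution_years(3, 10))  # 300.0 years per sample
```

    The tenfold faster accumulation in drifts is what turns a millennial-scale record into the 300-year readings the article describes.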

    That detail shows that climate has been oscillating on the same schedule as far back as researchers have analyzed drift sediments. In the Feni Drift core, Oppo and her colleagues found the temperature swings previously seen during the past glacial age occurring between 340,000 and 500,000 years ago. The ancient cycles tended to repeat roughly every 6000, 2600, 1800, and 1400 years, just as they did during the past ice age and, on a more moderate scale, during the past 10,000 years of interglacial time.

    The sea floor record isn't detailed enough to show how abrupt the changes were, but it does reveal that the relative calm of recent millennia is apparently typical of interglacial climate. According to Oppo and her colleagues, during glacial periods 450,000 and 350,000 years ago, North Atlantic sea-surface temperature varied as much as 3°C to 4.5°C, while during the warm interglacial between them it varied by only about 0.5°C to 1°C. That's about the amount of cooling during the Little Ice Age of 300 years ago.

    Ongoing studies of other cores from North Atlantic drifts show the same pattern of damped oscillations during interglacials throughout the past 2 million years. Smaller interglacial cycles show up in drift sediments dating to about 650,000 years ago, according to ongoing work by Benjamin Flowers of the University of South Florida in Tampa, and also about 1.8 million years ago, according to a study led by Katherine McIntyre at the University of California, Santa Cruz.

    All these studies suggest that the same forces have been driving climate cycles for almost 2 million years, says oceanographer Scott Lehman of the University of Colorado, Boulder. But what those forces are, “we really don't know,” says Oppo. Candidates include cyclical variations in the sun's brightness and Earth's regular wobbles on its axis, which could shift climate by changing the distribution of sunlight across the planet. Whatever the ultimate drivers might be, the work of Oppo and others shows that changes in ocean circulation must play some role (Science, 14 November 1997, p. 1244). By studying the carbon-isotope composition of bottom-dwelling forams—a signature they borrow from whichever deep current bathed them while alive—she and colleagues show that ocean circulation shifted in step with sea-surface temperature. Because the deep waters at Feni Drift are part of a global loop of heat-carrying currents—the global “conveyor belt”—those shifts could have altered global climate patterns.

    Whatever the cause of the climatic gyrations, the records suggest that the worst climate swing likely in the present interglacial is another Little Ice Age, in a millennium or so. But human-induced greenhouse warming might intervene and amplify the cycles.

    Oppo and colleagues found that climate oscillations were largest when ice sheets were growing and when they were disintegrating. Variations were subdued in the depths of an ice age, although not as much as during an interglacial. That pattern, also seen in earlier work, suggests to Richard Alley of Pennsylvania State University in University Park that climate shifts might be strongest not just when it's cold but when the climate system is being pushed from one state to another. If so, a push toward warmth during an already-warm interglacial might boost climate shifts to devastating proportions. Then again, because past climate swings have been smaller in warm periods, continued global warming might dampen them even further.

    How to choose between the two possibilities? For better or worse, the answer will come as human beings continue to pour greenhouse gases into the atmosphere, says Alley: “The experiment to answer that question is the one we're doing now.”


    Viral Saboteurs Caught in the Act

    1. Steven Dickman
    1. Steven Dickman is a freelance writer in Cambridge, Massachusetts.

    Disguising yourself as your enemy is an age-old ruse of human saboteurs. Viruses, those saboteurs of the cell, have adopted it as well, fashioning components that are the spitting image of normal host proteins. This “molecular mimicry” can help a virus evade detection by the host immune system long enough to create an infection. Occasionally, though, the immune system catches on, and immunologists think that the resulting immune attack may damage host cells as well as the virus. Although this is an attractive explanation for such devastating autoimmune diseases as insulin-dependent diabetes and multiple sclerosis, until now, no one has been able to show conclusively that this type of molecular mimicry really can cause disease.

    On page 1344, however, immunologist Harvey Cantor and his colleagues at Harvard Medical School in Boston show that molecular mimicry is at work in herpes stromal keratitis (HSK), a common autoimmune disease of the eye triggered by herpes simplex virus 1 (HSV-1). The group found that HSK, which can cause blindness by clouding the cornea, is much more likely to develop in mice if the infecting virus carries a particular protein segment that closely resembles part of a protein found on the animals' corneal cells than if that viral segment is removed. The result is the “final piece of evidence that during an infection, a virus can bring about autoimmune disease [by molecular mimicry],” says viral immunologist Michael Oldstone of The Scripps Research Institute in La Jolla, California, who first proposed the hypothesis in 1982.

    Discovery of the target of the immune attack also has clinical implications for people with ocular herpes, which can lead to HSK and is the principal infectious cause of blindness in developed countries, affecting an estimated 400,000 people in the United States alone. M. Reza Dana, an ophthalmologist and ocular immunologist at Harvard Medical School, notes that if the immune system is indeed attacking the corneal protein identified by the Cantor group, then the discovery could “in principle allow us to disrupt or arrest this component” of the attack, perhaps by inactivating the specific set of immune cells responsible for it.

    Cantor and his team got their first clue to the importance of molecular mimicry in HSK about 3 years ago, while trying to determine why some mice infected with HSV-1 don't develop the disease. Previous genetic studies had suggested that mice are protected if they have a particular variant of a gene coding for antibodies of the immunoglobulin G2a (IgG2a) class. At the time, no one knew exactly how the IgG2a variant might offer protection. What Cantor's team found is that the variant contains a short protein sequence that disarms the T cells which would otherwise mount an immune attack on corneal tissue.

    That finding suggested that the same short protein sequence is also present on cornea cells, where it might be the target of the autoimmune attack in HSK. The Cantor group confirmed the idea when they found the sequence on the cornea cells of resistant animals. Apparently, its fortuitous presence on the IgG2a variant trained those animals' T cells, which can encounter antibodies in the blood, to recognize the sequence as “self.” As a result, they respond neither to it nor to the corneal protein that resembles it. Animals lacking the IgG2a variant are susceptible, because T cells do not ordinarily contact corneal cells and so never develop such tolerance.

    The link to viral mimicry came when Cantor's group found the same sequence in a herpesvirus protein called UL6. That suggested a mechanism by which the virus might cause HSK: by triggering immune cells that recognize both UL6 and the same protein sequence on corneal cells. Still, Cantor says, “we had to show how viral infection could use this mechanism to actually induce disease.”

    So, in their current work, Cantor and his team members infected mice with either normal HSV-1 or a virus that they had genetically altered so that it lacked the UL6 protein. The result was striking: T cells from mice given virus containing native UL6 protein caused disease, while T cells of animals given the altered virus did not. Furthermore, more than 75% of mice infected with virus bearing the normal protein developed severe corneal autoimmune disease, whereas fewer than 20% of those infected with mutant virus did, and their symptoms were barely detectable.

    Although researchers say the work demonstrates that molecular mimicry can play a role in triggering autoimmune disease, it is unlikely to be the whole story. “These results are clear, as far as they go,” says Abner Notkins, a viral immunologist at the National Institute of Dental Research in Bethesda, Maryland. “But in many autoimmune diseases, T lymphocytes and antibodies target many proteins, not just the initial one mimicked by a virus. The model does not account for these other targets.”

    Cantor agrees, but says molecular mimicry must be at least one piece of the puzzle. “Now,” he says, “the task is to find out just how frequently this mechanism accounts for the link between infection and autoimmunity.” If it is common and the viral triggers can be identified, he adds, the work might aid efforts to develop therapies aimed at preventing autoimmune damage. For instance, if researchers can determine which sequences trigger human T cells, it may be possible to induce the patient to develop tolerance to the viral protein before it sends the immune system down the road to self-destruction.


    Mother Tongues Trace Steps of Earliest Americans

    1. Ann Gibbons

    From 12 to 17 February, some 5400 people descended on Philadelphia for the annual meeting of the American Association for the Advancement of Science (AAAS, which publishes Science), celebrating its 150th anniversary this year. President Bill Clinton addressed a packed hall, unveiling Neil Lane as his next science adviser and Rita Colwell as the next NSF director (Science, 20 February, p. 1122). But there were more reasons to celebrate: symposia on everything from the earliest Americans to martian life-forms, two of the topics featured in this special news section.

    When several prominent archaeologists reached a consensus last year that humans lived in South America at least 12,500 years ago, their announcement struck a lethal blow to what had been a neat picture of the peopling of the Americas—that the first settlers were big-game hunters who had swept over the Bering land bridge connecting Asia and North America about 11,000 years ago. But this revised view of prehistory, based on 2 decades of study of the South American site called Monte Verde in Chile, has spawned a new mystery: When did the ancestors of Monte Verde's inhabitants first set foot in North America? Archaeologists trying to address that question have come up empty-handed, as there are few reliably dated digs in the Americas older than the Chilean site.

    At the AAAS meeting, however, a possible answer emerged from another field—linguistics. Using known rates of the spread of languages and people, Johanna Nichols, a linguist at the University of California, Berkeley, estimates that it would have taken about 7000 years for a population to travel from Alaska to Chile. Because that would put the first Americans' arrival squarely in the middle of the last major glacial advance, Nichols proposes that “the first settlers began to enter the New World well before the height of glaciation”—earlier than 22,000 years ago.

    That date is early but is in accord with recent genetic studies suggesting that the diversity of DNA across American Indian populations must have taken at least 30,000 years to develop (Science, 4 October 1996, p. 31). In addition, Nichols's extensive analysis of Northern Hemisphere languages also suggests that several groups of Asians entered the New World, where they adapted rapidly to a range of habitats and adopted diverse ways of hunting and gathering.

    This picture is winning favor with linguists. “I believe that her general analysis of the linguistic situation in the Americas is essentially right,” says linguist Victor Golla of Humboldt State University in Arcata, California. “We need a much longer period of diversification among American linguistic stocks than the 11,500 years” allowed by the old view, he says. And although not totally embracing the linguistic findings, archaeologists acknowledge that, combined with other recent findings, Nichols's results indicate that the old, simple view of the peopling of the Americas is dead. “The bottom line,” says University of Kentucky, Lexington, archaeologist Tom Dillehay, who excavated Monte Verde, is that “the picture is a lot more complex than it was.”

    To try to get a better fix on how long it would have taken people entering the New World to get to Monte Verde, Nichols surveyed 24 language families that had spread over vast distances, such as Eskimoan languages that traveled from Alaska to Greenland and Turkic tongues that migrated from Siberia to central Europe. She found that the fast-moving languages that spread on foot—the only way the first American settlers could travel—moved 200 kilometers per century on average.

    With this yardstick, Nichols calculated that even if early Americans made a beeline, taking the shortest routes over the 16,000 kilometers of varied terrain from Alaska to southern Chile, the trek would have taken at least 7000 years. This would have put the Monte Verdeans' ancestors in Alaska when glaciers made it “probably impossible” to enter the continent, she says. Instead, Nichols argues, the evidence “strongly suggests” a migration before a major glacial advance began 22,000 years ago.
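
    Nichols's yardstick amounts to a single division; a sketch of the arithmetic (the function name and the 200 kilometer-per-century default are simply restatements of the figures above, not her actual model):

```python
# Back-of-envelope version of Nichols's estimate: distance from Alaska to
# southern Chile divided by the average rate at which language families
# spread on foot.

def migration_centuries(distance_km, rate_km_per_century=200):
    """Minimum number of centuries to cover the distance at the given rate."""
    return distance_km / rate_km_per_century

years = migration_centuries(16_000) * 100
print(years)  # 8000.0 -- consistent with "at least 7000 years"
```

    Even this shortest-route figure lands the first arrivals in Alaska during the last major glacial advance, which is the heart of her argument for an earlier entry.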

    Nichols checked her result against those obtained by other methods. For example, the New World has 140 language families—almost half of the world's total—and she estimated how long it would have taken this rich diversity of tongues to develop. Nichols began by surveying nearly all the language families of the Northern Hemisphere, from Basque to Indo-European, to see how often new language families have split off from an ancestral stock. She found that, on average, each ancestral stock gave rise to 1.5 new language families every 6000 years. Plugging that rate into computer models—which included an allowance for new migrations that carried in new languages after the glaciers retreated—yielded 40,000 years as the minimum time required to produce so many language families.

    Nichols also found that languages along the coasts of the Pacific Rim, from Papua New Guinea north to Alaska and then down the west coast of the Americas, share a remarkable set of grammatical and phonological features, such as the sound “m” in the second-person pronoun (the singular “you” in English), verb order, and numerical classifiers—words used in some languages when a number modifies a noun. These features set apart the coastal language families from those farther inland, indicating that coastal tongues were probably imported by later settlers.

    These kinds of features prompted Nichols to propose the following scenario: The first immigrants from Asia crossed the Bering land bridge “well before” 22,000 years ago and made it to South America. After the glaciers retreated, some people spread north, where they gave rise to the Southwest's Clovis culture, perhaps, and to other peoples. Meanwhile, human beings were again on the move along the Pacific Coast in Asia, with some language families heading south to Papua New Guinea and others north over the land bridge into Alaska—where they could have crossed once the ice sheets melted 12,000 years ago. Yet another group arrived at least 5000 years ago, she argues, giving rise to the Eskimo-Aleut family of languages.

    These early dates from linguistics and genetics are prompting archaeologists to reexamine and take more seriously their earliest sites of human occupation, including possible signs of a human presence at Monte Verde as early as 33,000 years ago, says Dillehay. “These findings of great antiquity from linguistics and genetics help us out, but in the end, we have to get the actual time dimension from the archaeological record.” To linguists, however, a thousand words are worth a fossil.


    Gene Diversity Muddles Heart Disease Story

    1. Robert F. Service

    The genetics of heart disease is about as simple as a bureaucrat's flow chart: At least 50 genes are thought to play some role in the disease, and the significance of known mutations appears to depend in part on an individual's ethnic background. But now the picture is more muddled—and for clinicians, perhaps more depressing—than ever. Findings reported at the meeting suggest that healthy people have such widespread variation in one candidate heart disease gene that it is virtually impossible, for now, to tease out mutations that increase risk of the disease.

    Pennsylvania State University population geneticists Kenneth Weiss and Andrew Clark and their colleagues examined one region of a gene coding for a protein called lipoprotein lipase (LPL), an enzyme that can reduce blood levels of certain fats linked to heart disease; researchers have therefore speculated that some variants of LPL could raise the risk of the disease. But the Penn State team also found a high degree of variation in the gene among healthy people from three different ethnic groups. Such variability “makes it more difficult to figure out which [mutations] are causal and which are just evolutionary noise,” says Trudy Mackay, a population geneticist at North Carolina State University in Raleigh. If similar variability turns up in other candidate genes now being studied, Weiss adds, geneticists may have a hard time designing simple genetic screens for the disease.

    The finding emerged from a project to sequence 15 candidate heart disease genes, among them LPL, from thousands of healthy people. The project leaders—who also include Charles Sing at the University of Michigan, Ann Arbor, and Debbie Nickerson at the University of Washington (UW), Seattle—hope to decipher the widespread public health impact of the presence of different variations in these genes in the general population. The DNA came from healthy individuals of three disparate backgrounds: Finns from North Karelia, Finland, mixed Europeans from Rochester, Minnesota, and African Americans from Jackson, Mississippi.

    In the first stage of the project, Nickerson and her UW colleagues sequenced a 10,000-base-pair region of LPL from 72 people, 24 members from each group. The researchers found variation at 88 sites along the sequence. Compared to a reference gene sequenced earlier, each individual varied at an average of 17 sites. Each group had some unique variable sites.
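
    The kind of tally behind those numbers can be sketched as follows — a toy illustration only, with invented sequences far shorter than the real 10,000-base-pair region:

```python
# Toy sketch of finding variable sites in aligned DNA sequences: a site is
# variable if the sequences do not all carry the same base there. The
# three sequences below are invented for illustration.

def variable_sites(sequences):
    """Return positions where the aligned sequences do not all agree."""
    return [i for i in range(len(sequences[0]))
            if len({seq[i] for seq in sequences}) > 1]

sample = [
    "ACGTACGT",
    "ACGAACGT",
    "ACGTACCT",
]
print(variable_sites(sample))  # [3, 6]
```

    Applied to the real data, a tally like this across 72 sequences of the 10,000-base-pair region is what yields the 88 variable sites reported by the team.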

    But in spite of the individual and ethnic variations, each sequence seemed to fall into one of two families of related variants, present in a 3-to-2 ratio in all three groups. The families, said Weiss, must have arisen “deep in human history and have been maintained in a diversity of human populations.” Multiple forms of a gene usually persist because individuals who inherit two variants gain some type of adaptive advantage. That's the case, for example, with the sickle cell anemia gene, which confers protection against malaria. But “we have no idea” what the corresponding advantage of LPL variability might be, Weiss says. Then again, he points out, random mutations might have created only the appearance of two distinct gene families. Hoping to settle the question, his team is now sequencing LPL genes from another ethnic group, Native Americans, to see if the two gene families show up in this population as well.

    Whatever the cause of this variation, says Weiss, “the bottom line is that there is a lot of variation in human genes.” This, he says, will undoubtedly slow efforts to sort out the key genetic mutations that cause heart disease in individuals or specific ethnic groups.


    10-Gallon Molecule Stomps Tumors

    1. Amy Adams
    1. Amy Adams is a science writer in Santa Cruz, California.

    Fourteen years ago, Jonathan Sessler set out to make a bigger version of porphyrin, a four-leaf clover-shaped pigment that ferries iron and other metals in the blood. Tinkering with the molecule could have a big payoff, because porphyrin mysteriously tends to pack inside tumor cells. Sessler thought that supersizing porphyrin might enable the molecule to lug larger cargo, such as a cancer drug, into a tumor cell. And there was another, lighter reason for pumping up porphyrin: “Everything is bigger in Texas,” says Sessler, a chemist at the University of Texas, Austin. State pride was at stake.

    At the AAAS meeting, Sessler unveiled the fruits of his effort: a porky porphyrin, suitably named texaphyrin, able to deliver heavy metals to tumors. In pilot trials, the new molecule has shown promise in enhancing the effects of radiation on inoperable brain tumors, and it is winning raves from cancer experts. “They're fascinating compounds,” says Percy Ivy, a pediatrician at the National Cancer Institute (NCI), and their “unique properties are intriguing to us.” NCI has chosen texaphyrins as part of its Decision Network—a group of what it considers to be the most promising therapies.

    The road to texaphyrin wasn't easy. After months of fruitlessly trying to make the molecule, Sessler says, “everybody urged us to give it up.” Then one day a postdoc picked up a forgotten test tube in which the element cadmium had been used to try to stabilize the molecule's lobes. “In retrospect, we should have known that cadmium would work,” Sessler says. Analyzing the residue, he found that he and his co-workers had in fact synthesized a new five-lobed porphyrin shaped like the star on the Texas flag. Celebrating that evening, Sessler penned a whimsical note announcing the discovery of the molecule to the state legislature. Last year, he says, he received a Texas-sized thanks: “They sent me a $150,000 check to continue research.”

    This new, larger molecule had an increased appetite for electrons, which Sessler hoped would make it useful in radiation therapy. Radiation breaks down water inside cells into electrons, protons, and uncharged hydroxyl radicals—electron-hungry fragments that kill cells by stripping electrons from DNA. If texaphyrins could sop up excess electrons, Sessler hoped, the hydroxyls would have a free hand at killing tumor cells. Because texaphyrin has a 20% larger carrying capacity than porphyrin does, Sessler was able to attach to it an atom of the heavy metal gadolinium, which is visible by magnetic resonance imaging (MRI).

    To test whether the gadolinium-laden texaphyrins find their way into tumor cells as porphyrins do, Sessler and colleagues at Pharmacyclics in Sunnyvale, California, used MRI to scan tumor-bearing mice before and after a texaphyrin injection. The scans indicated that texaphyrin found its target: The mouse tumors glowed bright white for several days after an injection. After irradiation, the tumors decreased or disappeared altogether. About half the mice survived the 140-day study, Sessler says, suggesting that texaphyrin was indeed sopping up excess electrons. Only 10% of mice irradiated without texaphyrin survived. “We should get more killing for lower doses,” says Richard Miller, president and CEO of Pharmacyclics.

    Because, as Sessler points out, “mice are not men,” the team moved on to people. In an early clinical trial, Sessler's group injected gadolinium-laced texaphyrin into 39 brain tumor patients, 2 hours before each radiation treatment, over a period of 10 days. The patients, who were expected to live only 2 to 4 months, survived on average 188 days. Those in the highest dose group survived almost a year, on average. “There are people alive who would ordinarily be dead,” Sessler says.


    Holding a Nitrogen Grudge

    1. David Malakoff
    1. David Malakoff is a writer in Bar Harbor, Maine.

    Philosophers may still debate whether people are prisoners of their pasts, but many ecologists are now convinced that forests are captives of a biochemical memory in the soil that nourishes them. At the meeting, researchers presented provocative new evidence suggesting that land use decades ago dictates how forests today absorb and release nutrients.

    John Aber, a biogeochemist at the University of New Hampshire, Durham, described how unexpected results from a study of New England forests prompted him to brush up on local history. In 1989, Aber began to probe why some forests stockpile nitrogen compounds, absorbing these crucial nutrients and storing them up in leaves and stems, while others discharge them into ground water. Aber hoped his work might help researchers predict which forests will leak nitrate—a potentially toxic drinking water pollutant. The work could also help identify which forests might be harmed by storing too much nitrogen, now being pumped into the environment in unprecedented amounts by fossil-fuel burning and fertilizer use.

    When Aber began his study, the conventional wisdom was that forests near cities, where nitrogen deposition is high, were like saturated sponges: Adding more nitrogen would produce more nitrate leaks. Unpolluted forests, however, were considered less saturated and able to soak up more nitrogen. To test the idea, Aber added nitrogen to a relatively pristine deciduous forest plot near Orono, Maine, and to two “saturated” plots—a pine grove and an oak stand—in Harvard University's experimental forest in Petersham, Massachusetts. Contradicting theory, Aber found that even when the biologically available nitrogen load of the Harvard oak stand was tripled, it released no nitrate for 8 years. Meanwhile, the Maine plot—predicted to have plenty of nitrogen storage capacity—immediately began leaking.

    These findings also defied another prediction: that deciduous forests should take up and use more nitrogen than evergreen groves, because deciduous trees need the element to grow a fresh crop of leaves each spring. The Harvard forest's oak stand, however, used less nitrogen than did its pine grove. “The results were a really big surprise,” Aber says.

    Aber then sought an explanation in the history books. He realized that the Harvard evergreens grew on a century-old crop field that had been regularly plowed and fertilized with manure, which enriched the soil with nitrogen. But the nearby oak stand had been cut periodically and burned by wildfires. “Land use explains the results,” he says. “The pine forest is like a saturated sponge, so it produced more nitrate. But the oak plot was squeezed hard, the nitrogen depleted by harvest and fire.” That explains why the oak stand stored more nitrogen than did the pine grove or the similar deciduous plot in Maine, which had not suffered such a disturbance.

    The findings suggest that land-use history deserves the increased attention it is starting to get in ecology, says Scott Bailey, a biogeochemist with the U.S. Forest Service's Northeast Research Station in Durham, New Hampshire. Adds Aber: “We have too long underestimated the fact that the landscape has a very long memory.”


    Prodding Cells to Make Proteins

    1. Gretchen Vogel

    From 12 to 17 February, some 5400 people descended on Philadelphia for the annual meeting of the American Association for the Advancement of Science (AAAS, which publishes Science), celebrating its 150th anniversary this year. President Bill Clinton addressed a packed hall, unveiling Neil Lane as his next science adviser and Rita Colwell as the next NSF director (Science, 20 February, p. 1122). But there were more reasons to celebrate: symposia on everything from the earliest Americans to martian life-forms, two of the topics featured in this special news section.

    Like polite passengers on a jammed airplane, cells in tissue culture avoid taking up more space when they brush up against nearby cells: They stop growing. How cells translate a neighbor's touch—and other mechanical forces—into biochemical signals has long mystified scientists, however.

    At the AAAS meeting, Donald Ingber of Children's Hospital and Harvard Medical School in Boston offered new insight into this puzzling process. He and his colleagues have observed cells hastily setting up protein factories after their internal skeletons are pulled or twisted by forces applied to their surfaces. The findings, in press at Nature, could lead to a better understanding of how cells grow differently under microgravity and may also aid researchers trying to create better artificial tissues such as blood vessels and, eventually, entire organs.

    Ingber has long championed the idea that the cell's internal skeleton is an intricate tension-bearing system that translates forces through the cell. His team had shown in previous work that tugging on particular parts of the cell's surface can rearrange the nucleus, causing structures called nucleoli to line up (Science, 2 May 1997, p. 678). The latest experiments “take that a step further,” says cell biologist Michael Sheetz of Duke University Medical Center, showing that mechanical forces on the cell's internal skeleton directly influence protein assembly.

    To probe a cell's response to mechanical forces, the scientists coated magnetic beads with fibronectin, a protein that specifically attaches to integrin receptors—cell surface proteins that reach through the cell membrane to the cell's internal skeleton. They allowed the beads to attach to receptors on human endothelial cells, which line blood vessels. As soon as the cells bound to the beads, they set up what Ingber calls “microcompartments” for protein synthesis—regions near the integrin receptor in which ribosomes and other protein-synthesis molecules were observed to gather. Applying a magnetic field that twisted the beads recruited more protein machinery, whereas chemicals that disrupt cytoskeletal tension inhibited the recruitment. In another set of experiments about to be submitted for publication, Ingber and his colleagues found that twisting the integrin receptor also revved up the production of proteins in the cell nucleus that help to regulate gene expression, while tugging at other receptors had no such effect.

    The work demonstrates one way that cells might sort through multiple competing signals, Ingber says. For example, although groups of cells or tissues may be exposed to the same set of growth factors, only those cells under a certain amount of tension might respond, he says. The work could have practical implications as well. For instance, understanding how a lack of gravity affects internal cell tension will be crucial for studying cell growth in space, says Peter F. Davies of the Institute for Medicine and Engineering at the University of Pennsylvania. In addition, he says, the work could help bioengineers design better scaffolding to support artificial tissues.


    Preventing a Mars Attack

    1. Martin Enserink
    1. Martin Enserink is a science writer based in Amsterdam.


    The supposed nanofossils riddling the famed meteorite from Mars may seem harmless enough, but consider this nightmare scenario: What if the next rock to arrive from Mars carries live martian microbes able to infect and decimate plants, animals, or people? Such fears, however remote, are high on the agenda at NASA, which is preparing a $500 million mission that would bring about 0.5 kilogram of martian soil and rocks back to Earth in 2008. At the meeting, scientists and NASA officials outlined their plans for containing any unwelcome visitors lurking in martian samples.

    Nobody disputes the scientific benefits of bringing a chunk of Mars to Earth. Remote robots can't perform all the analyses scientists are eager to do, and it's unlikely they could settle the big question: Did Mars ever harbor life? “We'll never be satisfied until we bring a rock back, crack it open, and say ‘Aha!’—or not,” says Wesley Huntress, NASA's associate administrator for space science.

    Red alert?

    Bringing a martian sample home in 2008 could be perilous.


    But when that rock arrives 10 years from now, it will get a reception worthy of the Ebola virus. No stranger to biosafety, NASA in the 1960s set up a quarantine lab for the astronauts on the Apollo 11 to 14 moon-landing missions and the roughly 100 kilograms of lunar material they brought back. Many scientists deemed these precautions superfluous—the moon seemed as dead as a celestial body could get—so they were dropped on later moon missions.

    Now, the stakes are higher. Tougher environmental regulations governing imported materials, along with the controversial meteorite nanofossils reported last year, are making NASA think hard about protecting earthlings from any martian bugs. There's also a strong scientific argument for a quarantine: Without it, earthly microbes could contaminate the sample, complicating the search for extraterrestrial life.

    At the meeting, Jonathan Richmond sketched how NASA plans to handle such samples. Richmond is director of the Health and Safety Office at the Centers for Disease Control and Prevention (CDC) and a member of NASA's Mars Sample Containment Protocol Subgroup. At the landing site on Mars, he said, robots will scoop up a sample “about the size of a baked potato” and place it in a double-layered canister that will be hermetically sealed and sterilized on the outside before leaving Mars. If the canister should break or leak on the flight home, its contents will be sterilized, or the spacecraft may be diverted away from Earth.

    Upon arrival, the canister will be taken to a planned Mars Receiving Laboratory and placed in a low-pressure biological safety cabinet or “glove box.” The outer shell, which by then would be considered contaminated by Earth, will be resterilized. Scientists will sample the Mars atmosphere between the shells, after which the outer shell will be stripped off and the inner one moved to a second glove box. There, the last barrier will be broken and the sample studied.

    As elaborate as these precautions may seem, no one can guarantee they will suffice. For one thing, nobody knows what steps might be needed to contain an extraterrestrial bug. “One can read The Andromeda Strain and get ideas about totally different microbes that can dissolve rubber and glue,” Richmond admits. “If that's the case, we'll have to have some pretty good disinfectants available.” To most scientists, however, that scenario seems farfetched. The chances of finding anything living are remote, and the likelihood of martian critters posing a health threat is “very, very slim,” contends microbiologist Rita Colwell, tapped earlier this month to head the National Science Foundation. The most important rationale for biocontainment, she says, is to protect the sample—not the planet—from contamination.

    But society might see things differently, warns ecologist Margaret Race of the Search for Extra-Terrestrial Intelligence Institute in Mountain View, California. “Complete openness” about the risks and uncertainties of the mission will be essential, she noted—but even that will probably not prevent news stories warning of martian bugs on the rampage. “We can guarantee that the National Enquirer is going to love this one,” she says.


    Population Growing Pains

    1. Jocelyn Kaiser


    Does adding more people to the planet make society any worse off? Lately economists have tended to reject gloom-and-doom scenarios of impending environmental catastrophe, concluding that population growth should only slightly perturb living standards. At the meeting, however, two economists unveiled a new analysis suggesting that more people do impose a cost on society—one that could run thousands of dollars a head. If the analysis holds up in this hotly debated area of economics, the results could be “extremely important,” says population biologist Joel Cohen of Rockefeller University in New York City. “It makes clear the interest of the present generation in slowing population growth.”

    Economists William Nordhaus and Joseph Boyer of Yale University took a fresh look at the old question of how population growth might affect a country's economic well-being. They used global data for 13 regions on gross domestic product, capital, and population from the 1960s to the 1990s to project the costs to society of each additional person, years into the future. The team went a step further than other economists have by estimating, for the first time, the economic costs of a slightly warmer planet as each person's activities spew out more greenhouse gases. Also unlike previous models, the researchers analyzed how parental decisions about family size might affect the economic well-being of not just the current generation but future generations, through the 22nd century.

    Per capita.

    Population growth may jack up societal costs by 2200.


    That approach “makes all the difference in the world,” Nordhaus says. When the duo counted only the cost to the current generation in their model, the net cost to society per extra person was close to zero. But when they included eight or so subsequent generations, the net cost per additional person soared to about $100,000 in the richest countries and $2500 in developing regions such as sub-Saharan Africa. Most of these costs arose from diminishing returns as land and capital were divvied up among descendants. The portion of the cost due to increased greenhouse gases was surprisingly small, Nordhaus says—only a few percent.

    Because the model contains many uncertainties, such as the economic toll of global warming, Nordhaus says he'd like to spend the next few years refining it before he'll be ready to “make earth-shattering pronouncements.” But if the analysis holds up, experts say, it may add incentive for countries to adopt policies that encourage people to have smaller families.
