News this Week

Science  15 Jan 1999:
Vol. 283, Issue 5400, p. 302
  1. U.S. R&D SPENDING

    Computing, NSF to Get Top Billing in 2000 Budget

    1. David Malakoff*
    *With reporting by Jeffrey Mervis and Eliot Marshall.

    An initiative to boost computing research is expected to be the science highlight of an otherwise lackluster fiscal year 2000 budget proposal that President Bill Clinton is preparing to send Congress on 1 February.

    Administration officials have been warning for months that a harsh budget climate might freeze civilian research and development spending. But Science has learned that plans for the year beginning 1 October 1999 will include moderate increases for selected science agencies. Growth at the National Institutes of Health (NIH) would be minimal after a record-breaking increase in 1998, however, and military spending on basic research at universities could slump under the proposed budget, whose prospects are unusually fluid.

    Although the request is still being fine-tuned, informed sources say the White House is expected to ask Congress to give the National Science Foundation (NSF) the largest percentage increase of any basic research agency—some 6%, or roughly $200 million, to nearly $3.9 billion. The budgets of several other major research agencies, however, would barely keep pace with inflation. Insiders expect Clinton to ask for a 2.2%, $328 million increase for the $15.6 billion NIH, for example, and about a 3%, $200 million increase for the Department of Energy's (DOE's) $7 billion research portfolio.

    At the Department of Defense, science lobbyists say the Administration may propose cuts in basic research that exceed 14%. The drop would come despite an overall rise in defense spending of $12 billion, to $296 billion, the first major increase in 15 years. Cuts in the military science budgets, which provide up to half the government funds received by university math and engineering departments, “would be a serious concern for the research community,” says Michael Lubell of the American Physical Society in College Park, Maryland.

    Clinton is scheduled to preview the budget next week in his State of the Union address. But a constellation of factors makes the outcome of this year's budget debate over R&D spending extremely volatile. The biggest unknown, analysts say, is how a sharply partisan Congress, already rent by impeachment, will decide to spend a budget surplus now expected to reach $76 billion. The White House has set aside the surplus, which appeared last year for the first time since 1969, to shore up the Social Security retirement system, which is predicted to become insolvent in the 2030s. Some Republicans, however, want to use a portion of the excess funds to provide a tax cut, but narrow GOP majorities in both House and Senate make such a rebate a long shot. A stalemate could throw open the door to a last-minute spending spree similar to the one last fall that produced a 15%, $2 billion increase for NIH and swelled other research budgets (Science, 23 October 1998, p. 598). “Failure to get a Social Security deal could open the spending floodgates,” says one congressional aide.

    Any move to spend some of the surplus on government operations, however, would require Congress and the Administration to take the politically risky step of openly exceeding spending caps imposed by the 1997 Balanced Budget Act. In 2000, the act limits total discretionary spending—the one-third of the overall $1.8 trillion federal budget not tied up in mandatory payments such as medical assistance to the elderly and interest on the federal debt—to $571 billion, $5 billion less than this year. Staying within the cap, says Lubell, “means tighter times for science,” all of which falls within discretionary spending.

    An additional complication is the expiration of a ban on moving funds between the military and civilian parts of the budget. The so-called “firewall” was erected several years ago to prevent Congress from transferring military funds to civilian programs at the end of the Cold War. Ironically, however, its abolition has raised the prospect that lawmakers intent on boosting military readiness could now raid more vulnerable civilian programs, including research budgets.

    Despite the uncertainty, Administration officials seem ready to make a high-profile push for the new computing initiative—which is so far nameless but reportedly has the backing of Vice President Al Gore. The initiative, which will initially involve NSF, DOE, NASA, and the Defense Advanced Research Projects Agency, has its origins in an August 1998 report issued by a White House task force. It recommended that the government add $1 billion over 5 years to the estimated $1.5 billion a year it now spends on information technology research (Science, 21 August 1998, p. 1125). The panel said the increase is needed to revitalize basic research on software, hardware, and computer networks, and to maintain U.S. leadership in the field.

    Although the White House budget request is likely to fall short of the $200 million the panel recommended, an expected $150 million in new funds for NSF would be “a very positive start,” says computer scientist Ken Kennedy of Rice University in Houston, Texas, who co-chaired the panel. Congressional aides say their bosses are likely to respond favorably, although the plan's ties to Gore could be a problem for Republicans wary of giving the presidential candidate any campaign fodder. Indeed, aides say that “Gore's fingerprints” could imperil several marine science initiatives touted by the vice president at a major oceans conference last year and proposed by the National Oceanic and Atmospheric Administration.

    Administration officials are expected to have few other new science initiatives to tout, however. At NSF, for instance, a request for $40 million toward a heavily instrumented $70 million jet to study the upper atmosphere was denied by White House budgeteers to make room for the computer initiative. In addition, agency officials have again shelved plans for a $25 million Polar Cap Observatory in northwest Canada after being thwarted for the past 2 years by Senator Ted Stevens (R-AK).

    Congress isn't likely to add funds for such projects. But lawmakers can be counted on to find extra funds for NIH, seen by pinched budgetmakers as having earned several years' worth of increases last year. With strong backers in key positions on the appropriations committees and broad support from a host of lobbying groups, who are calling for another 15% increase, NIH's budget traditionally emerges from Congress fatter than it arrived—no matter which party is in power. This year is likely to be no exception, although some legislators question whether biomedical bureaucrats could effectively spend another major windfall. “A 15% increase [for NIH] would be an even larger victory this year” than last, says a congressional aide.

  2. PALEOANTHROPOLOGY

    Did Early African Hominids Eat Meat?

    1. Gretchen Vogel

    Food is one of modern humans' all-consuming passions—and that was perhaps even more true for our early ancestors, who had to work much harder for their calories. But exactly what delicacies tempted the early hominid palate has long been a subject of debate, fueled by the fact that anthropologists had to infer ancient diets from indirect evidence such as tooth wear and jaw and tooth shape. Now on page 368, researchers use a clever new method based on the chemical makeup of teeth to determine the kinds of food an early hominid ate in African woodlands 3 million years ago.

    Paleoanthropologist Julia Lee-Thorp of the University of Cape Town in South Africa and graduate student Matt Sponheimer of Rutgers University in New Brunswick, New Jersey, examined carbon isotopes in the tooth enamel of Australopithecus africanus, a small-brained hominid that walked upright but was probably also at home in the trees. Researchers thought that this species subsisted on forest fruits and leaves, but the isotopic clues show that it ate a varied diet, including either grassland plants or animals that themselves fed on grasses.

    Other researchers are excited about the work. “The data are just fascinating,” says paleoanthropologist Margaret Schoeninger of the University of Wisconsin, Madison. Adds paleoanthropologist John Kingston of Yale University: “This [direct analysis] is what we want to see.” Many theories of human origins invoke a switch to a meat-rich diet to explain the sudden swelling of brain power in our own genus, Homo; the new data raise the possibility that meat-eating is not the exclusive province of Homo but a strategy adopted by more primitive species too.

    The isotope analysis offers a glimpse into ancient animals' diets and habitats, because different kinds of plants use carbon slightly differently. Trees, bushes, and shrubs, called C3 plants, select against the heavier isotope, carbon-13 (13C), when they convert carbon dioxide into sugars and tissues. C4 plants such as tropical grasses and sedges, on the other hand, use 13C more easily and have more of it in their tissues. Herbivores incorporate the isotopic signature of these plants into their bodies, and meat eaters absorb the signature of their prey.
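
    The reasoning behind the analysis amounts to a standard two-endmember mixing calculation: given the δ13C values typical of a pure C3 feeder and a pure C4 feeder, an animal's enamel value fixes the proportion of its dietary carbon drawn from each source. The short Python sketch below illustrates the idea only; the endmember and specimen values are invented for demonstration, not numbers from the study.

    # Two-endmember carbon-isotope mixing: a sketch of the logic described
    # above. All delta-13C values are illustrative assumptions, not data
    # from the paper.
    C3_D13C = -12.0  # assumed enamel delta-13C (per mil) for a pure C3 (forest) feeder
    C4_D13C = 1.0    # assumed enamel delta-13C (per mil) for a pure C4 (grassland) feeder

    def c4_fraction(d13c):
        """Estimate the fraction of dietary carbon from C4 sources by linear mixing."""
        frac = (d13c - C3_D13C) / (C4_D13C - C3_D13C)
        return min(max(frac, 0.0), 1.0)  # clamp to the physically meaningful range

    # Hypothetical specimens: browsers cluster near the C3 endmember,
    # grazers near the C4 endmember, and a mixed feeder falls in between.
    for name, d13c in [("browser", -11.0), ("grazer", 0.0), ("mixed feeder", -6.0)]:
        print(f"{name}: ~{100 * c4_fraction(d13c):.0f}% C4-derived carbon")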

    To find out what A. africanus ate, Sponheimer and Lee-Thorp compared the carbon isotope ratios of four hominid specimens with those of 19 other creatures found in a bone-filled cave about 325 km north of Johannesburg. The data fell into three clusters. One group of animals, including a three-toed horse and a warthog, had relatively high ratios of 13C to 12C, marking them as grassland feeders. Another group, including a rhinoceros and an impala, had low ratios and probably got most of their food from the forest. In the middle were the scavenging hyenas—and the hominids. Thus A. africanus must have gotten at least some food from eating grass, grass seed, or the meat of grass-eating animals. “Maybe their hearts and homes were in the trees,” says Sponheimer, “but their bellies were tied to the open areas.”

    And they may have been filling those bellies with meat, although they lived half a million years before the first known meat-eating humans. The tooth wear patterns of A. africanus lack the telltale scratches of a grass eater, so the isotope data suggest that it ate some sort of grass-eating animals, says isotope geologist Paul Koch of the University of California, Santa Cruz. No one is suggesting australopithecines ate like the hyenas, but they could have hunted small animals or scavenged carcasses, he says.

    Schoeninger accepts the isotope ratio data but is not so sure that A. africanus ate meat. She notes that the early hominids thought to have eaten meat, 1.8-million-year-old Homo specimens found in East Africa, had much smaller teeth and chewing muscles. To her, A. africanus's big teeth and powerful jaw suggest that it mainly ate nuts, cracking them open with its teeth. The extra 13C could have come from grass seeds or grass-eating insects, she says. Isotopic ratios of other elements, such as oxygen or strontium and calcium, might eventually separate the carnivores from the herbivores, she says.

    Whatever they were eating, the work shows that A. africanus spent some time in open areas rather than in dense forests, although it was apparently adapted for climbing. And clearly the hominids were willing to try a range of foods: They had a wider range of isotope values than all but one of the other animals. These hominids, although they may not have been our direct ancestors, apparently possessed one of the key traits of our lineage, says anthropologist Jeffrey McKee of Ohio State University in Columbus: “They were adaptable. They weren't specialized animals.”

  3. AIDS

    T Cell Production Slowed, Not Exhausted?

    1. Michael Balter

    Ever since the very early days of the AIDS epidemic, almost 2 decades ago, researchers have recognized the disease by its signature symptom: a progressive loss of CD4 T lymphocytes, the primary immune cell targeted by HIV. Yet just how HIV causes T cell depletion is still the subject of vigorous debate. Does it destroy T cells so quickly and efficiently that the immune system exhausts itself trying to replace them? Or does it disrupt the immune system's ability to produce T cells in the first place? Now, in the January issue of Nature Medicine, a team led by immunologist Joseph McCune of the University of California, San Francisco, and endocrinologist Marc Hellerstein of UC Berkeley reports results obtained by using a new technique that for the first time provides a direct measure of how many new cells are produced over a given time period. The findings, the team says, support the notion that HIV's most important and insidious talent is to interfere with T cell production.

    For some researchers, the new paper essentially resolves the controversy. Immunologist Giuseppe Pantaleo of Vaudois Hospital Center in Lausanne, Switzerland, told Science the results show that although HIV may be destroying some T cells, “this is not sufficient to explain the loss. … There is no exhaustion of the [production] machinery.” In an accompanying article in Nature Medicine, Pantaleo summarily declares that the study “puts an end to four years of exciting (although often harsh) debate” over the issue. Not so fast, counters David Ho, director of the Aaron Diamond AIDS Research Center in New York City and a prominent exponent of the “immune exhaustion” model. He argues that the results do not necessarily contradict his model. “The jury is still out,” Ho says.

    To tackle the question, the UC researchers used an innovative method they first described last year (Science, 20 February 1998, p. 1133). They intravenously infused subjects with a solution of glucose—a precursor of deoxyribose, one of the chemical building blocks of DNA—in which the glucose molecules contain deuterium, a nonradioactive isotope of hydrogen. They then took blood samples at various times after the infusion had ended. As T cells divided, the deuterium-labeled DNA was progressively replaced by unlabeled DNA, allowing the team to calculate the production rate of new cells as well as their average life-span. The team conducted this test on three groups of subjects: uninfected controls, HIV-infected patients undergoing antiviral therapy, and infected patients not yet receiving therapy.
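
    In outline, the measurement is a label die-away experiment: once the infusion stops, continued division dilutes the deuterium-labeled fraction of T cell DNA, and the speed of that dilution reflects the pool's turnover. The sketch below is a deliberately minimal version of the arithmetic—assuming a one-compartment exponential model and invented numbers, far cruder than the team's published analysis—in which fitting the labeled fraction f(t) = f0 exp(−kt) yields a turnover rate k, an average life-span of 1/k, and, for a steady-state pool, a production rate of k times the pool size.

    import math

    # Hypothetical labeled fractions of T cell DNA on days after the
    # deuterated-glucose infusion ends (invented numbers, for illustration).
    samples = [(0, 0.080), (7, 0.064), (14, 0.051), (21, 0.041)]

    # Fit f(t) = f0 * exp(-k*t) by least squares on log(f).
    n = len(samples)
    st = sum(t for t, _ in samples)
    sy = sum(math.log(f) for _, f in samples)
    stt = sum(t * t for t, _ in samples)
    sty = sum(t * math.log(f) for t, f in samples)
    k = -(n * sty - st * sy) / (n * stt - st * st)  # turnover rate, per day

    lifespan = 1.0 / k     # average life-span of the cells, in days
    pool = 500.0           # assumed CD4 count, cells per microliter of blood
    production = k * pool  # production needed to hold the pool steady

    print(f"turnover k = {k:.4f}/day; mean life-span = {lifespan:.0f} days; "
          f"implied production = {production:.1f} cells/uL/day")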

    The team found that the average T cell life-span in untreated HIV-infected patients was one-third that in controls, consistent with a certain amount of cell killing by HIV. But the T cell production rate was no higher than in controls—not the elevated rate that would be expected if the immune system were working overtime to replace the destroyed cells. And patients taking antiviral drugs had higher T cell production levels than both the control and untreated groups, the opposite of what would be expected if increased production were simply a response to T cell destruction by HIV. Instead, the authors propose, antiviral therapy leads to a “disinhibition” of the production machinery.

    “The thesis of our paper is that HIV affects the system of production more than it induces destruction of mature T cells,” McCune says. But some researchers believe this conclusion is premature. For example, immunologist Angela McLean of Britain's Institute for Animal Health in Compton says that the new technique may underestimate the actual rate of T cell production in the untreated patients, especially if new cells become infected by HIV and die before they can be counted.

    Ho, while lauding the new methodology as “a powerful new technique,” says that even if the untreated HIV-positive patients in the UC study did not produce greater numbers of T cells than uninfected controls, this does not necessarily contradict his model of immune exhaustion, because these patients had much lower CD4 counts than the HIV-negative group. This means, Ho says, that the same production rate would represent a much faster turnover of their total T cell pool. “The burden [to the immune system] is much greater.”

    McCune says that if HIV is indeed interfering with production of new T cells, the findings might point to new strategies for enhancing this production. For example, T cells could be cultured outside the body for reintroduction later in the disease, or patients could be given signaling molecules called cytokines to prompt immune system cells to divide. But whether these measures are warranted depends on discovering what HIV is actually doing to the immune system. The new methods, researchers say, are an important step in that direction. Says McLean: “We may now have a technique that can eventually bring the jury in.”

  4. NEUROBIOLOGY

    How Stimulant Drugs May Calm Hyperactivity

    1. Jean Marx

    To calm down children diagnosed as hyperactive, pediatricians give them small doses of stimulants—drugs such as the widely prescribed Ritalin. It seems paradoxical. And the paradox is even more vivid to neuroscientists, who know that high doses of these drugs raise brain levels of dopamine, a neurotransmitter that promotes activity. New results from a research team at Duke University Medical Center in Durham, North Carolina, now point to a surprising explanation for the drugs' puzzling effectiveness: Rather than working through dopamine, low doses may instead raise the concentrations of another neurotransmitter, serotonin, known to have calming effects.

    This conclusion comes out of a series of experiments that the Duke team, led by neurobiologists Raul Gainetdinov and Marc Caron, performed on a strain of genetically altered mice that have behavioral symptoms similar to those of children with attention-deficit hyperactivity disorder (ADHD). As the team reports on page 397, treatment with psychostimulant drugs, including methylphenidate (Ritalin), relieves the symptoms in the mice, apparently without affecting brain dopamine concentrations. Compounds that boost serotonin in the brain have similar effects, a result suggesting that this neurotransmitter is instead the one that tones down the animals' hyperactivity.

    ADHD expert Russell Barkley of the University of Massachusetts Medical Center in Worcester describes the work as “an impressive series of studies, done very carefully.” Many ADHD researchers are not convinced, however, that the mouse results apply to human patients. They think that in humans the stimulants probably work through dopamine in some way—perhaps by raising levels so high that neurons become inured to it. But if humans do respond in the same way as the mice, researchers might be able to design new ADHD therapies that work by mimicking serotonin's effects.

    The new experiments use a mouse strain that Caron and his colleagues created 3 years ago by inactivating the gene for a protein called the dopamine transporter (Science, 16 February 1996, p. 909). The transporter, DAT for short, has the job of picking up dopamine released by nerve activity and transporting it back into the neurons that produced it. Because extracellular dopamine concentrations remain high in the brains of the knockout animals, they are much more active than normal mice. “They're hyperactive for a good reason,” Caron says. “They have lots of dopamine.”

    Indeed, there is some reason to think that a DAT abnormality could contribute to ADHD. At least three genetic linkage studies have picked up an association between a particular variant of the gene that encodes DAT and the condition. And Gainetdinov, Caron, and their colleagues now have evidence that their knockout mice closely mimic the symptoms of hyperactive children. Novel environments, for example, can exacerbate the symptoms of ADHD in children. Similarly, the knockout mice were some 12 times more active than normal mice when first placed in new surroundings, and their high activity levels persisted for the entire 4-hour observation period.

    Hyperactivity in children is also associated with an attention deficit. So the team tested the mice for a similar problem by putting them in a maze consisting of eight arms radiating from a central area, each of which is baited with food. The mice are supposed to learn to proceed from one arm to the next without backtracking—a task the normal animals learn within a half-dozen or so test sessions. “But the DAT knockouts don't ever learn,” Caron says. Instead, they become distracted, returning to arms they've already visited, or rearing up and looking around the maze.

    But methylphenidate and the other psychostimulant drugs tested had a dramatic effect on the knockout mice. The drugs calmed their hyperactivity in the novel environment. And they did this without measurable effects on brain dopamine concentrations. “This means that [the drugs] hit some other system” in the brain, Caron says.

    Further tests pointed to the serotonin system. Although the researchers didn't measure brain serotonin levels, they found that giving the animals a serotonin mimic had effects similar to those of the psychostimulants. So did giving them Prozac, an inhibitor of the serotonin transporter, or administering other treatments designed to raise brain serotonin levels, such as injections of 5-hydroxytryptophan or L-tryptophan, compounds that are converted to the neurotransmitter in the brain. These results indicate, Caron says, that the psychostimulants calm the hyperactive mice by raising serotonin concentrations to balance the animals' high brain dopamine levels.

    The next goal, he says, is to try to identify which of the 15 or so receptors through which serotonin exerts its effects in the brain might be involved in the response to the psychostimulants. It might then be possible to design drugs for treating ADHD children that act on just that receptor. Such drugs might have fewer of the drawbacks of the psychostimulants, such as reduced appetite and possible drug dependence.

    Other researchers who work on the condition caution, however, that although methylphenidate may work by raising serotonin levels in the DAT knockouts, the mouse model may not reflect what is happening in humans. Not everyone has been able to confirm the linkage between the variant DAT gene and ADHD, and even if the linkage does hold up, a defective DAT gene won't be the whole explanation for ADHD, which most researchers, including the authors of the paper, believe is caused by defects in several different genes.

    What's more, any DAT defect in humans will likely be far less severe than that in a complete knockout. “A mouse that's never had a dopamine transporter is a very unusual creature,” says Xavier Castellanos, who studies ADHD at the National Institute of Mental Health. “I'd be cautious about extrapolating to the idea that Prozac will help [ADHD children].” Indeed, some researchers have tried Prozac in ADHD children with what Barkley describes as “not very impressive” results.

    Still, Barkley says, the Caron team's results raise the “very tantalizing hypothesis” that there might be a subtype of ADHD due to an altered DAT, which might respond to Prozac or other compounds that raise brain serotonin levels. The work is spawning fresh ideas about the disorder and how to treat it, agrees James Swanson of the University of California, Irvine. “This is exactly what animal models are for.”

  5. SCIENTIFIC EXCHANGES

    DOE Blocks Physicists From Indian Meeting

    1. James Glanz

    CHICAGO—The U.S. Department of Energy (DOE) has refused to allow physicists at its national laboratories to travel to a major particle physics conference in India this week in apparent retaliation for that country's series of nuclear tests last May. At least seven physicists from the Fermi National Accelerator Laboratory and Argonne National Laboratory, both outside Chicago, were denied requests for travel to the 13th Topical Conference on Hadron Collider Physics, which began yesterday at the Tata Institute of Fundamental Research in Mumbai (formerly Bombay). Scientists say that such bans were not imposed even at the height of the Cold War for similar travel to the Soviet Union and China.

    The decision, which does not affect U.S. university researchers, has generated confusion and shock among physicists and lab officials, who fear they have become pawns in a political chess match. The conference “has nothing to do with weapons,” says Daniel Green, a Fermilab physicist scheduled to give an overview talk on particle experiments at the conference before DOE denied his travel request. “We never even did this with the Russians at the worst part of the Cold War,” says John Peoples, Fermilab's director. “This is a precedent.”

    Just as galling, according to Peoples, was a separate decision last summer not to allow Indian physicists from the Tata Institute to continue their collaboration on the D0 project at Fermilab—even though the Indian government has already contributed half a million dollars' worth of hardware for detectors, which track collisions of subatomic particles. In a Kafkaesque touch, Peoples says that DOE also ordered him—in writing—to remove the Indian flag from the United Nations-like display in front of Fermilab's main building. “I think that a public outcry is called for,” says Andrew Sessler of Lawrence Berkeley National Laboratory in California, who is president of the American Physical Society. The Indian organizer of the conference, Tata's Vemuri S. Narasimham, called the decision “scientific harassment” and said the invitees “will certainly be missed.”

    Tata, one of the country's most prestigious scientific institutes, was placed on a list of restricted sites because it conducts joint research with the Bhabha Atomic Research Center, “which is at the heart of India's nuclear weapons program,” says an official at the State Department, which made the final decision. “For this reason, we concluded that participation by [DOE] lab scientists in a conference sponsored by the Tata Institute was not appropriate,” he says. “We're very disappointed,” says Harry Weerts, a D0 spokesperson at Michigan State University in East Lansing who will speak at the conference. “Our research has nothing to do with the people who make atomic bombs.”

    Scientists at national labs must receive approval for any foreign travel. The first hint of possible trouble came last fall when Fermilab's Rajendran Raja, an Indian-born U.S. citizen, was asked by D0 colleagues to submit his request as a “test case.” After some delay, Raja and other physicists were told late last month that their requests had been denied because of Tata's status in the wake of the nuclear tests.

    “This will alienate the Indian scientific community, which is largely pro-U.S.,” says Raja, who has received permission from Peoples to travel on his own time to give his talk and visit family. “And it'll strengthen the hard-liners in India.”

  6. FREEDOM OF INFORMATION

    Scientific Leaders Balk at Broad Data Release

    1. Bruce Agnew*
    *Bruce Agnew is a writer in Bethesda, Maryland.

    Prompted by concerns that the public may soon get access to all federally funded researchers' records, Congress, the Office of Management and Budget (OMB), federal research agencies, and the scientific community are on the verge of a great debate over the meaning of the word “all.” By the end of this month—possibly within a few days—OMB is expected to take the first step toward implementing a new congressional mandate that “all data” produced with federal funding be given to anyone who seeks them under the Freedom of Information Act (FOIA) (Science, 6 November 1998, p. 1023).

    But even before OMB formally launches the debate—it plans to issue a Notice of Proposed Rule Making and ask for public comment—science policy officials have begun building a case against too freewheeling an interpretation of the new law. Together with key congressional allies, they warn that federally funded research could be hobbled if scientists are forced to disclose experimental results before they are published in a scientific journal, before the end of an ongoing clinical trial, in a form in which human research subjects might be identifiable, or in a manner in which confidential data provided by collaborators might be revealed, among other issues.

    Officials are urging scientists and scientific organizations to weigh in with their own worries. The Association of American Universities and the Council on Governmental Relations already have done so. In a 4 December 1998 letter to OMB, they not only cautioned against too-broad FOIA disclosure but also asked for clarification of who should bear the potentially “significant” costs involved.

    National Institutes of Health (NIH) director Harold Varmus calls the provision “a potential burden—even a threat—to our investigators.” A National Science Foundation (NSF) spokesperson says it could create “tremendous burdens” for both scientists and the agency. “Most of us believe there should be procedures in place and understanding of how you do share data,” says Wendy Baldwin, NIH deputy director for extramural research. “The problem is, this is too blunt an instrument.”

    Not just blunt, but sudden. The FOIA provision, sponsored by Senator Richard Shelby (R-AL), was written into last October's massive omnibus appropriations bill at virtually the last moment, replacing language that originally ordered OMB simply to study the issue. The new 106th Congress could back away from Shelby's sweeping new FOIA requirement, and some members say it should. Representative George Brown (D-CA), ranking Democrat on the House Science Committee, last week introduced legislation to repeal the provision and start over—with hearings to determine what the data-access problem really is. “Documentation of this problem has been no more than anecdotal,” said Brown in a statement. “We should not jeopardize this [government-sponsored research] enterprise by taking a hasty, ill-considered approach to remedy an alleged problem.”

    Even though FOIA provides exemptions for personal and proprietary data, Brown said individuals and companies might shy away. “Significant loss of voluntary participation in public health and biomedical research would be devastating,” he said, adding that mandating release of all data “would undermine” intellectual property protections.

    In addition, 22 other House members—six Republicans and 16 Democrats—joined Brown in a 7 December 1998 letter to OMB director Jack Lew warning of “a number of negative, unintended consequences.” Co-signers include Representatives Vernon Ehlers (R-MI), an influential Science Committee member, and John Porter (R-IL) and James Walsh (R-NY), who chair the House Appropriations subcommittees that oversee the budgets for NIH and NSF. They might revisit the issue in this year's appropriations bills.

    But congressional aides caution against betting that lawmakers will undo the FOIA requirement as nonchalantly as they enacted it. Some members want to let the process work and see if OMB can come up with a rule that the scientific community can live with, says one Science Committee staffer. That doesn't seem likely. No matter how imaginative OMB may be, it will be hard to soften the meaning of the words “all data.”

  7. GENOMICS

    India Prepares Research, Policy Initiatives

    1. Pallava Bagla

    CHENNAI, INDIA—Senior government officials and scientists last week endorsed a series of steps to bring India into the mainstream of global genomics research. But the proposals—which include studies of human diversity and plant genomes, participation in an international rice genome project, and new laws to permit the patenting of novel genes—seem likely to face vocal opposition on social, ethical, and political grounds.

    Gene pool. James Watson (left) meets with geneticist Sharat Chandra and Manju Sharma at the Genome Summit. [CREDIT: P. BAGLA]

    “Biotechnology will be the key to India's future, both for modern agriculture and the pharmaceutical industry,” Nobelist James Watson, president of New York's Cold Spring Harbor Laboratory, told some 5000 Indian scientists gathered last week at a daylong Genome Summit held as part of the annual Indian Science Congress here. “India should take DNA technologies far more seriously if it does not want to be left behind.” At the same meeting, Manju Sharma, secretary of the government's Department of Biotechnology, promised to strengthen “our nascent genomics program … so that India can put the right foot forward into the next millennium.”

    Sharma has asked the government for up to $40 million annually for genomics research over the next 10 years, some 16 times the present rate of funding. Part of that would create a national network of centers of excellence to coordinate the expertise needed to understand the human genome, starting with a Center for Human Genetics Research in Bangalore planned for later this year. The Congress endorsed her request, which is currently before the prime minister.

    Biotechnology advocates say that India is a “living laboratory” for studying human genetics. A recent study by the Anthropological Survey of India found “4635 distinct human communities like castes and tribes, including as many as 75 endangered tribal groups, 324 functioning languages, and 25 scripts.” But there is great concern that any information obtained from such a diverse population will be exploited by multinational drug and food companies and not benefit the Indian public. Vandana Shiva, head of the Research Foundation for Science, Technology, and Ecology, wants a 5-year moratorium on commercial transgenic products “to ensure biosafety and protect the rights of small farmers.” The near absence of domestic industry involved in genomics work to date is another major stumbling block to progress, as are antiquated laws on intellectual property.

    The new network would build on several tentative steps India has taken recently to stimulate genomics research. In 1994, for example, it began a $2.5 million initiative to explore the genetic diversity of its people and to study the country's most common genetic diseases. The initiative has created 14 genetic counseling centers across the country for molecular diagnosis and treatment of a variety of genetic disorders, including thalassemia and muscular dystrophy. Sharma also would like to expand the number of counseling centers.

    A small plant genome program has also been started. Last year India put up $250,000 for a Plant Genome Research Center at the Jawaharlal Nehru University in New Delhi, but debate continues on whether the first plant to be targeted should be an edible legume, rice, or a medicinal plant. Last week Rajendra S. Paroda, director-general of the Indian Council of Agricultural Research in New Delhi, told Science that the country “will participate as an equal partner in the rice genome initiative,” an international effort led by Japan. Although details of the program are still sketchy, experts hope that India will take up work on sequencing at least one chromosome.

    Agreeing on the need for genomics research doesn't remove the obstacles facing scientists, however. India still does not recognize product patents in the areas of agriculture, pharmaceuticals, and medicines, and patenting life-forms is prohibited. In addition, most biotech companies are involved in vaccine development and tissue culture, not genomics. “India should get its local industry onboard and should be looking seriously at private sector funding for its genome program,” says Watson, the former head of the genome project at the U.S. National Institutes of Health, adding that his “biggest mistake [there] was not to appreciate the commercial value of the human genome.”

    Although the political climate is not hospitable to a quick change in patent law, India's status as a founding member of the World Trade Organization requires it over the next several years to recognize product patents and to harmonize its policies with the rest of the world. And there are signs that officials may be ready to mount such an effort. Breaking a long-standing taboo on even discussing the subject, Sharma told Science that “India should start permitting the patenting of genes, and rules should be so changed that our scientists can patent the novel genes and products they find.”

    In addition to possible legal reforms, there remains a need to educate the public about genetically modified organisms. The possible introduction of the so-called “terminator gene” in Indian crops has sparked numerous protests, and on 2 December farmers' organizations destroyed seven sites in southern India that were testing a transgenic variety of cotton developed by the Monsanto Co. M. S. Swaminathan, a geneticist and chief of the M. S. Swaminathan Research Foundation here, says “good bioethics, biosafety, and biosurveillance policies and practices are needed to dispel these fears.” Last week, at a national meeting sponsored by the foundation in conjunction with the Congress, more than 100 scientists and policy-makers proposed a high-level and independent National Commission on Genetic Modification of Crop Plants and Farm Animals to advise the government.

    Government officials declined to comment on the value of such an initiative. But Sharma warned that prompt action is needed. “India has a billion mouths to feed, and there is no question of increasing the arable land. The only option is to increase productivity through the judicious use of biotechnology.”

  8. FUTURE FOOD

    Crop Scientists Seek a New Revolution

    1. Charles C. Mann

    The tools that more than doubled world grain harvests since 1960 have lost their edge. Bold efforts to bioengineer crops seem the only hope for a new surge in harvests.

    Every year, the U.S. National Corn Growers Association sponsors a competition among farmers for the highest maize yield. The contestants have always vied furiously for the title in this agricultural Super Bowl, but now some scientists are taking an interest, too. During the 3 decades of the competition, U.S. maize harvests have risen continuously from an average of 5 metric tons per hectare in 1967 to 8 t/ha in 1997, according to the U.S. Department of Agriculture (USDA). Yet the highest, prize-winning yields have stayed roughly constant (except in years of flood or drought), at about 20 t/ha for irrigated fields. “It's a striking pattern,” says Kenneth S. Cassman, an agronomist at the University of Nebraska, Lincoln: “Steady progress upward on the average, but at the top—the best of the best—it doesn't appear that the maize yields have changed in 25 years.”

    Just keeping up. World grain harvests continue to rise, but because of population growth, per capita production has flattened. [SOURCE: FAOSTAT (faostat.fao.org)]

    The apparent ceiling on maize yields—and hints of similar ceilings for rice and wheat—has led Cassman and other researchers to argue that cereals harvests have physical limits, and farmers may be nearing them. Agricultural economist Vernon Ruttan of the University of Minnesota, St. Paul, says that while he was working at the International Rice Research Institute (IRRI) in Los Baños, the Philippines, in the early 1960s, “it was fairly easy for me to tell myself a story about where future yield increases were going to come from.” Today, he says, “I can't tell myself a convincing story about where the growth is going to come from in the next half-century.”

    If the question is whether farmers can raise average yields closer to the maximum, says Thomas R. Sinclair of the USDA Agricultural Research Service at the University of Florida, Gainesville, “I would guess that there is” some room for advancement. But if the question is whether breeders can raise the “physiological potential” of cereal crops, Sinclair says, “I don't think the evidence there is very encouraging. … It's hard to see where improvements in that would come from.”

    The stakes are enormous. In the 1950s and '60s, agricultural scientists at IRRI and the International Maize and Wheat Improvement Center, a Mexico City-based laboratory better known by its Spanish acronym, CIMMYT, developed the package of improved crop varieties and agricultural management techniques collectively known as the Green Revolution. The critical advances of the Green Revolution—and other work by the 16 international agricultural research centers that make up the Consultative Group on International Agricultural Research (CGIAR)—helped world grain harvests more than double since 1960. Despite a huge population increase since then, per capita food production has grown by almost a quarter; the number of people eating less than 2100 calories per day, a standard index of malnutrition, has fallen by three-quarters. Driven by soaring harvests of rice, wheat, and maize, the world's most important crops, the global boom in agricultural production is one of the century's greatest technological achievements.

    Now, though, researchers will have to do it all over again. By 2020, global demand for rice, wheat, and maize will increase 40%—an average annual increase of almost 1.3%—according to a projection released in August by Mark W. Rosegrant, Claudia Ringler, and Roberta V. Gerpacio of the International Food Policy Research Institute (IFPRI), a CGIAR think tank in Washington, D.C. “What everyone wants to know,” Rosegrant says, “is whether that additional demand can be met, and whether it can be met without undue environmental or economic cost.”

    Since the early 1980s, says the United Nations Food and Agriculture Organization (FAO), global cereals harvests have been rising at a rate of about 1.3% per year—just enough to meet the projected increase in demand. But this rate of increase is half what it was in the 1970s, suggesting the possibility of a long-term falloff. Most of the relative decline is due to economic upheavals in formerly communist nations and planned reductions in incentives like farm subsidies in other developed countries, which have caused farmers to take land out of cereals production. But productivity increases—rises in cereal yields per hectare—have been slipping, too, from 2.2% per year in 1967–82 to 1.5% per year in 1982–94.

    To many agronomists, the slackening is a sign that the now-familiar tools of the Green Revolution are facing diminishing returns. The burgeoning harvests the world will need tomorrow will have to come, they say, from radically new, completely untried innovations in genetic engineering. But even if those innovations pan out—which is far from certain—researchers fear that farmers may not have enough water to grow the new crops or may be forced to use so much fertilizer on marginal land that they will poison ecosystems and permanently damage soils. “When you add up everything that has to be done, and the narrowing range of options for how to do it, the challenge is dauntingly large,” says Tony Fischer, crop-sciences program manager at the Australian Centre for International Agricultural Research in Canberra.

    Slower growth. Annual increases in cereal production have slowed since the Green Revolution of the 1960s and 1970s. [SOURCE: WORLD RESOURCES INSTITUTE]

    Not everyone shares this alarm, as vigorous debate at recent meetings in Baltimore* and Irvine, California, showed.** “People have been predicting yield ceilings for millennia, and they've never been right,” says Matthew Reynolds, a plant physiologist at CIMMYT. Indeed, some skeptics argue that the slowdown in productivity growth might actually be a sign of progress, because it shows that many nations are enjoying food surpluses. As for meeting future demand, they say, it is a good bet that some of the many efforts to re-engineer crops will pan out. “If I were an agricultural policy developer in a developing country today, I'd be more worried about too much food in the world than too little, because it would drag the prices down,” says D. Gale Johnson, an agricultural economist at the University of Chicago. With varying degrees of caution, official projections from the World Bank, FAO, and IFPRI agree with Johnson: Agricultural researchers can repeat the Green Revolution.

    But the plant breeders themselves are not sanguine. “Those maximum rice yields have been the same for 30 years,” says Robert S. Loomis, an agronomist at the University of California (UC), Davis. “We're plateauing out in biomass, and there's no easy answer for it.”

    Exploited opportunities

    When scientists speak of yields, they can be referring either to a crop's “actual yield”—the harvests produced in real-world conditions—or its “potential yield”—the theoretical maximum weight of grain a unit of land can produce, given perfect weather, optimal use of fertilizers, and no pests or pathogens. Yields can thus be raised either by lifting actual yields closer to the ceiling, usually by improving crop management or developing strains that are resistant to pests, disease, and stresses, or, more ambitiously, by raising the ceiling itself.

    The Green Revolution did both. Plant breeders at CIMMYT and IRRI dramatically increased the potential yield of wheat and rice by creating shortened, “dwarf” varieties with strong stalks that could hold more grain. This boosted the “harvest index”—the percentage of the plant's mass that is grain—to about 50%, almost double the previous figure (Science, 22 August 1997, p. 1038). Dwarfing didn't work on maize, the third major cereal, because the shorter plants shaded themselves too much. So maize breeders took a different approach, says Don Duvick, an agronomist at Iowa State University in Ames: producing strains that could be planted more densely. “We were able to breed to withstand the stresses of that [crowding] and get the yields up in that way.”

    In addition to these triumphs of plant breeding, the Green Revolution had two other, equally important reasons for its success: chemical fertilizer and irrigation. Between 1961 and 1996, according to FAO statistics, global fertilizer use more than quadrupled, rising from 31 million metric tons to 135 Mt. Meanwhile, the world total of irrigated land almost doubled, from 139 million ha to 263 Mha.

    Those trends are now stalling. In developed countries, for example, heavy fertilization boosted yields but also contaminated water supplies, leading environmentalists to argue that farmers should cut back, not increase, fertilizer use. Moreover, poor nations in places like Africa cannot afford agricultural chemicals, limiting their use where they are most needed. Rosegrant projects that fertilization will continue to rise but that the rate of increase will fall sharply. Future harvest boosts, he believes, will come less from using more fertilizer than from using existing levels more efficiently. “By working hard,” he says, “you can squeeze out increases inch by inch.”

    One hope for squeezing higher yields out of the same input, according to John Sheehy, an agricultural systems modeler at IRRI, is to match fertilizer applications more closely to plants' nitrogen requirements. “Rice accumulates half its nitrogen by the time it acquires a quarter of its above-ground biomass,” he says. To maximize yields, farmers thus need “a huge basal application” of fertilizer in the first 40 days after sowing, followed by gradually diminishing levels thereafter. In preliminary tests at IRRI, such a tracking regimen lifted yields by as much as 20%. But, as Sheehy concedes, such intensive management is easier in laboratories than in farmers' fields. “Translating this work into the real world will be a challenge,” he says.

    Prospects for expanding irrigation are only somewhat better. Irrigation, too, has caused environmental damage, depositing toxic salts on poorly drained agricultural land. And because irrigation is already used in most of the areas where it is practical, future water projects will be increasingly expensive. Worst of all, the International Water Management Institute, a CGIAR laboratory in Sri Lanka, projects that by 2025 as many as 39 countries and regions—including northern China, eastern India, and much of Africa—will be so short of water that they will be forced to reduce irrigation rather than expand it.

    Researchers like Sandra Postel of the Global Water Policy Project in Amherst, Massachusetts, believe that technological innovation can advance water-use efficiency, even in poor countries. Rather than watering crops by flooding whole fields, for example, farmers in parts of northern India are employing cheap, movable plastic pipes dotted with pinholes to “drip-irrigate” their fields. New, low-cost, high-efficiency sprinklers are also under development. But Postel expects the gains to be modest: “To satisfy the nutritional needs of 8 billion people and protect the environment, we'll have to get twice as much agricultural productivity out of each unit of water we're using.” She regards the task as “doable, I guess, but an awful lot of things are going to have to come out just right.”

    Immovable ceilings?

    Given the difficulties of raising yields by improving the use of agricultural inputs, many agronomists believe that a second Green Revolution will have to rest even more heavily than the first on creating crop varieties with higher potential yields. Again, traditional strategies may be losing their edge. “Most plant breeders and those who work in association with them would go along with the idea that there's very little scope in wheat and rice for increasing the harvest index beyond the present value of about 50%,” says Roger Austin, an agricultural consultant in Cambridge, England. Breeding still shorter plants ultimately produces such a low, uneven canopy of leaves, he says, that it is “difficult to get uniform interception of light,” which interferes with photosynthesis; breeding for thinner, lighter stalks makes plants more likely to collapse under the weight of their own grain. As for further crowding of maize, in Sinclair's view, “that opportunity has been pretty well exploited.”

    In recent years, Sinclair says, plant breeders have succeeded mainly in creating varieties that are less susceptible to pests and disease, or that can tolerate hostile environmental conditions, like drought or salty soil. “The problem,” he says, “is that pest resistance isn't increasing potential yield. It's just protecting what you have already.” In his view, “we're getting better at approaching the ceiling. What has been elusive is actually raising the ceiling.”

    Worse, breeders are finding it increasingly hard to keep even the current small harvest increases coming. “Average annual maize yields keep right on going up by 90 kg/ha” in the major maize-producing U.S. states, says Cassman, “but the investment in maize-breeding research has gone up fourfold. The efficiency of translating this investment into yield is therefore down by 75%.” He adds, “When every step forward is harder to take, that's a sign of diminishing returns.” Still more troubling to Cassman, this ever-increasing effort is required to produce a constant linear rise in yields, when the projections by FAO, IFPRI, and the World Bank are for an exponential increase in demand.
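
    Cassman's point is easy to make concrete. The toy comparison below starts from the 8 t/ha U.S. average and the 90 kg/ha annual gain cited above and pits them against demand compounding at IFPRI's projected 1.3% a year; the 30-year horizon, and the framing of demand as a required yield, are illustrative assumptions. A fixed linear gain falls steadily further behind geometric growth.

    # Linear supply gains versus compounding demand: a toy model using figures
    # quoted in this article; the horizon and framing are assumptions.
    supply = 8.0         # t/ha, U.S. average maize yield in 1997 (cited above)
    needed = 8.0         # demand, expressed as the yield required to meet it
    LINEAR_GAIN = 0.09   # t/ha added per year (90 kg/ha)
    GROWTH = 0.013       # projected demand growth, 1.3% per year

    for year in range(1, 31):
        supply += LINEAR_GAIN  # arithmetic growth in yield
        needed *= 1 + GROWTH   # geometric growth in demand
        if year % 10 == 0:
            print(f"year {year}: supply {supply:.2f} t/ha, needed {needed:.2f} t/ha, "
                  f"shortfall {needed - supply:.2f}")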

    Some agronomists believe traditional plant breeding still has plenty of life left. Chinese rice researchers, for example, are exploiting the surge of productivity and vigor often seen in first-generation crosses to develop a “superhybrid” rice (see sidebar, p. 313). But other post-Green Revolution efforts to breed more productive varieties have run into difficulties. After 10 years, an ambitious IRRI program to design a “new plant type” of rice that would combine multiple innovations is a study in frustration.

    Rice grows as a clump of as many as 30 stemlike “tillers” that bear the flowers and grain on “panicles.” Because a third or more of the tillers may not end up producing grain, the designers of the new plant type proposed creating rice cultivars that would have fewer tillers with bigger, more productive panicles. The smaller number of tillers would also reduce the diameter of the plant, allowing farmers to plant more of them in the fields. Moreover, IRRI breeders wanted the new plant type varieties to have thicker, more upward-angling leaves, which would catch more sunlight than current varieties, boosting the rate of photosynthesis. By combining all these changes, IRRI hoped to lift potential yields in the tropics to 12 t/ha, compared to today's potential yield, which has stagnated at about 10 t/ha.

    Because indica, the most common subspecies of cultivated rice, did not seem to have genes for reduced tiller number, IRRI began the new plant type in 1990 with germ plasm from another subspecies, tropical japonica. This indeed increased the size of the grain-bearing panicles, but the bigger panicles did not fill up with rice, because the plants could not shunt enough energy into them. Making the plants more compact compounded the problem, because the crowded “spikelets” atop the panicles couldn't develop properly. “When you say you can make a watch run better by fixing up these six pieces, it affects the other pieces, too,” Iowa State's Duvick says. “You fix A, you hurt B.”

    At the Baltimore meeting, IRRI crop physiologist Shaobing Peng, who helps direct the new plant type project, revealed that last summer some new plant type rice finally produced better yields than conventional varieties—a sign that IRRI may be on the right track. But the new plant type was still extremely vulnerable to stem borers, a common pest. “The tillers are strong but not tough,” IRRI's Sheehy says. “Borers chew right through them, which poses a genuine research problem.” Although he believes the new plant type will eventually be “vindicated,” its progress is a sobering reminder of the difficulties of raising yield.

    “We've been working on rice yields for so many years without making the kind of progress we'd like to make,” Peng says. “We may be able to create the new plant type without biotech, but that is where new opportunities will have to come from in the future.”

    Big biotech

    Peng and the other agronomists who regard genetic engineering as the key to surpassing the yield barrier have more in mind than the products of today's biotech industry, which now cover almost 20 million ha in North America alone. The vast majority of these crops are the result of single-gene transfers, in which one or more genes coding for desired characteristics—such as herbicide resistance or an antibacterial compound—are smuggled into the organism from an outside source. Such efforts, although important to raising actual yields, are unlikely to raise potential yields. To break yield barriers, the plants will have to be thoroughly re-engineered.

    Among the more widely discussed biotech possibilities is altering the stomata, the porelike openings that stipple a plant's epidermis and control the intake and release of oxygen, carbon dioxide, and water. In most plants, the stomata are edged by two cells that resemble a pair of parentheses. When the plant takes in water, the stomatal cells swell open, allowing water to escape and permitting gas exchange; when the surroundings become drier or hotter, the stomata close. Because the stomata stay open longer than needed, most of the water that wheat and rice take in ends up in the atmosphere rather than being used in photosynthesis. “If you're irrigating, you might put up with the water loss in the name of getting the greatest biomass possible,” says UC Davis's Loomis. “But if you're dry-land farming in Kansas, it might not be a good deal—you're using up water too fast.”

    To allow dry-land crops to use water more efficiently, stomata might be bioengineered to close more readily; in water-rich areas, they might be modified to stay open even longer. “That would give you better ventilation in the leaf, decreasing the canopy temperature and giving you better transport of CO2, both of which could boost the rate of photosynthesis,” says Fischer of the Australian Centre for International Agricultural Research.

    Researchers have their eyes on two molecular targets that play a role in regulating the stomata: the plant hormone abscisic acid (ABA), which triggers closing, and an enzymatic process called farnesylation, which seems to impede ABA (Science, 9 October 1998, pp. 252, 287). By altering farnesylation, researchers may, in theory, be able to adjust plants' sensitivity to ABA and thus the tendency of the stomata to close. That task is daunting enough, but other researchers would like to go even further and tinker with the mechanisms of photosynthesis itself (see next story).

    Many economists are confident that such efforts will eventually pay off and drive up crop yields again. But agronomists tend to view biotech as a long shot. Controlling such basic multigene traits, Fischer warns, is a “complex, unpredictable” task. Photosynthesis, notes Sinclair, is a process that evolution hasn't changed fundamentally “in a couple billion years.” And even if the work is a technical success, the payoff may be minor, as traditional plant breeding has already pushed up crops' harvest index and ability to capture sunlight about as high as they can go. As Sinclair put it at the Irvine meeting, “Some of the hope for biotechnology seems analogous to the dreams of mechanical perpetual motion devices over a century ago: No matter how finely tuned the machine, reality does not allow output to exceed input.”

    Still, altering photosynthesis is “the great white hope” of the future of agriculture, as agricultural consultant Austin puts it. “All the relatively obvious steps have been taken. Photosynthesis is what's left.”

    Money woes

    Re-engineering photosynthesis—or fundamentally improving crops in some other way—will require years of costly basic research, in Cassman's view. But a crucial source of support for agricultural science is eroding. For more than a century, according to Phil Pardey, an economist at IFPRI, government funding has supported long-term agricultural research. Although the biotech boom has spearheaded a recent massive increase in private-sector spending on agricultural R&D, notes Duvick, a former research director of agribusiness giant Pioneer Seeds, “even the big companies don't do a lot of long-term research.”

    But despite opposition from both the academic and corporate communities, IRRI's budget in constant 1994 dollars has dropped from a high of $46.5 million in 1990 to $32.7 million in 1997, according to CGIAR figures. Similarly, CIMMYT's funding fell from $40.2 million in 1988 to $28.4 million in 1997. “We're taking away funding with the assumption that we've made it,” says Dennis A. Ahlburg, a demographer at the London School of Hygiene and Tropical Medicine's Centre for Population Studies. “But if we don't continue to support [agricultural research], we'll slide backward.”

    “The scientific challenge [of feeding the world] has been grossly understated,” Cassman says. “But even if I'm wrong, and we somehow can do it without special effort, I think you'd like to have a margin of security. … We are talking about the prospects for producing enough food to feed people in the next century, and a margin of security seems justified.”

    • * “Post-Green Revolution Trends in Crop Yield Potential: Increasing, Stagnant, or Greater Resistance to Stress?” 90th annual meeting, American Society of Agronomy/Crop Science Society of America/Soil Science Society of America, 19 October 1998. The papers have been submitted for a forthcoming special issue of Crop Science.

    • ** “Colloquium on Plants and Population: Is There Time?” 5–6 December 1998, Beckman Center of the National Academies of Sciences and Engineering, Irvine, California. Papers at www.lsc.psu.edu/NAS/The%20Program.html

  9. FUTURE FOOD

    Crossing Rice Strains to Keep Asia's Rice Bowls Brimming

    1. Dennis Normile

    BEIJING, CHINA—While plant breeders in most of the world fear that grain yields are plateauing (see main text), Yuan Longping thinks a big jump in rice productivity is just around the corner. Yuan, the director of the National Hybrid Rice Research and Development Center in Changsha, Hunan Province, says he is on the verge of creating a superhigh-yield hybrid that promises jumps of 15% to 20% in potential rice yields over existing hybrids.

    Yuan cautions that the results are based on tiny test plots and must be confirmed in larger trials over the next 2 years. Even if the new strain does live up to expectations, say other plant breeders, consumers may turn up their noses at the quality of the rice. But scientists who have heard his preliminary results think Yuan is on to something big. “When I hear Yuan Longping's enthusiasm about this and when I think about his track record, I take note of what he's saying,” says Neil Rutger, director of the Dale Bumpers National Rice Research Center in Stuttgart, Arkansas. “If [the yields are] what he claims, it is a significant achievement,” adds Sant Virmani, deputy head of plant breeding at the International Rice Research Institute in Los Baños, the Philippines.

    Yuan's efforts make use of the fact that the first generation of hybrid plants is typically more vigorous and productive than either parent—a poorly understood phenomenon called heterosis. To take advantage of heterosis, virtually all the maize in developed countries is grown from first-generation (F1) hybrid seed. But corn is much easier to hybridize than rice. Because rice is self-pollinating, getting hybrid seed requires developing lines of plants in which the male organs are sterile and can only be pollinated by the other parental line. A third line of plants is required to provide pollen to reproduce the male-sterile line for the next growing season. The technique is not only laborious but also produces small quantities of seed. As a result, hybrid rice in most countries has taken a back seat to inbred rice, in which part of one year's crop can be kept as seed for the next.

    Not in China, however. In the 1970s, Yuan made production of F1 hybrid rice seed viable with techniques that tapped his country's cheap labor. He sprayed the male-sterile plants with a growth hormone so that the panicles, or grain clusters, would emerge from the rice leaf sheath to catch pollen that was shaken loose by ropes dragged over the male-line plants. Hybrid rice now accounts for half of China's rice acreage and yields an average of 6.8 tons per hectare compared with 5.2 tons for conventional rice. By Rutger's calculation, the increased output feeds an additional 100 million Chinese every year.
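    Rutger's figure is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses the yield numbers from the article, but the planted area and per-capita consumption are illustrative assumptions, not figures from Rutger:

        # Rough check of the "100 million people" estimate (a sketch; the area
        # and consumption figures are assumptions, not from the article).
        hybrid_area_ha = 15.5e6          # assumed: roughly half of China's ~31 million ha of rice land
        yield_gain_t_per_ha = 6.8 - 5.2  # article: hybrid vs. conventional yield (tons/ha)
        extra_rice_t = hybrid_area_ha * yield_gain_t_per_ha

        per_capita_t_per_yr = 0.25       # assumed annual rice consumption per person (tons)
        people_fed = extra_rice_t / per_capita_t_per_yr

        print(f"Extra rice: {extra_rice_t / 1e6:.1f} million tons per year")
        print(f"People fed: {people_fed / 1e6:.0f} million")   # ~99 million, close to Rutger's figure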

    China's success has inspired hybrid rice production in India, Vietnam, and the Philippines. Several more countries are developing hybrid rice varieties suited to their own growing conditions. But even higher yields will be needed to meet Asia's projected food demand.

    To Yuan, the answer was to cross more diverse parent strains in order to achieve even greater heterosis and higher yields. Unfortunately, the more diverse the parents, the greater the chance that the offspring will be sterile, growing vigorously but producing little rice. But in the mid-1980s, Hiroshi Ikehashi, a plant breeder at Kyoto University in Japan, identified a gene in certain species of japonica rice native to Indonesia that promotes fertility in hybrids. This wide compatibility gene, which has proven relatively easy to transfer through crossbreeding, was the breakthrough Yuan needed.

    Rather than count on heterosis alone to raise yields, however, Yuan also decided to incorporate morphological improvements. Since 1996, his group has selectively bred potential parents for long, narrow, and very erect top leaves. This configuration, Yuan believes, captures sunlight more effectively. He's also bred plants to grow large panicles that hang close to the ground, reducing the risk of lodging, or falling over. “Both hybridization and morphological improvements are important,” Yuan says. “I don't think you can rely on just one or the other.”

    In 1997, one of the crosses yielded an average of over 13 tons per hectare—well above the 10.5 tons for existing hybrids grown under ideal conditions. Although that test took place on just a fraction of a hectare, the group achieved similar results last summer in trials at four separate locations totaling more than 2 hectares. If he can get 12 tons per hectare for two consecutive years, Yuan says, “I will declare that the goal of the superhybrid rice-breeding program is achieved.” He is so confident of success that he invited participants at an international conference in Cairo last fall to visit China and witness this year's harvest.

    But yields aren't everything. “The value of superhybrids will very much depend on the grain quality,” warns Miroslaw Maluszynski, a plant geneticist at the International Atomic Energy Agency in Vienna, Austria. Yuan agrees that hybrid rice became popular in China because people “needed calories more than quality.” And quality is still critical in more affluent nations such as Japan and South Korea. “Where quality is important, hybrid rice won't sell,” says Shigemi Akita, a crop physiologist at the University of Tokyo.

    Still, Yuan and others are confident that hybrids will play an increasingly important role in filling Asia's rice bowls. Studies of heterosis “are still at a juvenile stage,” he says. “The very high-yield potential of hybrid rice has not yet been fully tapped.”

  10. FUTURE FOOD: BIOENGINEERING

    Genetic Engineers Aim to Soup Up Crop Photosynthesis

    1. Charles C. Mann*
    1. With reporting by Dennis Normile in Tokyo.

    To improve crops' ability to turn atmospheric carbon into food, researchers hope to alter the principal enzyme or supercharge it with CO2

    Few nonbiologists may have heard of ribulose-1,5-bisphosphate carboxylase-oxygenase, the enzyme known as RuBisCO, but its importance is hard to overstate. The principal catalyst for photosynthesis, it is the basic means by which living creatures acquire the carbon necessary for life. By interacting with atmospheric carbon dioxide, RuBisCO—the world's most abundant protein—initiates the chain of biochemical reactions that creates the carbohydrates, proteins, and fats that sustain plants and other living things, ourselves included. But the enzyme also has another distinction, according to T. John Andrews, a plant physiologist at The Australian National University in Canberra: “RuBisCO is nearly the world's worst, most incompetent enzyme—it's almost certainly the most inefficient enzyme in primary metabolism that there is.”

    RuBisCO's ineffectiveness has been a spur to scientists since it became fully apparent in the 1970s. Indeed, the quest for a better RuBisCO is “a Holy Grail in plant biology,” says George Lorimer, a biochemist at the University of Maryland, College Park, who worked with the Swedish team that mapped the enzyme's structure in 1984. “Everyone always goes in with the hope of changing the face of agriculture.” Despite more than 20 years of effort, the hopes have not yet paid off. But recent advances in molecular biology—and the unexpected discovery of more efficient RuBisCO in red algae—have given new impetus to the long struggle to modify the enzyme. In what may be the most ambitious genetic-engineering project ever tried, laboratories across the world are trying to improve the RuBisCO in food crops by either replacing the existing enzyme with the red algae form or bolting on what could be thought of as molecular superchargers.

    No one expects quick results—“I'm not for a second trying to minimize the task,” says Andrews. The current state of the art in genetic engineering permits altering or splicing in single genes to improve a plant's resistance to pests, say, or to allow a crop to survive applications of weed-killing herbicide. To alter RuBisCO, by contrast, scientists must work with a 16-part molecule that is encoded by many genes in both the cell nucleus and the chloroplasts, where photosynthesis takes place. The enzyme also depends on a supporting cast of other enzymes, some of which will probably need to be revamped if RuBisCO is changed. But with many other avenues toward increasing crop yields seemingly blocked (see p. 310), RuBisCO has become an increasingly tempting target.

    RuBisCO is just one actor in photosynthesis, which is a complex symphony of photochemical and enzymatic reactions. During the first, “light” stage of photosynthesis, chlorophyll—a green pigment in the chloroplasts—absorbs enough energy from sunlight to split off electrons from water molecules, simultaneously releasing oxygen gas and driving the production of adenosine triphosphate (ATP), which is used in the second, “dark” stage. This step begins when RuBisCO combines with carbon dioxide to produce 3-phosphoglycerate, or PGA (see diagram). Powered by energy from the ATP, a series of reactions transforms PGA into a host of starches, sugars, and other organic compounds.
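    In outline, the two stages can be written as simplified reaction schemes; the notation below is standard textbook biochemistry rather than anything spelled out in the article (RuBP is ribulose-1,5-bisphosphate, the five-carbon molecule RuBisCO acts on):

        \[
          2\,\mathrm{H_2O} \;\xrightarrow{\ \text{light, chlorophyll}\ }\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
          \qquad \text{(light stage, driving ATP and NADPH synthesis)}
        \]
        \[
          \mathrm{RuBP} + \mathrm{CO_2} \;\xrightarrow{\ \text{RuBisCO}\ }\; 2\,\mathrm{PGA}
          \qquad \text{(dark stage, carbon fixation)}
        \]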

    Photosynthesis as a whole is not particularly efficient; a crop plant that stores as much as 1% of the total received solar energy is exceptional. As a result, the process offers many targets for bioengineers. But RuBisCO, far and away the biggest drag on the process, is the most appealing of them. First, it is torpid in the extreme—“perhaps the slowest known enzyme,” William Ogren, a now-retired RuBisCO researcher from the University of Illinois, Urbana-Champaign, says with only slight exaggeration. Enzymatic rates are often on the order of 25,000 reactions per second; RuBisCO turnover in higher plants can be as little as two or three reactions per second. “Not one of evolution's finest efforts,” says Ogren.

    Second, RuBisCO triggers an additional reaction that interferes with the first. In 1971 Ogren and two other researchers discovered to their amazement that besides capturing and “fixing” carbon dioxide, RuBisCO catalyzes a second, opposing reaction. In what is called photorespiration, the enzyme combines with oxygen, rather than carbon dioxide, to create a compound that is subsequently converted partly into carbon dioxide. In other words, RuBisCO catalyzes one reaction that incorporates carbon into plants and another that ultimately strips them of carbon.

    Typically, the RuBisCO in higher plants like rice and wheat is 100 times more likely to pick up CO2 than O2. But because the concentration of atmospheric O2 is many times greater than that of CO2, the greater affinity for CO2 is largely canceled. As a result, 20% to 50% of the carbon fixed by photosynthesis is lost to photorespiration. “The oxygenation reaction is—as far as we can tell, and a lot of research over decades has gone into it—just a complete waste,” says Andrews. “It doesn't do anything for the plant.”
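    The arithmetic behind that cancellation is worth spelling out. In the sketch below, the 100-fold specificity comes from the article, while the dissolved-gas concentrations are assumed textbook values for an air-equilibrated leaf at about 25°C:

        # Why a 100-fold preference for CO2 is "largely canceled" by O2's abundance
        # (a sketch; the concentrations are assumptions, not from the article).
        specificity = 100.0   # article: RuBisCO picks up CO2 100x more readily than O2
        co2_uM = 11.0         # assumed dissolved CO2 in the leaf, micromolar
        o2_uM = 250.0         # assumed dissolved O2 under the same conditions, micromolar

        carboxylations_per_oxygenation = specificity * (co2_uM / o2_uM)
        print(f"Carboxylations per oxygenation: {carboxylations_per_oxygenation:.1f}")
        # ~4.4 -- roughly one wasteful oxygenation for every four or five useful
        # fixations; at warmer temperatures, where CO2 solubility and specificity
        # both fall, losses climb toward the 20% to 50% range the article cites.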

    This striking inefficiency was no handicap when photosynthesis first evolved 3 billion years ago, because the atmosphere was almost devoid of oxygen. After photosynthesis filled the air with oxygen and RuBisCO's weakness was revealed, it may have been too late for evolution to fix the problem, says Murray Badger, a RuBisCO specialist at The Australian National University. “It's a somewhat general correlation that the more specific and discriminatory a reaction becomes, the slower it gets,” he says. As a result, mutations that made RuBisCO target carbon dioxide better might also have made it even slower.

    [Figure] Virtuous cycle. The energy-carrying compounds ATP and NADPH, produced by photosynthesis, power a chain of reactions that begin when RuBisCO combines with carbon dioxide and ultimately produce starches and sugars. (Source: Helena Curtis, Biology, Worth Publishers, 1983)

    If genetic engineers could find a way around RuBisCO's slowness and inefficiency, they might reap a double benefit. A faster, more efficient enzyme could help crops grow and increase their biomass, letting them produce more grain at a faster rate. In addition, explains Martin Parry of the Institute of Arable Crops Research-Rothamsted in Hertfordshire, Britain, RuBisCO's lethargy means that “plants need to invest incredibly heavily in it” to fix sufficient carbon. “A very large proportion of the plant's nitrogen requirements come from the need to produce the enzyme,” which makes up as much as half the soluble protein in plant leaves. More efficient RuBisCO could thus lower crops' need for nitrogen, now mainly supplied by fertilizer in many countries.

    A better RuBisCO. The discovery of photorespiration launched the effort to remodel RuBisCO. Researchers began by comparing the form of the enzyme found in higher plants with that in cyanobacteria—blue-green algae—which is even less efficient than the higher-plant version. To find out why, Lorimer's group, then based at DuPont, collaborated with Carl Branden's x-ray crystallography group in Sweden to determine the molecular structures of both forms, hoping to find telling differences. “We spent years creating high-resolution structures of spinach [a model plant in RuBisCO research] and cyanobacteria,” says Lorimer. But despite the finely detailed results, “the sobering reality was that you can lay down [structural maps of] these two enzymes on top of each other and you're very hard pressed to see the difference.” Even if the differences could be identified, Lorimer believes, they would be so numerous and subtle “that you could not rationally reason your way to what it was that you would need to improve the enzyme.”

    The failure of the structures to provide a path for modifying RuBisCO dismayed many researchers; Lorimer's group disbanded. Hopes reawakened in 1992, when F. Robert Tabita and B. R. Read of Ohio State University in Columbus discovered that some diatoms and red algae have more-specific RuBisCO than that in higher plants. In 1997, a team led by Akiho Yokota, a plant molecular physiologist at the Research Institute for Innovative Technology for the Earth, in the Keihanna Science City near Osaka, Japan, found red algae with RuBisCO that is about three times more efficient. “We've looked at a lot of the red algae,” Tabita says, “and the trend is always the same, 2 1/2 to threefold higher than normal plants.” No one yet knows why.

    To try to exploit this advantage, Andrews's group is one of several that are attempting to insert RuBisCO genes from red-algal chloroplasts into the chloroplasts of higher plants, using techniques for manipulating chloroplastic DNA developed by Rutgers University biochemist Pal Maliga. “If it can be done, it would be really amazing,” says Yokota, who also works at the Nara Institute of Science and Technology, in Kansai Science City, near Nara, Japan. Other groups are working on related approaches at Rothamsted, Ohio State, and the University of Nebraska, Lincoln.

    “This is a little bit like transferring a V-8 engine from a big automobile into a small car,” says Andrews. “It may not work.” Even if the enzyme functions in its new, transgenic home, he cautions, “it's not enough simply to get the RuBisCO in there; it has to be assembled and produced in the right form, and also be connected to the regulation system that the chloroplast keeps control of RuBisCO with.” Andrews hopes to see results in “about 10 years.”

    Supercharging photosynthesis. While most researchers trying to modify the genetic basis of photosynthesis are focusing on RuBisCO, a few are trying another, perhaps even more ambitious, strategy. Just as small engines can go faster if they are equipped with a supercharger, which force-feeds them with fuel, some plants have their own photosynthetic supercharger, known technically as the C4 cycle. In C4 plants, the bundle-sheath cells where photosynthesis takes place are surrounded by specialized “mesophyll” cells, which temporarily fix carbon dioxide and jam it into the bundle-sheath cells at such high concentrations that the oxygen reaction is effectively blocked. The C4 cycle requires so much energy that C4 plants cannot grow in dim light, but in the right, well-illuminated conditions, C4 crops like sugarcane photosynthesize more efficiently than any others. About 5% of all terrestrial higher-plant species use the C4 cycle; maize is economically the most important.

    A joint team at Japan's National Institute of Agrobiological Resources and Nagoya University led by Nagoya microbiologist Makoto Matsuoka is now attempting to reproduce the C4 cycle in rice. For the transformation to succeed, a host of altered enzymes would have to work together properly, and the plant's structure may have to be changed to create the equivalent of mesophyll cells. As a result, the project may well be the most fundamental genetic alteration that humankind has ever tried in any organism. “Don't hold your breath,” Lorimer says.

    Indeed, Matsuoka cautions, “I don't think we can really make a true C4 [rice] plant.” Rather than transferring the whole genetic structure for the C4 cycle from, say, maize into rice, his team is trying to identify nonfunctioning equivalents of C4-type genes in rice and selectively replace them with their active counterparts from maize.

    In a paper in press at Nature Biotechnology, Matsuoka's group reports taking a first step by replacing three silent rice genes with their more lively equivalents in maize, including the important enzyme phosphoenolpyruvate carboxylase (PEPC), which catalyzes the beginning of the C4 cycle. “We succeeded in getting [PEPC] highly expressed in a rice plant,” Matsuoka says. “This is a world first.” After transferring each gene to a different rice plant, the group is now crossing the results to obtain rice that produces all three enzymes.

    That may not be enough to replicate C4-like photosynthesis in rice, says Matsuoka. Rice has mesophyll-like cells that are not photosynthetically active, and these may have to be activated. And some of the changes may actually be deleterious—transgenic rice carrying only NADP-malic enzyme, one of the C4 enzymes, tends to die quickly, for example. Still, Matsuoka says, preliminary evidence suggests that active PEPC in rice cuts the destructive oxygen reaction by about a third.

    Even as the work to alter photosynthesis begins to gain momentum, some critics question whether it will benefit agriculture. Since at least 1970, research has shown little correlation between crops' photosynthesis rates and their yields, suggesting that improvements in RuBisCO won't automatically translate into better harvests. But according to Steven P. Long, a plant physiologist at the University of Essex in the U.K., the correlation may simply be hidden by the propensity of higher-yielding cultivars to have bigger leaves, which increases the amount of self-shading and thus lowers the mean photosynthetic rate. When he and his colleagues temporarily boosted photosynthesis rates in wheat by flooding open fields with enough CO2 to increase local atmospheric levels by 50%, grain yield went up 10% to 12% in two consecutive growing seasons.

    Long is more skeptical about the value of importing the C4 cycle into crops like wheat and rice. Because the C4 cycle imposes a high energy cost on the plant's metabolism, it only pays off at higher temperatures—that's why there is no winter maize crop. “You can model it fairly easily,” Long says. “Below 28°C, the [standard photosynthesis] is more effective, and above 28°C the C4 is more efficient.” The payoff threshold will rise even higher as the atmosphere's CO2 concentration increases because of human activity. And if scientists like Andrews succeed in increasing the specificity of RuBisCO, the threshold for adding the C4 cycle will go still higher—perhaps to 40°C, he says. “There'd be no point in going to C4 then.”

    Nonetheless, Long favors working on both approaches. “These are such major steps that we don't even know how many unknowns there are in doing this. Pursuing all options is well worthwhile.”

    “We've now reached the limits of what we can do [with conventional breeding],” Rothamsted's Parry says. “So therefore we have to solve the next problem, which is putting in a bigger engine.” It's a challenge, he observes, “with considerable practical interest.”

  11. CLIMATE CHANGE

    Landscape Changes Make Regional Climate Run Hot and Cold

    1. Jennifer Couzin*
    1. Jennifer Couzin is a former intern at Science.

    Researchers are only now measuring and trying to understand the impacts of agriculture, deforestation, and development on regional climate

    Step across the U.S.-Mexican border anywhere in Arizona, and you will see why some climate researchers say that so far, global warming has been overrated compared to other human impacts on climate. Because of overgrazing on the Mexican side, there's little vegetation to release water vapor—a process called transpiration, which cools the air. Temperatures are as much as 4 degrees Celsius warmer on some afternoons than in the United States, just a few dozen meters away. The Mexican warming, which researchers at Arizona State University in Tempe are tracking, is only one of many regional climate changes that are starting to capture researchers' attention.

    Climate change is best known on the largest and smallest scales: the global warming expected from the buildup of greenhouse gases, and the heat-island effect created by city buildings and pavement. But caught between the two are the climatic effects of deforestation, grazing, agriculture, and development, which have profoundly altered vast swaths of land on almost every continent. Climate researchers have long suspected that land use can change climate, but “there's very much a lack of studies that have been done at a regional level,” says Randall Cerveny of Arizona State. Now, he and others are homing in on this middle ground, measuring and trying to understand the cooling effect of irrigated farming in Colorado, the warming and drying seen in deforested areas of the Amazon and elsewhere, and variations in storm intensity in the U.S. Northeast that match the weekly rise and fall of air pollution.

    Computer climate models aren't refined enough for researchers to trace all of the causal links between human activity and regional climate. But recent results point to sizable effects. “We are having a bigger impact on the environment through our local and regional land practices than through the standard global greenhouse response,” says Gordon Bonan, an ecologist who works with land-surface models at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado.

    One of the few regions that has been carefully studied is the plains of Colorado, where millions of acres of irrigated land have replaced dry prairie on the state's eastern front. A team of atmospheric scientists, ecologists, and hydrologists at Colorado State University in Fort Collins has found that over the last decade, in step with an expansion of irrigated lawns and an increase in water-hungry crops like corn, the mean July temperature on the eastern slope of the Rockies has dropped by as much as 2 degrees Celsius.

    To test whether irrigation really was the instigator, the Colorado State team used a computer model to compare heat and moisture fluxes in the existing land-cover pattern with the fluxes expected in two other scenarios: the pre-European landscape, when the prairie was unbroken, and a future landscape with even more irrigated farming. With more irrigation, they found, summer temperatures cooled measurably, allowing conifers favoring wet, cool conditions to establish themselves lower on the mountains. Driving the cooling, says Roger Pielke Sr., a professor of atmospheric sciences at Colorado State and a member of the study team, is transpiration from plants, which cools the air and produces clouds over the plains. Winds then sweep the cool, damp air upslope, cooling the nearby mountains and increasing precipitation there.

    The reverse is true in another intensely studied area, the Amazon rainforest, where logging and burning have replaced large tracts of forest with grassland. Because grasslands cannot transpire as much water as lush vegetation or crops do, the tropical forest has become spotted with hot, dry patches. “You basically can create desertlike conditions in the middle of high-rainfall regions,” says Michael Glantz, a senior scientist at NCAR. He and others estimate that temperatures are about 1 degree Celsius higher and precipitation up to 30% lower in large deforested patches, which Glantz says resemble a “lunar landscape.” Similar effects have been seen in deforested regions of sub-Saharan Africa.

    But determining just how much of the observed warming and drying is due to local human activity is often beyond current climate models. “Our ability to quantify the human land use effect on climate is still rather primitive,” says Robert Dickinson of the University of Arizona, Tucson.

    The problem, says Michael Oppenheimer, chief scientist at the Environmental Defense Fund in New York, is that the processes determining regional climate can take place at too fine a scale to be captured by most climate models, which often subdivide the landscape into regions 30 or more kilometers across and use a single number for the surface features and weather within each one. “What happens in a convective storm, cloud formation, and precipitation [are] treated at a relatively coarse scale in most models,” says Oppenheimer. Pielke's modeling system at Colorado State is considered one of the best, able to simulate how soil and vegetation interact with the atmosphere in areas as small as 6 kilometers across.

    The coarse picture of climate processes offered by most models makes it difficult to tease apart regionally induced climate effects from global ones. A global increase in carbon dioxide, for example, can dry and heat the atmosphere just as local deforestation does. Regional and global effects can also counteract each other, says Bonan. In the United States, he points out, increases in irrigated land “have mitigated some of the warming” produced by the greenhouse effect.

    Some regional effects, however, are easier to disentangle from global ones. That was the case for the 7-day cycle of precipitation and storm intensity that Cerveny and his colleague Robert Balling have seen in data from the Eastern Seaboard of the United States. In 16 years of records, they found that 20% more rain fell over the Atlantic on Saturdays and Sundays than on weekdays; over a 50-year period they also logged weaker coastal cyclones on weekends—patterns that could hardly result from any gradual global change. Instead, Cerveny and Balling argued in Nature last summer, they reflect a buildup of industrial pollutants throughout the week. The haze, especially heavy at the end of the week, provides condensation nuclei for moisture, fostering the growth of heavy clouds, which shed rain on weekends. Although the weaker cyclones are more of a mystery, the authors believe that they're also tied to the rainfall increase.
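    The detection method is conceptually simple: bin each day's rainfall by day of week and compare weekend and weekday means across many years. Here is a minimal sketch of that idea on synthetic data; it is not Cerveny and Balling's dataset or their actual procedure:

        # Day-of-week stacking on hypothetical daily rainfall (a sketch only).
        from datetime import date, timedelta
        import random

        random.seed(0)
        start = date(1979, 1, 1)
        records = []
        for i in range(16 * 365):                 # 16 years of daily records
            d = start + timedelta(days=i)
            base = random.expovariate(1 / 3.0)    # skewed, rain-like amounts (mm)
            boost = 1.2 if d.weekday() >= 5 else 1.0   # plant a ~20% weekend excess
            records.append((d, base * boost))

        # Group rainfall by day of week (Mon=0 ... Sun=6).
        by_day = {k: [] for k in range(7)}
        for d, mm in records:
            by_day[d.weekday()].append(mm)

        weekday_mean = sum(sum(by_day[k]) for k in range(5)) / sum(len(by_day[k]) for k in range(5))
        weekend_mean = sum(sum(by_day[k]) for k in (5, 6)) / sum(len(by_day[k]) for k in (5, 6))
        print(f"Weekend/weekday rainfall ratio: {weekend_mean / weekday_mean:.2f}")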

    Another unmistakably regional effect is seen halfway around the world, where decades of overirrigation have drained the Aral Sea, which straddles the former Soviet republics of Kazakhstan and Uzbekistan. Without the water's moderating effects, summers are hotter and drier, while winter temperatures have dropped.

    The handful of scientists working full-throttle on regional climate change think these kinds of findings could ultimately make climate change seem a more immediate issue, and perhaps a more tractable one. “We framed the climate change problem … as a global problem with a global solution,” says Roger Pielke Jr., a political scientist at NCAR. It's possible, he and others say, that focusing on how climate is changing by region will goad more individuals into taking action against, for example, deforestation, which could also have a ripple effect on global climate problems. “The global climate is the sum of these regions,” says Bonan. “[Regional] change is occurring at a spatial scale that we can respond to. … It's occurring at a time scale that we can respond to.”

  12. AMERICAN ASTRONOMICAL SOCIETY MEETING

    Soggy Cradles of Stars

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    AUSTIN, TEXAS— Thales, the ancient Greek astronomer, is said to have fallen down a well while gazing at the stars and not paying attention to his surroundings. Attendees at the American Astronomical Society meeting, held here from 4 to 9 January, could hardly be accused of the same fault, as they issued dozens of press releases and gave a string of press conferences. Although some results were oversold, some could be for the ages.

    The birthplaces of stars and planets are surprisingly damp, observations by the recently launched Submillimeter Wave Astronomy Satellite (SWAS) suggest. “We're seeing water everywhere we look,” says SWAS principal investigator Gary Melnick of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts.

    SWAS, launched by NASA on 5 December, is the first satellite ever to observe the universe at submillimeter wavelengths. This long-wavelength radiation, between the infrared and radio, is emitted by low-temperature molecules, including water, in the cool clouds of gas and dust that collapse to form new stars and planets. At the meeting, the SWAS team reported that their first 2 weeks of observations had revealed about one water molecule for every million hydrogen molecules in these cool clouds—about 10 times as much as expected from older observations made from an airplane.

    “Their value for cool water is surprisingly high,” says Ewine van Dishoeck of Leiden University in the Netherlands, “although I'm not yet convinced that they've looked at really cold clouds.” If it holds up, the discovery of abundant water in the star-forming clouds could answer long-standing questions about the starting ingredients of planetary systems and about how the clouds lose enough energy to collapse into stars and planets.

    Infrared satellites like the European Space Agency's Infrared Space Observatory (ISO) had already revealed large amounts of water vapor in the warmest parts (around 100 K) of the star-forming clouds. But submillimeter emissions from water in the cooler parts of the cloud are invisible to ISO. As a result, the chemistry of the cool parts of the clouds—direct precursors to new stars—has been a mystery. No one was sure whether the oxygen and hydrogen there combined to form water, and if they did, how much oxygen would be left to form molecular oxygen or carbon monoxide. Now, says Melnick's colleague David Neufeld of The Johns Hopkins University in Baltimore, the SWAS observations reveal that “almost all the oxygen [in the cool clouds] is in the form of water.”

    The finding supports a 30-year-old theory that water molecules are the fundamental cooling agents in molecular clouds, which must radiate away excess heat as they collapse into stars and planets. Water molecules have many energy levels and are easily excited by collisions with hydrogen molecules; when they fall back to their ground state, the energy is radiated away. “This is the first time that we can back up this theory with observations,” says Melnick.

  13. AMERICAN ASTRONOMICAL SOCIETY MEETING

    Superflares From Giant Planets

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    AUSTIN, TEXAS—A giant planet around another star may have announced its presence 100 years ago, although 19th century astronomers did not realize what they were seeing, say two Yale University astronomers. Bradley Schaefer and Eric Rubenstein propose that an interplay between the magnetic fields of the star and a nearby giant planet could have caused a strange brightening of the star. The same mechanism, they believe, may explain similar flare-ups seen in other stars over the years.

    In 1899, astronomers noticed a 10-fold brightening of S Fornacis, which lasted for a few hours. Since then, observers have seen a number of other apparently unremarkable stars pour out far more x-rays and visible light than usual for a few hours or days. The outbursts resemble solar flares, which go off when the sun's magnetic field suddenly rearranges itself. But these “superflares” are 100 to 10 million times more energetic, and no one has been able to explain them.

    Surprisingly, Schaefer found that nine of the superflaring stars are very much like the sun. Any theory must explain why the sun doesn't have them, says Schaefer, and Rubenstein's model does just that.

    In their duration and energy, the superflares are very much like the outbursts of a certain type of binary star, known as RS CVn binaries, says Rubenstein. In these systems, two magnetized stars orbit each other so closely that their magnetic field lines become entangled. Eventually, the field lines reconnect into a more relaxed configuration, like twisted rubber bands suddenly unsnapping, and energy is released in a tremendous burst of x-rays, ultraviolet radiation, and visible light.

    The nine sunlike stars don't have a close companion, but “a similar interaction could occur with a Jupiter-like planet orbiting the star at close distance,” says Rubenstein. Over the last couple of years, many of these “hot Jupiters” have been found around sunlike stars. He says the sun is relatively quiet because Jupiter and Saturn, with their strong fields, orbit at a safe distance.

    For the model to work, the superflaring stars should have strong magnetic fields. “We've checked the field strengths for two of them” by studying the stars' spectra, says Schaefer, “and they both turn out to have very strong fields.” According to Rubenstein, “the model doesn't need any new physics. We know stars with strong magnetic fields exist. We know hot Jupiters exist. And the model provides a natural explanation for the fact that the sun doesn't have superflares.”

    Solar flare expert Kees de Jager of Utrecht University in the Netherlands is cautious, however. “It's always easy to come up with a qualitative model,” he says. “I'd like to see a quantitative analysis” of whether the interaction of a star's magnetic field with a planet's really could lead to the observed energetic bursts. Rubenstein agrees. “I'll have to work on that before submitting a paper,” he says.

    Meanwhile, Schaefer thinks that watching for flares could guide searches for extrasolar planets. He proposes building a wide-angle telescope with a dedicated camera, which could scan over a million sunlike stars every night. “Superflaring stars might be the ones planet hunters should pay more attention to,” he says.

  14. AMERICAN ASTRONOMICAL SOCIETY MEETING

    Cosmic Expansion, Poco Adagio

    1. James Glanz

    AUSTIN, TEXAS—An exploding star called Albinoni, shining from when the universe was less than half its present age, is providing astronomers with a fresh handle on a mysterious energy that seems to permeate the cosmos and boost its expansion rate. A preliminary analysis of Albinoni—at roughly 9 billion light-years away, the most distant supernova ever seen—hints that at the farthest distances and earliest times yet probed, the expansion may not have been accelerating as it appears to be doing today. That's just what theory predicts, Saul Perlmutter of Lawrence Berkeley National Laboratory in California said at the meeting.

    Perlmutter, who leads one of two international teams that discovered the accelerating expansion from less distant explosions (Science, 18 December 1998, p. 2156), stressed that “we just have had a chance to look at our discovery image and make a rough estimate of the brightness of the supernova.” The apparent brightness of supernovae is a measure of their distance, and therefore of the rate at which cosmic expansion has swept them away over billions of years. The supernovae studied up to now were a little dimmer, and hence farther, than expected, implying that cosmic expansion has sped up since they exploded.

    Extremely distant supernovae, shining from well back in cosmic history, should reveal a change in the cosmic push at the earliest times if the background energy, called the cosmological constant, or lambda, is real. That's because the density of this energy throughout space should be constant for all time, so the push it produces to counteract gravity and accelerate expansion is also constant. In the early universe, where the same amount of gravitating matter as today was packed into a smaller volume, gravity would have been strong enough to overwhelm lambda and slow the expansion. Lambda would win out and produce an accelerating universe only in the last few billion years, as gravity's grip weakened.
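    In modern notation, that argument can be made quantitative with the standard Friedmann acceleration equation; the density parameters below are illustrative assumptions, not values from the article:

        \[
          \frac{\ddot{a}}{a} \;=\; -\,\frac{4\pi G}{3}\,\rho_m \;+\; \frac{\Lambda c^2}{3},
          \qquad \rho_m = \rho_{m,0}\,a^{-3},
        \]
        so the expansion accelerates only once the constant \(\Lambda\) term outweighs
        the diluting matter term. In terms of present-day density parameters, the
        crossover redshift \(z_\ast\) satisfies
        \[
          (1+z_\ast)^3 \;=\; \frac{2\,\Omega_\Lambda}{\Omega_m}
          \;\approx\; \frac{2(0.7)}{0.3} \;\approx\; 4.7,
          \qquad\text{so } z_\ast \approx 0.7 .
        \]

    A supernova well beyond that crossover would thus have exploded during the decelerating, matter-dominated epoch, which is why it should look slightly brighter, and hence nearer, than continuous acceleration would predict.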

    Albinoni, spotted late last year, seems to be a little brighter—hence nearer—than it would be if the expansion had been accelerating continuously since it exploded. Perlmutter stresses that this conclusion could change with further observations and analysis. But for now, it shows the power of distant supernovae for distinguishing between the cosmological constant and possible confounding factors, such as dust. If a haze of cosmic dust, rather than an accelerating universe, is what dims the nearer supernovae, distant supernovae should also be anomalously dim, not bright, said Robert Kirshner of the Harvard-Smithsonian Center for Astrophysics, a member of the other supernova team who spoke at the same session. “Pushing to bigger [distances] is definitely the way to see the effect of the lambda cosmology as distinct from dust,” explained Kirshner, who said that his own team is also chasing remote explosions.

  15. AMERICAN GEOPHYSICAL UNION MEETING

    New Data Hint at Why Earth Hums and Mountains Rise

    1. Richard A. Kerr

    SAN FRANCISCO—Topics ranging from the atmosphere to the inner Earth were served up at the annual fall meeting of the American Geophysical Union here last month. Below, we report two surprising new ideas on how the solid Earth interacts with the atmosphere and with the water on its surface: why Earth hums and how a river may be able to raise a mountain.

    Big Rivers May Make Big Mountains

    Everyone knows that rivers whittle down mountains, but at the meeting an international team of researchers stood that idea on its head, at least for some of the world's tallest peaks and most powerful rivers. The team concluded that Nanga Parbat, the “killer mountain” of the Himalayas and the ninth highest mountain in the world, reaches its lofty height because the nearby Indus River triggers a deep-seated rise in the Earth's crust.

    According to a group of geologists and geophysicists led by geochronologist Peter Zeitler of Lehigh University in Bethlehem, Pennsylvania, rapid erosion by the Indus creates a “tectonic aneurysm”—a weak spot in the crust where deep, hot rock bulges upward and carries Nanga Parbat up with it. “They've got a diverse array of evidence that this is real,” says geomorphologist Robert Anderson of the University of California, Santa Cruz. “I'm excited about the idea.” If the causative link between river and mountain is confirmed, it would be a new way to make the planet's highest ground.

    Mountaineers are in awe of Pakistan's 8125-meter-high Nanga Parbat, the last big peak at the western end of the Himalayan chain. And the mighty Indus snaking nearby is a fitting companion. In spring, snowmelt over 100,000 square kilometers of high terrain creates monstrous cataracts as the river flows south and falls off the edge of the Tibetan Plateau. The 7-kilometer elevation difference from the top of Nanga Parbat to the Indus 25 kilometers away is the greatest single vertical drop on land.

    Not only is Nanga Parbat tall, it seems to be rising at a geologically dizzying pace. Recent rock analyses by the Nanga Parbat Continental Dynamics Project, a collaboration of 26 researchers from the United States, Pakistan, New Zealand, and France, have confirmed that the rock of the mountain rose an average of 3 to 6 millimeters per year during the past 3 million years, for a total rise of 9 to 18 kilometers, although much of the upthrust rock has now been sheared away by erosion. Beneath the mountain, seismic and electromagnetic probing reveals a mass of hot and therefore weak rock. That hot rock fuels Nanga Parbat's unusual hot springs and seismic activity—but one would expect a mountain with such weak underpinnings to sink, rather than rise. Other rapidly rising Himalayan peaks such as Everest, for example, are supported by many kilometers of cold, rigid rock.

    To explain Nanga Parbat's incongruous heights, Zeitler and his colleagues propose that the erosive power of the Indus drives a cycle of crustal weakening and uplift. In their scenario, erosion weakens the crust in two ways. First, the Indus cut through the Himalayan crust as rapidly as the collision of India with Asia pushed it up. This erosion thinned and weakened the crust there, much as a groove filed in a piece of glass creates a weak spot. And because the collision of India and Asia is compressing the crust, the weak spot becomes the easiest place for the crust to bulge upward.

    In the second weakening process, as the river's erosion removed weight from the upper crust, deeper, hotter crust rose rapidly to replace the missing mass. Hotter rock is weaker, so the crust weakened further and bulged upward even more. As the hot rock rose and the pressure on it was reduced, some of it melted, further weakening the rock in a positive feedback loop that accelerated the swelling of the crust. In a model run by the project's geodynamic modeler, Peter Koons of the University of Otago in New Zealand, the runaway bulging of a tectonic aneurysm takes off when a river removes about 5 kilometers of crust, or about a million years' worth for the Indus. Less powerful rivers can't remove rock fast enough to get the feedback going.

    “It's a very interesting idea,” says Anderson. “The localization of [the Nanga Parbat uplift] is quite dramatic; I don't think it's a coincidence” that it lies next to an equally dramatic downcutting by the Indus. But not everyone is ready to believe that rivers can lift mountains. “Big rivers don't make big mountains everywhere they go, [and] there are other 8-kilometer mountains without rivers,” notes geologist Lincoln Hollister of Princeton University.

    Hollister thinks that crustal weakness and uplift at Nanga Parbat instead largely stem from a broader regional cause, namely the India-Asia collision itself. The peak sits at a narrow corner of the Indian plate, where compressional forces are intensified. They may be shoving lower crust upward more strongly in that spot, he says, leading to melting and runaway weakening.

    Zeitler and his colleagues respond that they have identified a similar juxtaposition of big mountain and big river—Namche Barwa massif and the Tsangpo River—at the eastern corner of the Indian plate, where the geometry is different and there's no reason to suspect that tectonic surging is at work. Resolving the question, says Zeitler, may require determining whether, as the tectonic aneurysm mechanism would predict, Nanga Parbat and Namche Barwa first popped up at the same time as their rivers began spilling off the plateau.

    Earth Seems To Hum Along With the Wind

    Last year some seismologists pricked up their ears to an odd sound. The whole planet vibrates with a deep, soft hum, they said, far below the range of human sensation and imperceptible to all but the most sensitive seismographs. The claim was met with some surprise, especially because no one knew what could be prompting such a steady, whole-Earth oscillation; the big earthquakes that can set Earth clanging like a bell are too rare. But at the meeting, it was clear that seismologists now accept the reality of the hum, and one group presented data suggesting that the winds of the atmosphere, rather than something within Earth, excite the hum.

    Seismologists have been recording Earth's bouts of ringing ever since the great Chilean earthquake of 1960 (magnitude 9.5) set the entire planet vibrating for days on end with oscillations that moved the ground up and down as much as a centimeter. Even quakes as small as magnitude 6 can set Earth ringing, albeit far more quietly. But generating the low hum detected last year, which has periods of 3 to 8 minutes, would require a continual string of magnitude 5.8 earthquakes, according to seismologist Göran Ekström of Harvard University. Such quakes strike on average only every few days—but Earth keeps humming day in and day out, with only occasional dips in intensity, according to Ekström's analysis.

    Because the hum had no obvious cause, researchers were at first skeptical, but acceptance grew as more and more credible observations came in late last year. And because known earthquakes seemed unable to power it—all the world's smaller earthquakes summed together still seemed too weak—some seismologists suggested that small undetected quakes, perhaps in the ocean floor, might be at work. Others looked to wind, ocean currents, or even lurching tectonic plates.

    At the meeting, seismologist Naoki Suda of Nagoya University in Japan and his colleagues reported a hint that winds are responsible. They summed 50 to 80 days of seismic records at four especially quiet sites around the globe, removed background noise, and found that the hum tended to wax and wane throughout the day with a global rhythm. Whatever the site—Europe, South Africa, or central Asia—the hum was strongest from noon to 8:00 p.m. Greenwich time and weakest from midnight to 6:00 a.m. That's the same pattern of activity followed by the sum of the world's thunderstorms, notes Suda: Overall, storm activity on Earth tends to increase as the sun stokes storms over Africa and southeast Asia and decrease as night falls on those particularly intense centers of storm activity. The correlation is preliminary, he adds, but it supports the idea that the turbulent winds of thunderstorms striking the surface are setting up the seismic hum.
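    Suda's stacking procedure can be illustrated with a toy version: average the hum amplitude in each Greenwich hour across many days and look for a daily cycle. The sketch below runs on synthetic numbers with a planted mid-afternoon peak; the real analysis works on long seismic records with careful instrument and noise corrections:

        # Stacking hypothetical hum amplitudes by GMT hour (a sketch only).
        import math, random

        random.seed(1)
        samples = []
        for day in range(60):                     # ~60 days of hourly records
            for hour in range(24):
                # Planted daily cycle peaking near 16:00 GMT, plus noise.
                cycle = 1.0 + 0.15 * math.cos(2 * math.pi * (hour - 16) / 24)
                samples.append((hour, cycle + random.gauss(0, 0.3)))

        # Stack: mean amplitude in each GMT hour across all days.
        stack = [0.0] * 24
        counts = [0] * 24
        for hour, amp in samples:
            stack[hour] += amp
            counts[hour] += 1
        stack = [s / c for s, c in zip(stack, counts)]

        peak = max(range(24), key=lambda h: stack[h])
        trough = min(range(24), key=lambda h: stack[h])
        print(f"Hum strongest near {peak:02d}:00 GMT, weakest near {trough:02d}:00 GMT")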

    Although “everybody now agrees that these [oscillations] are real,” says seismologist Guy Masters of the Scripps Institution of Oceanography in La Jolla, California, “I don't think the wind-stress mechanism has been proved.” He and others want to see more daily records from more sites processed in other ways before they give up on an Earth-based mechanism. Of course, these seismologists have their own biases. Says one: “I'm hoping it's something internal to Earth, because that's more interesting.”
