News this Week

Science  13 Oct 2000:
Vol. 290, Issue 5490, pp. 242


    Public-Private Project to Deliver Mouse Genome in 6 Months

    1. Eliot Marshall

    Research on the mouse genome lurched into the fast lane last week, as private donors joined the U.S. government to step on the gas. A public-private consortium announced on 6 October that it's kicking $58 million into a new fund that will pay to sequence the DNA of the “black six” (C57BL/6J) strain of laboratory mouse. The consortium aims to produce a draft version of the genome by the end of February.

    Scientists say the project will be a valuable tool for finding genes and other control regions in the human genome. But the venture may not thrill everyone: It will erode the exclusive position of Celera Genomics in Rockville, Maryland, which this year sequenced the genomes of three mouse strains and is selling viewing rights.

    Celera president Craig Venter, who was quoted in The New York Times as saying that public funds would be better used on polishing the unfinished human genome, is now taking a cooler view: “I want to stress the positive,” he told Science. The public project will “complement” Celera's work, he says, because it will focus on a mouse strain that differs from the three being sequenced by Celera (129, A/J, and DBA/2J). Venter says Celera researchers are finding an “extraordinary” amount of variation among the three strains, much more than they had expected. As Science went to press, Celera was planning to announce that it had completed 9 billion base pairs of mouse sequencing—three genomes at a single pass. Venter added, “I applaud the public effort” for releasing more raw data than in the past; he says this will enable Celera to improve the quality of its own database.

    The nonprofit “Mouse Sequencing Consortium,” in fact, is promising an unprecedented degree of public access. It intends to release the data in real time and put even the “traces” from DNA sequencing machines in a public database within a week. The human genome project didn't go this far, although it did release results every day.

    Researchers are delighted. “It's cool,” says Carol Bult, head of a bioinformatics group that works on mouse genomics at the Jackson Laboratory in Bar Harbor, Maine. She predicts that, as the data tumble out, “you'll be able to do all kinds of gene hunting studies in silico,” downloading mouse and human DNA from databases and comparing them. “Having the mouse really will help” make sense of how human genes function, she says.

    Richard Klausner—director of the National Cancer Institute, one of the six institutes at the National Institutes of Health that pledged $34 million to this venture—says the decision to back the project reflects “the outcome of several years of discussions about the importance of the mouse sequence.” It will not divert attention from the human genome, he added, as that work is supported independently by the National Human Genome Research Institute (NHGRI). The new project, he says, “gives us the ability to see what's highly conserved” in mouse and human DNA, and by comparing the two, “to define [genes] and the borders of regulatory sequences.”

    View this table:

    According to NHGRI director Francis Collins, the SmithKline Beecham (SKB) pharmaceutical company of London initiated the mouse consortium in talks beginning about 2 months ago. The company donated $6.5 million itself and helped bring in other supporters, including The Merck Genome Research Institute in West Point, Pennsylvania ($6.5 million); the DNA chip-maker Affymetrix Inc. of Santa Clara, California ($3.5 million); and the Wellcome Trust ($7.5 million), a British charity. SKB spokesperson Rick Koenig says that the company's research chief, Tadataka Yamada, felt that it was important to “pool our resources to accelerate the sequencing of the mouse genome.”

    The technical plan relies heavily on the “whole genome shotgun” method pioneered by researchers working with Venter at The Institute for Genomic Research in Rockville, many of whom followed Venter when he became president of Celera. The entire mouse genome will be chopped into pieces and cloned into bacteria, which will be “fingerprinted” and distributed to labs for sequencing. The three-pass sequencing work will be carried out by robotic devices at Eric Lander's center at the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts; Robert Waterston's at the Washington University School of Medicine in St. Louis; and Allan Bradley's group at the Sanger Centre in Hinxton, U.K. All three groups have pledged to transmit raw data directly from their robots to public databases on the Internet, with no strings attached.
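The "three-pass" figure refers to sequencing depth: enough randomly sheared reads to cover the genome about three times over, which statistically tiles most, but not all, of it. A toy sketch (with invented numbers far smaller than the real project's) shows why:

```python
import random

random.seed(0)

GENOME_LEN = 1000  # toy "genome"; the real mouse genome is ~2.5 billion bases
READ_LEN = 50      # toy read length
PASSES = 3         # "three-pass" sequencing, i.e. roughly 3x coverage

genome = "".join(random.choice("ACGT") for _ in range(GENOME_LEN))

# Shear the genome into random reads until the total sequenced
# bases equal PASSES times the genome length.
num_reads = PASSES * GENOME_LEN // READ_LEN
starts = [random.randrange(GENOME_LEN - READ_LEN + 1) for _ in range(num_reads)]
reads = [genome[s:s + READ_LEN] for s in starts]

# How much of the genome do the random reads actually tile?
covered = [False] * GENOME_LEN
for s in starts:
    for i in range(s, s + READ_LEN):
        covered[i] = True
fraction_covered = sum(covered) / GENOME_LEN  # expect roughly 1 - e^-3, ~95%
```

At threefold coverage, random shearing is expected to leave roughly e⁻³, about 5%, of the genome untouched, which is one reason a "draft" genome still contains gaps.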

    Even though Celera has already produced rough drafts of the genomes of three other lab mice, Bult says the new data will not be redundant. Researchers fortunate enough to have access to all four genomes, she notes, may use them to do sophisticated “dissection of complex genetic traits.” By crossbreeding strains, tracking the movement of DNA, and observing the physiological effects in offspring, she suggests, researchers may learn how genes interact to regulate complex phenomena such as obesity. But at the moment, Bult can't see all those valuable mouse genomes because the Jackson lab hasn't paid for access to Celera's data.

    Nobel Prizes

    As this issue of Science went to press, the Nobel Prize in physiology or medicine was awarded to Arvid Carlsson of the University of Gothenburg, Paul Greengard of Rockefeller University, and Eric Kandel of Columbia University for their work on signal transduction in the brain. The Nobel Prize in physics was awarded to Jack Kilby of Texas Instruments for his part in the invention of the integrated circuit, and to Zhores Alferov of the A. F. Ioffe Physico-Technical Institute in St. Petersburg, Russia, and Herbert Kroemer of the University of California, Santa Barbara (UCSB), for developing fast micro- and optoelectronic components. The chemistry prize went to Alan Heeger of UCSB, Alan MacDiarmid of the University of Pennsylvania, and Hideki Shirakawa of the University of Tsukuba for the discovery and development of conductive polymers. See ScienceNOW on 10 October for coverage of these announcements. Full reports will appear in next week's issue of Science.


    Gore, Bush Aides in Friendly Tussle

    1. David Malakoff

    There are substantive differences between George W. Bush and Al Gore Jr. on science and technology policy, standard-bearers for the two candidates said during a Washington debate last week. But the genial 90-minute joust revealed a lot of similarities, too. (See page 262 for the candidates' own responses to questions posed by Science.)

    The candidates agree on many issues, acknowledged Gore aide David Beier and Bush adviser Robert Walker. Both would double funding for basic medical research, boost spending on other civilian and military science, make the R&D tax credit permanent, and spend billions of dollars to improve elementary and secondary school math and science education.

    But Walker—who heads The Wexler Group, a Washington lobbying firm, and is a former chair of the House Science Committee—pointed to industrial research as one area of disagreement. He said Bush dislikes using taxpayer funds for industry to develop emerging technologies. That concept is a centerpiece of the Clinton Administration's Advanced Technology Program (ATP), a $200-million-a-year effort that has been a perennial target for congressional Republicans. The government could better encourage companies to fund risky applied science, Walker argued, by changing tax and liability laws that currently create “too many barriers to innovation” and “an atmosphere where new ideas are threatened by lawsuits.” Beier, a former executive with the biotech company Genentech Inc. who is now Gore's domestic policy adviser, defended ATP and other programs that support precompetitive industrial research. He said that Walker was drawing “an artificial distinction between [investments in] basic and applied research that sometimes doesn't serve policy-makers very well.”

    Beier, citing Gore's “abiding interest” in science and technology and his stints on the House and Senate committees that oversee science policy, argued that Gore's science and technology credentials are “probably better than any presidential candidate's in American history.” Walker conceded that point, but suggested that Gore's experience hasn't been put to good use. As vice president, he charged, Gore has “built government stovepipes” that have limited the flexibility and effectiveness of R&D spending programs.

    The candidates' tax and education spending initiatives also came under scrutiny. Walker touted a $1 billion Bush initiative to link state universities to local school efforts to improve math and science teaching, and he also defended the Texas governor's tax proposal against charges that it would benefit only wealthier Americans. “These are the people who are investing” in technologies, such as the Internet, that are “fundamental to the new economy's growth,” he said. Beier shot back that Bush “seemed to be fixated on protecting the tax status” of a small number of people. The Gore campaign also used the occasion to unveil a committee of scientists, led by Nobel laureates Harold Varmus and Murray Gell-Mann, who are backing the candidate.

    Given the chance to comment, one audience member took a humorous jab at Bush's repeated reference in the first televised debate to the “fuzzy math” of his opponent. When, asked one researcher, will the candidates “stop disparaging a very productive branch of science?”

    The forum was sponsored by a coalition of science policy groups and hosted by the American Association for the Advancement of Science (which publishes Science).


    Research Groups Win Delay in Rules

    1. David Malakoff

    Biomedical research groups have won a last-minute reprieve from threatened regulations covering laboratory mice, rats, and birds. In a surprise reversal, Congress voted this week to bar the U.S. Department of Agriculture (USDA) from following through on a pact with animal-rights groups to draft rules for the animals (Science, 6 October, p. 23). The provision, introduced by Senator Thad Cochran (R-MS), was attached to an agriculture spending bill.

    Animal-welfare advocates were stunned by the development, which became public just as they were celebrating a federal judge's decision to approve the pact after a long-running legal battle. “We are appalled at the lengths to which some biomedical trade associations will go to avoid their legal and moral responsibilities to the welfare of lab animals,” said John McArdle, head of the Alternatives Research & Development Foundation (ARDF), which had sued USDA.

    Under the settlement, USDA agreed to reverse the agency's 30-year-old policy of exempting rodents and birds—which constitute 95% of research animals—from regulation under the Animal Welfare Act. The settlement was opposed by a coalition of research groups that included the National Association of Biomedical Research (NABR), the Association of American Medical Colleges, the Federation of American Societies for Experimental Biology, and the Association of American Universities.

    To achieve its ends, the coalition enlisted Wallace Conerly, dean of the University of Mississippi Medical Center in Jackson. He telephoned Cochran, the third-ranking Republican on the Senate Agriculture Committee. Cochran responded by adding language to the agriculture appropriations bill that prevents USDA from drafting the new animal-care regulations during the 2001 fiscal year, which began on 1 October. The bill faces a threatened veto by President Bill Clinton on other issues, but Cochran's provision is expected to survive.

    The ban “is great news,” says Conerly, who worries that new rules would “cost [his university] millions of dollars.” NABR's Barbara Rich says the delay will allow policy-makers to “take full consideration of the [settlement's] consequences for research.” The issue, she predicts, “isn't going away.”


    Texas Scientist Admits Falsifying Results

    1. David Malakoff

    A University of Texas (UT) immunologist has admitted to federal officials that he falsified research results over at least a 5-year period, leaving a trail of retracted papers and disgruntled collaborators. The scientist, who resigned and has been barred from receiving federal research grants, was found to have repeatedly duped colleagues by spiking test tubes with doses of a radioactive marker that produced positive results, according to detailed reports by UT and federal investigators.

    Last month, the federal Office of Research Integrity (ORI) announced that William A. Simmons of UT Southwestern Medical Center in Dallas had signed a statement admitting misconduct and accepted several penalties, including a 5-year ban on receiving federal research grants. ORI investigates misconduct allegations involving studies funded by the National Institutes of Health (NIH). Officials at the medical center, where Simmons worked under medical professor Joel Taurog, said in a statement that they are considering “further disciplinary action.” This could include financial penalties and the revocation of his 1996 Ph.D.

    Simmons could not be reached for comment, and Taurog declined to speak to Science. The following account is based on ORI and university investigation records obtained through the Freedom of Information Act.

    Simmons enrolled as a graduate student at UT Southwestern in 1989. After receiving his degree in 1996, he won a postdoctoral position in Taurog's lab, which uses transgenic rats and mice to study the role of the gene HLA-B27 in a group of autoimmune diseases, including a painful spinal arthritis known as ankylosing spondylitis. Over the next 2 years, Simmons published a series of papers, including a 1997 Immunity paper suggesting that an undiscovered gene—dubbed Cim2—influences HLA-B27's behavior. Efforts to find Cim2, however, stalled after Simmons left Taurog's lab in 1998 for a corporate job. The postdoc who replaced Simmons “had almost no success in reproducing any of” Simmons's work, according to ORI records, and Simmons was rehired as an untenured faculty member in April 1999 “to help straighten out” the research.

    Within days Simmons's work came under suspicion, however, when an unidentified co-worker observed him pipetting fluid into vials used to test the activity of certain immune system cells. The procedure was not consistent with the experimental protocol. Returning later to investigate, the co-worker discovered a wash bottle and testing vials full of radioactive chromium 51—a substance that should not have been present in the vials until days later in the experiment.

    The experiments involved putting killer T cells and radioactive target cells together in the vials, then assessing the activity of the killer cells by measuring the radioactivity released by the target cells. By adding predetermined quantities of the radioactive chemical to the vials, investigators say, Simmons produced results that were in line with expectations.
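The article doesn't spell out the assay's arithmetic, but the standard ⁵¹Cr-release readout is "percent specific lysis," computed from radioactivity counts per minute (cpm). The sketch below uses invented numbers to show how spiking extra chromium into a vial inflates the apparent killing:

```python
def percent_specific_lysis(experimental_cpm: float,
                           spontaneous_cpm: float,
                           maximum_cpm: float) -> float:
    """Standard 51Cr-release calculation: radioactivity released by target
    cells, corrected for spontaneous leakage and scaled against the maximum
    obtainable by fully lysing the targets (e.g., with detergent)."""
    return 100.0 * (experimental_cpm - spontaneous_cpm) / (maximum_cpm - spontaneous_cpm)

# Invented counts: an honest vial versus one spiked with extra 51Cr.
honest = percent_specific_lysis(600.0, 100.0, 1100.0)  # 50.0% specific lysis
spiked = percent_specific_lysis(900.0, 100.0, 1100.0)  # 80.0% specific lysis
```

Adding a predetermined amount of ⁵¹Cr directly to the vial raises the experimental count, and with it the apparent killer-cell activity, without any real killing having occurred.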

    After university officials were told of the co-worker's suspicions, they decided to investigate by laying an artfully designed trap. Simmons was asked to test cells that he was told should produce one type of result when, in fact, they should have produced the opposite. ORI documents explain that the test was designed to rule out the possibility that the whistle-blower was acting out of “possible frustration or anger at being unable to replicate Dr. Simmons['s] work [and] had himself spiked the vials.” Simmons failed the test, and on 29 April 1999 university officials placed him on administrative leave. He resigned 2 months later following an investigation by three UT Southwestern academics—Frederick Bonte, head of the radiology department; Paul Bergstresser, head of the dermatology department; and James Forman, an immunology professor.

    Simmons also falsified results on samples sent to him by collaborating researchers, concluded a subsequent investigation conducted by ORI. “A preponderance of the evidence” showed that Simmons had “systematically” falsified results “throughout his tenure as a graduate student and postdoctoral fellow,” states the ORI report. Despite earlier denials of the allegations, Simmons signed an ORI settlement agreement on 10 August that called for the retraction of the 1997 Immunity paper and three others published since 1993 in the Journal of Immunology and Immunogenetics. A table in a 1998 Journal of Experimental Medicine paper was also withdrawn.

    In the aftermath of the revelations, some of Simmons's former collaborators at The Jackson Laboratory in Bar Harbor, Maine, and the Wellcome Human Genetics Center in Oxford, U.K., are taking a tougher approach to cooperative research. “It's made me much more careful,” says Derry Roopenian of the Jackson Lab, noting that he now deliberately hides the identity of reagents and other shared molecular tools from cooperating researchers in order to “blind” experiments. But most of all, Roopenian is upset that a number of young scientists—in his lab and elsewhere—“wasted a lot of time and money trying to reproduce results that weren't real to begin with.”


    Experts Call Fungus Threat Poppycock

    1. Richard Stone

    CAMBRIDGE, U.K.— The script seems straight from a John le Carré novel. A former bioweapons lab in Uzbekistan tinkers with a fungus that destroys opium poppies, which Western antinarcotic teams then unleash on poppy fields in Afghanistan. Furious, Afghan heroin cartels retaliate by modifying the fungus to kill food crops in Western countries.

    True? A documentary that was aired last week by the BBC and created a stir here paints the scenario as plausible. But experts contacted by Science play down the threat.

    The real-life story begins in December 1989. A Soviet deputy minister “raised the issue of biological control of illicit narcotic crops” with a U.S. assistant secretary of state, according to Eric Rosenquist, head of the narcotics research program at the U.S. Department of Agriculture's Agricultural Research Service (ARS). The Soviet Union then approached the United Nations Drug Control Program (UNDCP) with proposals to develop biocontrol agents against opium poppies and marijuana plants that may be more effective and environmentally benign than herbicides, including 2,4-D and glyphosate. After the Soviet Union unravelled, several institutes—including some former bioweapons labs—pursued these proposals with help from the UNDCP.

    One such lab, the Institute of Genetics in Tashkent, Uzbekistan, approached the U.S. embassy in Tashkent in May 1996 with its research on a naturally occurring fungus, Pleospora papaveracea, that kills poppies by attacking their roots. The institute, which the Soviet military had backed to develop agents to destroy crops, subsequently received U.S. and British funding.

    The institute is now testing a version of P. papaveracea that can be sprayed from a plane. Research shows that the fungus doesn't affect any of 130 closely related plant species. On a recent visit by Science to the lab, institute director Abdusattar Abdukarimov said that the treatment could be deployed in a few years and that the research site, near the Afghan border, is heavily guarded.

    The BBC program, “Britain's Secret War on Drugs,” recycles concerns raised 2 years ago in the media that the Uzbek institute's efforts “touch the edge of biological warfare.” In the program, Paul Rogers, a plant pathologist at the University of Bradford in the U.K., says the work “is providing new evidence as to how biological warfare could be used against crops.” He later told The Guardian that “drug cartels could themselves acquire the technology and in revenge attacks use a form of agricultural terrorism against Britain or the U.S.”

    Other experts, however, play down such fears. “If drug cartels did acquire the fungus, they would have to adapt it to become a pathogen of food crops, and this would not be a trivial project,” says plant pathologist Jan Leach of Kansas State University in Manhattan. Rosenquist questions whether P. papaveracea will ever become the weapon of choice against opium poppies. So far, he says, from the ARS's perspective the field tests have fallen short of showing its effectiveness as a herbicide.

    Ironically, learning how P. papaveracea behaves and how to target it to certain fields may someday protect legitimate opium poppy plantations. The Uzbek work, says Rosenquist, could help “safeguard world supplies of analgesics” such as morphine.


    New Reaction Promises Nanotubes by the Kilo

    1. Robert F. Service

    Nine years ago, the news roused the slow-but-steady world of organic chemistry like a double espresso: Japanese researchers had discovered that carbon atoms can assemble themselves into tiny tubes with amazing properties. One hundred times as strong as steel and able to conduct like either metals or semiconductors, carbon nanotubes were soon being touted for uses as down to earth as lightweight fuel tanks and car bumpers and as fanciful as cables for elevators into space. The hitch, so far, has been that the most promising tubes—single layers of carbon atoms arrayed like sheets of rolled-up chicken wire—can be made only by the thimbleful. As a result, they have cost up to $2000 a gram, enough to make a single nanotube-based fuel tank worth more than a fleet of Lamborghini automobiles. But perhaps no longer.

    At a meeting in Boston* last week, researchers from Rice University in Houston, Texas, reported a new chemical process for making single-walled nanotubes (SWNTs), potentially by the kilogram. The scheme combines simple and abundant gaseous precursors that react to form iron-based catalyst particles, which then promote the growth of the nanotubes. And because that type of gas-phase synthesis is akin to the way bulk plastics are made today, the new scheme has clear potential to be scaled up to make industrial quantities. “Within the next year we should easily be able to produce 10 kilograms of this stuff [in the lab],” says Richard Smalley, the leader of the Rice team, who shared the 1996 Nobel Prize in chemistry for his part in the discovery of fullerenes, a class of three-dimensional carbon molecules that includes nanotubes.

    “It's a very important development that nanotubes can be made in big quantities,” says Walt de Heer, a nanotube expert at the Georgia Institute of Technology in Atlanta. “It implies that the price [of nanotubes] will come down, and this could allow their use as large-scale construction materials.” Still, de Heer cautions that inexpensive ingredients don't guarantee low costs. The round fullerenes known as buckyballs can be made from cheap starting materials, he points out, yet they remain more expensive than gold.

    Smalley's new scheme isn't the first to use catalysts to create nanotubes. In 1995, his team at Rice came up with a method that blasts a graphite target with lasers in the presence of catalytic metal particles. The intense heat generated by the lasers blasts the graphite into a vapor of carbon atoms, which the metal particles then help to coalesce into nanotubes. But the laser apparatus is expensive and has yielded only about 300 grams of SWNTs in the past 2 years. What's more, the tangle of SWNTs that the process creates is contaminated with about 10% carbon soot, which must then be removed in another step to yield the pure nanotubes.

    In search of better results, Smalley and his postdocs Michael Bronikowski and Peter Willis took a hint from the bulk-plastics industry. They looked for ways to make both the catalyst and nanotube starting materials gaseous. The key turned out to be a molecule called iron pentacarbonyl, which has an iron atom surrounded by five carbon monoxide (CO) groups. They spray this compound along with additional CO into a chamber heated to about 1000°C. The heat rips the CO arms off the iron atoms, leaving the lone atoms energetically unhappy and eager to bond with one another to form more stable clusters. And—as in the laser SWNT scheme—those metal clusters excel at producing SWNTs. Meanwhile, the high temperature also causes CO molecules to react with one another to form the more stable CO2, leaving behind lone carbon atoms, which quickly find the iron nanoparticles and begin to grow a SWNT. “The SWNTs just fall out of the chamber in an essentially pure form,” Smalley says.
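Read as chemistry, the process described here corresponds to two textbook reactions, sketched below for clarity (the article gives only the prose description): thermal decomposition of iron pentacarbonyl, followed by disproportionation of carbon monoxide, whose freed carbon feeds nanotube growth on the iron clusters.

```latex
\mathrm{Fe(CO)_5} \;\xrightarrow{\;\sim 1000^{\circ}\mathrm{C}\;}\; \mathrm{Fe} + 5\,\mathrm{CO}
\qquad\qquad
2\,\mathrm{CO} \;\longrightarrow\; \mathrm{CO_2} + \mathrm{C}\ \text{(feeds SWNT growth)}
```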

    Still, Smalley cautions, “this isn't the ultimate” when it comes to making SWNTs. The tubes, he says, wind up as a tangled mat rather than perfectly aligned fibers. They also vary slightly in diameter, a drawback that can create tubes with a range of electronic properties. But Smalley and colleagues are confident that they can iron out the glitches. Last week, they announced that they were forming a new company—Carbon Nanotechnologies Inc.—to commercialize their SWNT production process. If their scale-up plans pay off, they may finally turn nanotubes from a research curiosity into the technological successor to plastics.

    *American Vacuum Society, 47th International Symposium, Boston, Massachusetts, 2–6 October.


    Video Game Images Persist Despite Amnesia

    1. Laura Helmuth

    The video game Tetris can be found on computers in almost any lab; grad students need their entertainment, after all. But few researchers have put the game to more explicitly scientific use than Robert Stickgold and his colleagues at Harvard Medical School in Boston. On page 350, they report the results of new work in which they used the game—which involves spatial reasoning to slot falling blocks strategically into place—to study how the brain reviews what it has learned.

    The researchers found that people who have just learned to play Tetris have vivid images of the game pieces floating before their eyes as they fall asleep, a phenomenon the researchers say is critical for building memories. Neuroscientists have long known that memory consolidation goes on during sleep. But much more surprisingly, the team also found that the images appear to people with amnesia who have played the game—even though they have no recollection of having done so. Apparently, Stickgold says, the amnesics still have access to perceptual memories in the cortex, despite damage to the hippocampal regions of their brains that prevents them from recalling the game.

    The finding suggests that the “parts of the brain responsible for the inability to learn must be different from those responsible for the images,” says Richard Haier of the University of California, Irvine, whose previous brain imaging studies of Tetris players showed that many brain areas become active when novices first learn the game.

    In the current work, Stickgold and his colleagues focused on Stage 1 sleep, the stage at which, as Stickgold explains, “your significant other pokes you and says you're asleep, and you say ‘No, I'm not.’” In this case, a researcher did the poking. Novice Tetris players spent hours learning the game during the day, and then when they went to bed, researchers woke them up repeatedly during the first hour of sleep to ask what was on their minds (aside from wanting to sleep in peace). Nine of 12 people without amnesia said they'd seen images of Tetris blocks, sometimes rotating and sliding into place as they do during the game.

    The researchers also tested five people who had extensive damage to the hippocampus and surrounding areas of the temporal lobe that prevented them from building new so-called explicit, or declarative, memories. For example, the subjects had no memory of the game (or the experimenter) from session to session. And even though other studies have shown that people with amnesia can learn new skills, such as tracing a shape seen only mirror-reversed, they didn't learn Tetris very well. “With complex skills like [those required by] Tetris, subjects may have to remember things declaratively,” says memory researcher Larry Squire of the University of California, San Diego. Because the subjects probably can't keep the rules of the game straight, they don't have the opportunity to get faster and more accurate.

    But even though the subjects don't remember the game, don't get better at it, and have no idea why they're being woken up in the middle of the night, they reported seeing what sounded remarkably like Tetris pieces as they drifted off to sleep. For instance, one reported seeing “images that are turned on their side. I don't know what they are from. I wish I could remember, but they are like blocks.”

    Why are people replaying Tetris in their sleep? Stickgold speculates that during sleep, the brain cements connections between a day's events and stored memories. This theory is bolstered by reports from some study participants who, unlike the novices, had played Tetris years before. These return players reported sleep images that portrayed the graphics from versions of Tetris they'd learned on, not the Tetris graphics they'd seen that day. The new experience, Stickgold says, calls up old memories that the brain interconnects.

    Not all aspects of the experience are replayed during Stage 1 sleep, however. People seem to “extract what's relevant from an event and dump the peripheral details,” Stickgold says. In Tetris, for example, the spinning pieces and disappearing lines are crucial parts of the game, but the computer monitor, surrounding room, and keyboard aren't important—and none of the Tetris players reported imagining them in their sleep.

    The amnesia patients, because of the damage to their hippocampi, can't form the kinds of connections that would allow them to recall the game. But even so, they still retain perceptual memories, which float around in the cortex and return, disconnected, during sleep.


    Growth in Visas Boosts NSF Education Programs

    1. Jeffrey Mervis

    Last week Congress rushed through an immigration bill that gives a big boost to education programs at the National Science Foundation (NSF). The measure, which almost doubles the number of skilled foreign workers eligible for high-tech U.S. jobs under so-called temporary H1-B visas, marks a hard-fought victory for high-tech companies scrambling for talent. In addition, it provides more than $100 million to help NSF tackle what policy-makers say is the real problem: the need for more homegrown scientists and engineers.

    “This bill begins to address our long-term challenge: ensuring that there are enough Americans with the necessary skills to fill these jobs,” declared Senate minority leader Tom Daschle (D-SD) during floor debate on 3 October before a 96-1 vote on the bill, S. 2045. The House passed the Senate's version a few hours later on a voice vote.

    The bill raises the annual ceiling on temporary visas for high-tech workers to 195,000 for the next 3 years from the current level of 115,000. Under the current law, the ceiling would have dropped to 65,000 in 2002. The bill sets no limit on the annual number of workers, estimated at up to 20,000, hired by universities and nonprofit research organizations.

    The fees paid by industry to apply for the visas are expected to generate an estimated $275 million a year, with 55% going to the Labor Department for worker training programs. NSF will receive 38.2%, and unofficial estimates put its expected annual take at $105 million, a big jump from the $30 million it had budgeted for 2001. The new fee is $1000, double the existing amount. The increase came in a separate last-minute bill pushed through by supporters this week after legislators deferred to the House's constitutional right to initiate revenue measures and stripped the Senate measure of similar language.
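The funding figures quoted here are internally consistent, as a quick check shows (taking the $275 million annual revenue estimate as given):

```python
annual_fee_revenue = 275_000_000      # estimated yearly total from the $1000 visa fees
labor_share, nsf_share = 0.55, 0.382  # splits specified in the bill

labor_take = annual_fee_revenue * labor_share  # worker training programs
nsf_take = annual_fee_revenue * nsf_share      # ~$105 million, vs. the $30 million
                                               # NSF had budgeted for 2001
```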

    The bill directs NSF to spend about 60% of its money on elementary and secondary school activities, ranging from new curricula and improved teacher training to after-school programs and partnerships with industry. The funds will greatly expand a $3 million after-school science enrichment program, begun this year, that plans to make its first awards next spring. The program, known as After School Centers for Exploration and Discovery (ASCEND), attracted three times the expected number of preliminary proposals, despite a requirement that applicants add at least 30% to the government's contribution. “We hit a nerve,” says NSF's Jane Kahle. The bill also provides $20 million over 5 years for after-school technology training programs to be run by the Boys and Girls Clubs of America, which are already eligible for the ASCEND money. And it asks NSF to do a study of the differential access to high technology, the so-called “digital divide.”

    The rest of NSF's visa money would go to enlarge a college scholarship program that made 110 awards to universities during its initial competition last spring (Science, 7 April, p. 40). Each award typically allows a school to provide a 2-year stipend to 40 low-income students seeking associate, undergraduate, or graduate degrees in computer science, engineering, or mathematics. The legislation also boosts that annual stipend from $2500 to $3150, but NSF officials say that it is too early to know how the influx of funds will affect the overall size and number of institutional awards.


    CERN Link Breathes Life Into Russian Physics

    1. Richard Stone

    Without fanfare, 600 Russian scientists in Geneva are playing key roles in building Europe's Large Hadron Collider. Some say their work there could prove the salvation of high-energy physics back home

    GENEVA— Beneath a birch and pine forest south of Moscow, a vast circular tunnel lies haunted by unfulfilled dreams. There, just outside the town of Protvino—a community created for physicists at the height of the Cold War—the Soviet Union in 1984 began building the UNK, a device that promised to be the world's most powerful machine for smashing together protons. Within months, technicians had readied most of the magnets and other equipment needed to keep protons zipping around a ring and to track showers of particles spawned by 800 million collisions every second. Then perestroika ushered in political freedom—and led to an economic free fall that brought the UNK to a screeching halt. The project's demise sounded a death knell for the idea that any one country could push back the frontiers of high-energy physics on its own.

    Fifteen years later, Russian high-energy physicists at last have a chance to wash away the bitter taste left by the moribund UNK. They are working en masse on the $1.5 billion Large Hadron Collider (LHC), a machine that will explore fundamental questions such as why particles have mass, as well as search for exotic new particles whose existence would confirm supersymmetry, a popular theory that aims to unify the four forces of nature. Next month, engineers here at CERN, the European particle physics laboratory, will begin hauling out the existing particle accelerator from the 27-kilometer-long tunnel under the French-Swiss border, and over the next 5 years they will assemble the LHC and its massive detectors for tracking debris from subatomic collisions. Although Russia is not one of CERN's 20 member states, most top high-energy physicists in Russia are working on the LHC. “We couldn't do the LHC without them,” says CERN research director Roger Cashmore.

    Indeed, CERN and Russia need each other. The European lab has kept Russia's vaunted high-energy physics research afloat. “Collaboration with CERN has become essential for the existence of high-energy physics in Russia,” says Vitali Kaftanov, deputy director of the Institute of Theoretical and Experimental Physics (ITEP) in Moscow. And with the LHC, CERN has ventured into Russia's once-secret nuclear weapons labs, where scientists are making key components of the LHC's two main detectors, CMS and ATLAS, at bargain prices.

    A hole in the Iron Curtain

    CERN's close ties with Russian scientists formed in the early 1960s, when the Soviet Union allowed Kaftanov and a handful of other physicists to work at CERN for months at a time. “High-energy physics punched a hole in the Iron Curtain,” Kaftanov says. At first the hole was large enough only to let the scientists through; their families couldn't join them at CERN. “My wife and son were in Moscow, like hostages,” Kaftanov says. It took a year for CERN's director-general at the time to persuade the Russian authorities to relent.

    It wasn't just a one-way street from Russia to Geneva, however. In the 1960s, the Institute of High Energy Physics (IHEP) in Protvino built a collider that attained 70 billion electron volts (GeV). At the time, CERN and the United States each had only a 30-GeV machine. With superior firepower available in the Soviet Union, CERN and Russia in 1967 signed a major agreement on scientific exchange that allowed CERN researchers to work in Protvino on the 70-GeV machine, which was revved up that December. Several dozen physicists from CERN and the French Saclay lab and their families lived in Protvino for the next 5 years, where they even established a French-language schoolroom for their children, while Soviet scientists, families in tow, took up residence at CERN. The collaboration survived even as the Soviet Union was drawing condemnation for its brutal suppression of protesters in Czechoslovakia during the “Prague Spring” of 1968. People “understood that punishing physicists would not improve the political situation,” Kaftanov says.

    In 1975 the focus shifted to CERN, with the completion of the 450-GeV Super Proton Synchrotron (SPS) accelerator. Besides doing research, Soviet scientists contributed an important SPS experiment, a photon spectrometer that melded existing technology for spotting subatomic collisions—the bubble chamber—with cutting-edge electronic tracking.

    Soviet scientists worked at other facilities as well, including the Fermi National Accelerator Laboratory (Fermilab) near Chicago, Illinois, but their connection with CERN was always strongest. “CERN has always been much more active in reaching out to Russia,” says IHEP's Sergei Denisov, who notes that a U.S. Department of Energy-Russia Ministry of Atomic Energy committee on high-energy physics has been “rather passive.” After the Soviet Union invaded Afghanistan in 1979, scientific exchange between Soviet institutes and Fermilab practically ceased for a while, Denisov says.

    By that time CERN had begun designing the Large Electron-Positron collider (LEP) and its associated experiments, including the fabled hunt for the Higgs boson (Science, 22 September, p. 2014). It soon became apparent that the largest planned detector, the L3, would require about 12 tons of extra-pure bismuth germanate crystals. These crystals scintillate when particles strike them; photodetectors convert the tiny fireworks displays into electrical signals. Only a few cubic centimeters of this crystal had ever been produced before. In the end, the Soviet Union laid Cold War mistrust aside and shipped 5 tons of germanium oxide, an ingredient used in making crystals for missile-guidance systems, to China, which had the only institute in the world capable of churning out 11,000 bismuth germanate crystals quickly and cheaply enough to keep the project on schedule. “Everything from Russia was delivered on time,” says ITEP director Michael Danilov. “We never failed to fulfill our obligations.”

    … and high tech.

    CERN'S ATLAS (above) and CMS (below) experiments draw heavily on Russian know-how.


    The Russians, meanwhile, were hoping to chart new physics territory with their 21-kilometer-long UNK collider. They envisaged a two-stage project. The first-generation collider would use so-called warm magnets capable of busting up protons at energies topping out around 600 GeV. In the meantime they got to work designing superconducting magnets for a second machine that would be five times as powerful, smashing protons together at 3 trillion electron volts (TeV). But as the research and development phase of the project dragged on, the scientists had no idea they were about to be steamrolled by perestroika. “The preparation period was too slow,” says Nicolas Koulberg, adviser to CERN's director-general on Russia and Eastern Europe. “It was a real disaster for them.” Looking back, Denisov says that “if perestroika had been delayed a few years, I'm sure we would have had a 600-GeV machine.” And if the Russians had ever managed to realize their full UNK design, Kaftanov says, “the LHC would have been much less interesting.”

    Beaten by the economy, the Russians “realized that the only way to stay at the frontier was to work at CERN,” says Koulberg. Shortly after the Soviet Union dissolved in 1991, he says, the mercurial former CERN director-general Carlo Rubbia “decided to test the Russians in a more complicated way.” CERN wanted to see whether the Russian scientists could enlist their struggling industry to mass-produce components for the LHC, then in early planning. “They started as a full partner in the LHC from the beginning,” Koulberg says.

    Swords into plowshares

    As CERN was getting ready to bring Russia aboard on the LHC, Russian high-energy physicists were struggling for survival. “The high-energy-physics community managed to convince the Russian government that there were only a few places in the world where you could build an accelerator,” says Danilov. Russia was not one of those places, so Danilov and his colleagues had to persuade their cash-starved government to fork out millions of dollars for a project based in wealthy Switzerland. Luckily, they had an ally in then-Science Minister Boris Saltykov, who in 1993 signed an agreement with CERN pledging a major financial commitment to the LHC.

    The scope of the Russian contribution to the LHC is sweeping, involving nearly every major experiment and component of the titanic machine. For instance, the Institute of Nuclear Physics in Novosibirsk last year began shipping, by rail and by truck convoy, magnets for the transfer lines that inject protons into the LHC, where they will collide at energies reaching 14 TeV. Several Russian institutes are also contributing major elements of the two biggest experiments: the Compact Muon Solenoid (CMS), which will be the world's biggest silicon device, and the 20-meter-high ATLAS, a tortuous acronym for “A Toroidal LHC Apparatus” (see diagram). All told, Russia will contribute about $75 million worth of hardware to the LHC.

    Some of this equipment is coming from an unlikely source: Russia's deteriorating military-industrial complex. Working partly with grants from the International Science and Technology Center, a Western-funded agency that supports defense conversion in the former Soviet Union, CERN has brought some 30 physicists at Russian weapons labs into the LHC fold. For example, IHEP scientists found that the Bogoroditsk Techno-Chemical Plant—a factory 100 kilometers from Protvino involved in Russia's “Star Wars” missile defense effort—can produce the lead tungstate crystals needed for the CMS's electromagnetic calorimeter, which will measure the trajectories of particles that interact by the electromagnetic force. CERN has contracted with the plant to make 40,000 of the transparent, pyramid-shaped crystals. “Nobody else can provide crystals of this quality at this price,” Kaftanov says. And the Russian Navy in Murmansk has kicked in tens of thousands of rifle and artillery shells, which have been melted down to make brass plating for the CMS detector. Working on cutting-edge physics is a much better option than other defense-conversion projects, which can be as prosaic as converting factories from making bombs to making refrigerators. “It's a way for them to have dignity as scientists,” says Koulberg.

    But the Russian presence extends far beyond designing and building hardware, most of which is being produced by the CERN member states, the United States, and Japan. “The intellectual contribution of the Russians is most important,” Koulberg says. These days, he says, as CERN tests LHC component prototypes and runs simulations, “when you come in on a Sunday, essentially you only see Russians here.” Indeed, Denisov adds, “it's often easier to meet a Russian colleague at CERN than in Russia.”

    Having Russia play such a high-profile role in the LHC carries a risk: CERN is gambling that the Russian economy will remain stable enough to allow the country to deliver its hardware on time and maintain its 600-strong army of physicists at CERN, most of whom are paid by the Russian government. “We would be in deep trouble if the Russians pulled out,” CERN's Cashmore says. “We would have to do major rethinks on experiments and designs, and it's too late for that.”

    A dying discipline?

    The LHC connection has kept many top-flight Russian scientists at the forefront of high-energy physics. But the discipline in Russia isn't out of the woods. Indeed, some argue that it is dying. “That's very close to the truth,” says Kaftanov, who points out that nearly all topflight experimentalists and many theoreticians now spend most of the year abroad. Others insist that the brain drain is only temporary and that most of the scientists haven't permanently emigrated from Russia. “In principle, the Russians return to their home institutes,” Koulberg says. And even older facilities like the 70-GeV accelerator in Protvino can still do good science, Denisov says. It now operates for two 2-month stretches a year, running experiments that are hard to do at higher energies, such as searching for exotic resonances and exploring kaon decay. During the experimental downtime between LEP being shut down next month and the LHC being switched on in about 5 years, Denisov says, “I hope we'll have more physicists from Europe coming to Protvino.”

    Key component.

    Dipole magnets for the LHC came from a plant in Novosibirsk.


    Everyone agrees that the gravest concern lies with grooming the next generation of physicists. A couple of years ago, a few legislators were agitating to close down the prestigious Moscow Physical Technical Institute. The problem was that upon graduation, about four out of five students get offers from Western labs and leave. “People were crying that we were only preparing students for careers in the West,” Kaftanov says. That threat has gone away for now, but there remain worries about keeping physics alive in Russia. The trend is so alarming that the country's top physicists and institute heads are now discussing holding a review sometime next year of the future of Russian high-energy physics. Tops on the agenda will be survival after the LHC.

    In that regard, Russian scientists haven't ruled out bringing to life their unfinished giant in Protvino. To equip the UNK to its full 3-TeV glory, Denisov estimates, would cost about $1 billion—a sum twice as great as Russia's total science budget. They won't see that anytime soon, but finishing the 600-GeV ring—even after the LHC is up and running—would be reasonable, Denisov argues, because a scaled-down UNK could be used as a fixed-target accelerator for studies on B quarks and neutrinos. But well into the next decade, at least, the Russians will find nurturing environs at CERN, their home away from home.


    Bulgarians Sue CERN for Leniency

    1. Robert Koenig*
    1. With reporting by Richard Stone.

    SOFIA— To most nations, $700,000 a year may seem a pittance for a piece of the action at CERN, the European particle physics laboratory near Geneva. But in cash-strapped Bulgaria, scientists are wondering whether a ticket for a front-row seat in high-energy physics is worth the price: It nearly equals the country's entire budget for competitive research grants. Faced with that grim statistic and a plea for leniency from Bulgaria's government, CERN's governing council is considering slashing the country's membership dues for the next 2 years. Such a move might salvage Bulgaria's participation in CERN, but it may not quell discontent among Bulgarian scientists who argue that other research areas would still get the short end of the stick. And Bulgaria's woes almost certainly have dashed Romania's hopes of joining the elite club this year.

    For dues-paying nations, CERN membership has its privileges. Their companies are eligible to bid on CERN projects, for instance, while their scientists get a voice in CERN decision-making. That certainly appealed to Bulgaria, which by 1998 thought it could afford to have its high-energy physics community join CERN. Lab officials agreed to let Bulgaria in if, from the get-go, it paid the full dues (determined by a formula based mainly on member nations' gross domestic product), rather than ramp up contributions as the Czech Republic, Hungary, Poland, and Slovakia were allowed to do a few years ago. Bulgaria agreed, and in June 1999 it became the 20th member state.

    “CERN membership is a good investment in the future of the nation,” says Roumen Tzenov, one of three dozen Bulgarian scientists now working at the lab. Among other things, he and his colleagues have helped develop key components for the planned CMS detector of the Large Hadron Collider, slated to come on line in 2005.

    But trouble began brewing even before Bulgaria formally joined CERN. The NATO bombing campaign against Yugoslavia in spring 1999 destroyed enough bridges to cripple commerce between Bulgaria and Western Europe on the Danube River—hurting Bulgaria's economy and squeezing the government's budget. CERN officials are sympathetic to Bulgaria's plight. “Bulgaria suffered a lot,” says Nicolas Koulberg, who advises CERN's director-general on Russia and Eastern Europe. “It became clear that the science ministry would have to pay CERN a big part of its budget.” Indeed, Bulgaria's National Science Fund—an agency recently disbanded—last year doled out only about $500,000 for grants in the natural and social sciences, and the government is expected to spend much less this year. Biologists complain that a typical grant amounts to less than $1000 a year.

    To many Bulgarian scientists, forking over to CERN the kind of money that would fund hundreds of research projects at home doesn't seem like a bargain. “There is great opposition to CERN membership,” says George Russev, who directs the Bulgarian Academy of Sciences' Institute of Molecular Biology. In a recent letter to Bulgaria's education and science minister, Russev and others on the former Science Fund's council called on the country to pull out of CERN and use the money for domestic research.

    Facing a rebellion at home, Bulgaria's vice minister for education and science, Christo Balarew, last month petitioned CERN to cut his country's membership fees substantially for the next 2 years. He told Science that CERN plans to send a delegation to Sofia this fall to discuss the financial problems and the extent of the proposed fee reduction.

    Suffering collateral damage in this dispute is neighboring Romania, which has made a strong bid to join CERN. Lab officials are now worried about Romania's R&D balance sheet—and they admit that Bulgaria's woes have influenced their judgment. “If Bulgaria was all right, Romania probably would have been accepted this year,” says a CERN official, who adds that Romania's application is now on hold until at least 2001.

    If the CERN council votes in December to cut Bulgaria some slack, that may very well save the lab's latest member state from an ignominious withdrawal. But it is unlikely to fully assuage critics such as Russev. “Even if we pay lower CERN fees for 2 years,” he says, “the problem is that the membership is not useful to the rest of Bulgaria's science community.”


    Can Genetically Modified Crops Go 'Greener'?

    1. Anne Simon Moffat

    The next generation of genetically engineered crops may be created by tinkering with the plants' own genes, rather than by introducing completely foreign ones

    Cotton and corn containing bacterial genes that make compounds toxic to insects; soybean and canola plants engineered to resist weed killers with other genes from bacteria; papaya carrying genes from viruses that make them resistant to deadly diseases. Opponents of genetically modified foods have had a field day labeling such transgenic crops “frankenfoods.” But the next generation of engineered crops may be more difficult to demonize with that glib moniker: plants whose own genes have been modified to make them hardier and more productive.

    Until recently, plant genetic engineers have had little choice but to transplant genes from foreign species. The reason: They generally knew little about the plant genes encoding the traits they wanted to improve. But a flood of new information from projects such as the sequencing of the mustard plant Arabidopsis has pinpointed genes involved in key processes such as speeding up flowering, changing a plant's basic architecture, or improving pest resistance (Science, 6 October, p. 32). As a result, researchers may be able to enhance the traits they want by introducing one or a few genes from another plant, or by modifying the regulation of genes in their original settings. “We can understand in molecular terms the genes that breeders have worked with for a long time,” says plant molecular biologist Richard Flavell of Ceres Inc. in Malibu, California.

    Armed with this new understanding, researchers are working on plant modifications that couldn't be achieved by simply transplanting bacterial or viral genes. For example, they are hoping to engineer plants to flower earlier, which could extend the growing season for grains and fruits. Manipulating plant genes may also produce larger leaves that permit more photosynthesis or more aggressive root systems for combating drought.

    Rob Martienssen, a plant molecular biologist at Cold Spring Harbor Laboratory on New York's Long Island, says what plant biotechnologists are now trying to do isn't much different in principle from what the Aztecs did centuries ago when they used conventional plant breeding to transform the bushlike teosinte into the more productive single-stalked corn. But the new methods, Martienssen points out, are much faster and come without the uncertainties that accompany the mixing of whole genomes. “Current techniques,” he says, “are 1000 times more precise than classical plant breeding.”

    Advancing spring

    One of the most sought-after goals is to alter the timing of flowering. A delay would be desirable in leafy crops such as spinach and lettuce, which tend to “bolt”—send up long, energy-consuming stems—when they flower. In contrast, speeding up flowering in some plants could enhance fruit and seed development and perhaps enable farmers to grow more than one crop each year. Researchers have taken the first critical steps down this path, identifying genes that help choreograph flowering.

    Over the past decade or so, about a dozen teams at places such as the California Institute of Technology (Caltech) in Pasadena, the University of Wageningen in the Netherlands, and the John Innes Centre in Norwich, U.K., have turned up more than 80 genes. The latest example comes from Caroline Dean of the John Innes Centre and her colleagues. On page 344 of this issue, they report the cloning of an Arabidopsis gene called FRIGIDA and show that natural mutations leading to loss of FRIGIDA function are associated with early flowering, a helpful adaptation in some cold climates.

    Although it's too soon to say whether researchers can alter flowering times in other plant species by manipulating FRIGIDA, another gene, known as LEAFY, first cloned in the early 1990s by Detlef Weigel in the Caltech lab of Elliot Meyerowitz, is already showing promise. Weigel and Ove Nilsson of the Salk Institute in La Jolla, California, have introduced a highly active version of LEAFY into aspen trees, which flowered in 6 to 8 months, instead of the usual 12 to 15 years (Nature, 12 October 1995, p. 495). And Jose Miguel Martinez Zapater of the National Center for Biotechnology in Madrid, Spain, and his colleagues at the Valencia Agricultural Research Institute have manipulated the LEAFY gene to bring about first-year flowering in a hybrid citrus known as citrange, which normally needs 5 to 7 years to flower.

    Even a modest advance in flowering time could benefit rice farmers, especially in the developing world, where rice is a staple. “Rice needs a bit longer than 6 months to grow and mature in some areas, so a small speedup in flowering could enable double-cropping a year,” says Dean. Indeed, as Weigel and Chris Lamb, also from John Innes, report in the June issue of Transgenic Research, introducing LEAFY into rice provides just such a modest acceleration, with only a minor yield penalty.

    Dwarfs preferred

    Other researchers are homing in on genes that control an even more basic plant feature: height. They are looking for ways to produce short, sturdy “dwarf” varieties. This is a desirable trait in many crops, such as wheat and rice, because it allows the plant to put more energy into the grain instead of stalks, and it reduces the chance that plants will topple and be damaged by wind and rain. Indeed, the “Green Revolution,” which greatly increased cereal grain production during the 1960s and '70s, was based on the development of dwarf strains of wheat and rice through conventional plant breeding. A gene discovery reported last year by Nick Harberd and his colleagues at John Innes may help plant biologists develop new dwarf varieties more directly (Nature, 15 July 1999, p. 256).

    Last year, Harberd's group identified an Arabidopsis gene called gai as that plant's equivalent of one of the mutant genes that cause dwarfing in both rice and wheat. The genes apparently code for a protein in the signaling system through which the hormone gibberellin stimulates plant growth. But whereas it took the Green Revolution pioneers years to breed the mutant gene into the cereals, Harberd's team is introducing a mutated form of the Arabidopsis gai gene into wheat and rice much more quickly—with a gene gun. For example, the researchers are working with collaborators in India to produce a dwarf strain of basmati rice—a variety favored for its flavor and white color, but a balky crop to grow because the plant's long, weak stems often cause it to keel over.

    Previous efforts to breed dwarf basmati rice failed, Harberd notes, because the crosses exchanged thousands of rice genes along with the dwarfing character, and breeders ended up with short plants whose rice had lost its fine taste. But by shooting the mutant Arabidopsis gene into cultured basmati rice cells and then regenerating them into whole plants, Harberd's team has produced short plants that retain their tasty grains. These promising results suggest that a mutant gibberellin signal pathway “could be used to increase yields in a wide range of crop species,” Harberd predicts.

    Other groups are taking a more indirect route to influencing plant growth: modifying a plant's responses to light. Plants shaded by their neighbors often bolt upward to reach sunlight. “This makes sense in a natural setting, but it is a problem with agricultural crops, because all that upward growth is often at the expense of the harvestable parts of the plant,” says plant scientist Peter Quail of the U.S. Department of Agriculture's Plant Gene Expression Center in Albany, California, and the University of California, Berkeley.

    Researchers, including Quail, are focusing on pigment-containing proteins called phytochromes that help a plant tell when it's in the shade. Sunlight filtered or reflected by leaves has more light at the far red end of the spectrum than clear sunlight does, and certain phytochromes can detect that difference and pass the information on to nuclear genes involved in growth control. Earlier this year, for example, Quail's team found that when phytochrome B is exposed to red light, it binds to a protein known as PIF3, a transcription factor that regulates the expression of a variety of genes, including some involved in photosynthesis and the plant's circadian clock (Science, 5 May, p. 859). Suppressing phytochrome B activity could therefore make plants less responsive to far red light, which should reduce the tendency to bolt. The strategy is now being tested with potatoes.

    In addition to suppressing unwanted vertical growth, plant genetic engineers are trying to encourage growth in other directions. For example, Robert Fischer and Yukiko Mizukami of the University of California, Berkeley, have identified an Arabidopsis gene called Aintegumenta (ANT) that Fischer says “can make organs larger without altering the basic proportion of the organs.” When this gene is continuously active in Arabidopsis and tobacco, the plants have bigger flowers, leaves, seeds, and seedlings. Fischer and his colleagues are now working with scientists at Ceres Inc. to see whether they can use ANT to create crop plants with thicker, and therefore sturdier, stems or with more roots, which could increase drought resistance.

    Depending on the crop, yields might also be increased by making seedpods either stronger or weaker. Strong, shatterproof seeds are desirable in the oilseed canola, for example, because pod shatter can cause major losses—up to 50% in bad weather—whereas a more fragile seedpod might make harvesting easier for crops such as cotton. A recent discovery by Martin Yanofsky and his colleagues at the San Diego and Davis branches of the University of California may provide a means to manipulate seed shattering. They've identified two related Arabidopsis genes, called SHATTERPROOF 1 and 2, that control this property by promoting production of tough lignin compounds in the seedpods (Nature, 13 April, p. 766).

    Fighting disease

    Although genetic engineers are focusing much of their efforts on altering plant architecture, they are also using new genetic information to beef up plants' resistance to viral, fungal, and bacterial pathogens. One such example comes from molecular plant pathologist Brian Staskawicz of the University of California, Berkeley.

    Blackspot disease, which is caused by the bacterium Xanthomonas campestris pv. vesicatoria, causes major damage to pepper and tomato crops in Florida and in humid areas of the Midwest. About 10 years ago, plant scientists identified genes that make peppers resistant to the disease, but they couldn't find comparable genes in tomatoes. However, in the 23 November 1999 issue of the Proceedings of the National Academy of Sciences, Staskawicz and colleagues at San Francisco State University and the University of Florida in Gainesville report introducing one of the resistance genes from the pepper into tomatoes. “Now those tomato plants are resistant to the pathogen,” says Staskawicz. Modified tomatoes are being readied for field trials.

    Less advanced, but perhaps of more general value, is work with a gene called DIR-1, which is involved in systemic acquired resistance, a broad-spectrum response that plants activate to fight off many bacteria and fungi. These defenses enable the plant to prevent a pathogen from spreading by a variety of means, including walling it off or even killing the infected cells. A few years ago, research teams at the Salk Institute and at the Noble Foundation in Ardmore, Oklahoma, identified the DIR-1 gene as encoding a key component of the signal pathway that activates systemic acquired resistance in Arabidopsis in response to an infecting pathogen. Because other plants have very similar pathways, the genes found in Arabidopsis “should be usable in other plants,” says Animesh Ray, a molecular biologist at Akkadix Corp. in La Jolla. His team is now testing that assumption by introducing the Arabidopsis DIR-1 gene into wheat, rice, corn, and alfalfa in hopes of boosting their defenses against fungal pathogens.

    So far, plant researchers have barely scratched the surface of plant genomes, but what they've learned already has made them eager to know more—a desire that will no doubt be partially fulfilled when the Arabidopsis genome is completely sequenced later this year. Other plant genomes are much larger than that of Arabidopsis and so may provide even more genes useful for plant genetic engineering. As Purdue University plant biologist Jeffrey Bennetzen says, “It's time to get to know plant genomes better.”


    Australian Researchers Go for the Gold

    1. Elizabeth Finkel*
    1. Elizabeth Finkel writes from Melbourne.

    Academic scientists are hoping that two new reports calling for increased funding of research and innovation will finally boost a stagnant R&D budget

    MELBOURNE, AUSTRALIA—The recent slide of the Australian dollar to an all-time low of US$0.54 may have been a pleasant surprise to the flood of tourists arriving for last month's Olympic Games in Sydney. To Australian scientists, however, it was proof of something they have been saying for years: The country's failure to spend more on research is stifling innovation and dragging down the economy. Last month their proposed solution—a major boost in spending on research and education and an improved climate for commercializing new technology—was embraced by two new reports that take the government to task for underinvesting in the new “knowledge” economy. Researchers hope that the reports, combined with the sinking exchange rate, will finally prompt action.

    Australia's economy has been growing at a healthy annual clip of 4% in recent years. But that expansion hides a fundamental weakness that worries global financial markets, says Australian Chief Scientist Robin Batterham, the author of one report, entitled The Chance to Change, that argues for a variety of measures to stimulate innovation. “We're perceived by the international community as an old economy,” Batterham says, referring to the country's reliance on such low-tech commodities as agriculture and precious metals. “If we don't do something to change that, we're on [the] way to the 30-cent dollar.” James Wolfensohn, head of the World Bank, delivered an equally blunt message about the connection between R&D policies and a sliding currency during a visit to the recent World Economic Forum in Melbourne. “There is a lesson in that for Australia,” said the Australian-born financier. “You haven't put your money into R&D.”

    Signs of the problem are everywhere. Australia's overall R&D budget has been stagnant since 1996, at $4.8 billion, and spending by industry—slightly less than half the total—has actually declined by 6% over that period. Government funding of academic research has risen by a meager 2% per year since 1996 and has fallen 13% as a proportion of GNP. Researchers have been forced to cope with soaring enrollments—up 60% in the past decade—that have boosted teaching loads and diverted resources from needed renovations and major equipment purchases. At the same time, however, student interest in such bellwether majors as mathematics and physics is dropping. With funding tied to enrollment, many universities have trimmed faculty slots and cut administrative support in these departments.

    Batterham's report to Science Minister Nick Minchin lays out a plan to reverse that slide by creating an “innovation culture.” It calls for stronger links between academic and government research labs and industry and a big boost in spending on research, facilities, and education. The government is also mulling over a second report, based on an innovation summit held in February, that puts a price tag of $1.4 billion over 5 years on a similar package of reforms. The idea for a summit was hatched by business leaders, who last week joined the cream of the country's research establishment in sending the government an open letter calling for “early action” toward achieving those aims. In line with the country's near-obsession with the recent Olympic Games, the letter asserts that, “with the right policy framework and encouragement from government, Australian industry, research, and higher education can help deliver the gold-medal performance needed to deliver greater prosperity.”

    Help can't come soon enough for University of Sydney microbiologist Dee Carter, who has just won a 5-year, $300,000 grant from the Howard Hughes Medical Institute for her work on a eucalyptus-borne fungus that can cause meningitis in humans. Although the grant supports her research, it will do little to relieve a heavy teaching load or improve the crowded and dilapidated conditions in her lab, which are so bad that Carter routinely declines requests from her overseas collaborators to visit. “I'm too embarrassed about the state of the lab to invite them over,” says the 38-year-old Carter. “For young academics like me it's pretty depressing.”

    It's no better for veteran scientists like John Cashion, head of physics at Monash University. “It's hard to find time to think,” he says about a teaching load that has doubled in 20 years. For Cashion, survival lies in an upcoming amalgamation of his department with materials engineering. But such consolidations can be unsettling to faculty members, say university administrators. “Many of the top people are going overseas,” says Garth Gaudry, head of the mathematics department at the University of New South Wales, who is also finding it hard to fill positions in his department.

    To be sure, those working in hot areas such as biotechnology or computing have fared much better. The government doubled funding for academic medical research last year, for example, after a report written by industrialist Peter Wills argued that Australia was lagging badly in key fields (Science, 21 May 1999, p. 1248). And some universities have overhauled their structures to take advantage of a suddenly booming venture capital industry spurred by an innovation fund that the government created jointly with business in 1997. “One of the big changes is the availability of venture capital in the range of $2 million to $5 million,” says Peter Andrews, co-director of the Institute for Molecular Bioscience at the University of Queensland in Brisbane, whose institute has already benefited by spinning off some 17 companies in the last 4 years.

    But most researchers are unable to attract venture capital when their work has no immediate commercial payoffs. “We feel like we're being hit from two sides,” says Carter. “We can't do the research we would like to because of the demands placed on our time through declining staff numbers, and basic research areas are being ignored in favor of industrial applications.” Gaudry notes that a culture of innovation won't flourish unless there is a sufficiently large pool of talent.

    The two reports address both problems. They recommend that 2000 new student places per year be created in the physical sciences, the costs of which should be shared between government and business, and that scholarships be given to students who fill 500 of these places. They recommend that the government invigorate first-rate university research by doubling the number of postdoctoral fellowships from 55 to 110, doubling the $132 million budget of the Australian Research Council (ARC), and providing $275 million for academic infrastructure. At present the council funds only one in five proposals, and individual grants cover only 75% of the research costs.

    Most academics believe that the recommendations reinforce the earlier Wills report, which made a convincing case for the value of basic research in fostering productivity. And they welcome the strong support coming from industry. “This is the first time we have had such a united front,” says Vicky Sara, chair of the ARC. Still, they caution that years of neglect can't be erased overnight. With her department facing another 20% budget cut, Carter is not alone in wondering whether any boost in government funding may be too little, too late.


    Second Thoughts on Skill of El Niño Predictions

    1. Richard A. Kerr

    A few years' perspective on the 1997–98 El Niño and a toughening of standards drop model forecast performance from spectacular to encouraging

    What if your high school geometry teacher called you up at grad school to tell you that, on further thought, she was dropping your grade from an A to a C+? That's what researchers are doing to the computer models that forecast a warming of the tropical Pacific early in 1997. Less than a year after the onset of one of the two strongest El Niños of the century, researchers were awarding high marks to models that had anticipated the warming by as much as 6 months, especially the big, computationally expensive models (Science, 24 April 1998, p. 522). Now, with the complete rise and fall of El Niño available for grading—and with second thoughts about the standard to which models should be held—researchers have lowered their marks. Some critics have even failed the models.

    In the September issue of the Bulletin of the American Meteorological Society, meteorologists Christopher Landsea of the National Oceanic and Atmospheric Administration's (NOAA's) Hurricane Research Division in Miami and John Knaff of NOAA's Cooperative Institute for Research in the Atmosphere in Fort Collins, Colorado, give the models' performance a failing grade. The sophisticated models, it turns out, did a poor job of predicting the 1997–98 El Niño's full course; in fact, they did no better than a rudimentary model that can run on a desktop. “When you look at the totality of the [El Niño] event, there wasn't much skill, if any,” says Landsea. Others won't go that far but agree that the models' reputation needs to be taken down a notch or two. “There's no doubt the models were helpful,” says meteorologist Neville Nicholls of the Australian Bureau of Meteorology Research Centre in Melbourne, “but certainly they didn't do as well as a majority of my colleagues had thought. We still can't be certain the models have got the problem solved.”

    In arriving at their much-reduced forecasting scores, Landsea and Knaff make two changes in the way forecast performance was initially evaluated. Instead of focusing on how well a model predicted the onset of El Niño in the spring of 1997, as early analyses had to do, they gauge success throughout El Niño's onset, peak, and decay and into the beginning of La Niña, the cold phase of the cycle in the tropical Pacific that ended this summer. They found that whereas some models, such as that of NOAA's National Centers for Environmental Prediction (NCEP) in Camp Springs, Maryland, did well in predicting the time of onset, all 12 models they assessed underestimated the size of the actual warming by at least 50% in their 6- to 12-month forecasts. And the models that did well with the timing of onset missed on the decay; although the NCEP model continued to call for a gradual cooling through 1998, the tropical Pacific chilled precipitously in May. “If they do well on the onset,” says Landsea, “but then bust on how strong it's going to be and bust badly on the decay,” the models aren't doing very well.

    The NOAA researchers also found that, contrary to initial impressions, the complex computer models, which are similar to ones developed to predict greenhouse warming, fared no better in the longer run than much simpler—and much cheaper—models that are based on statistics. Such so-called empirical models are, in essence, automated rules of thumb that compare sea surface temperature and atmospheric pressure in the tropical Pacific with conditions preceding El Niños of the past 40 years and then issue predictions based on the resemblance. The far more complex dynamical models attempt to simulate, on a global scale, the real-world interplay of winds and currents that leads to an El Niño. “The use of more complex, physically realistic dynamical models does not automatically provide more reliable forecasts,” the NOAA researchers write. They don't investigate the reason, but it probably involves the unrealistic way the models simulate the distribution of rain in the tropics. “National meteorological centers may wish to consider carefully their resource priorities. …”
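    The analog-matching idea behind such empirical models can be sketched in a few lines: compare the last few months of observed anomalies against the historical record, find the closest past matches, and average what followed them. The Python below is a minimal illustrative sketch under that assumption; the function name, parameters, and matching rule are hypothetical, not ENSO-CLIPER's actual formulation.

```python
import math

def analog_forecast(history, current, window=6, lead=6, k=3):
    """Illustrative analog-style empirical forecast (not the real ENSO-CLIPER).

    history -- list of monthly sea surface temperature anomalies
    current -- the most recent `window` months of observations
    Finds the k past segments that most resemble `current` and averages
    the anomaly observed `lead` months after each of them.
    """
    scores = []
    for start in range(len(history) - window - lead):
        segment = history[start:start + window]
        # Root-mean-square difference: smaller means closer resemblance
        rms = math.sqrt(sum((s - c) ** 2 for s, c in zip(segment, current)) / window)
        scores.append((rms, start))
    scores.sort()
    analogs = [start for _, start in scores[:k]]
    # The forecast is the mean of what followed the best-matching analogs
    return sum(history[s + window + lead - 1] for s in analogs) / k
```

    On a perfectly periodic toy series, this recovers the value the cycle actually takes `lead` months later; real anomaly records are far noisier, which is one reason such rules of thumb plateau quickly as benchmarks.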

    More provocatively, Landsea and Knaff raise the bar high for El Niño predictions. Traditionally in climate work, a forecast would not be considered skillful unless it did better than some simple, barely useful technique. A favorite benchmark was simply to assume that present conditions would persist through the forecast period. “Persistence is way too easy to beat,” says Landsea. “There needs to be a more demanding standard.” Everyone seems to concede that point, but there is no agreement on just how demanding a performance standard should be. Landsea and Knaff offer an empirical model of their own—dubbed the El Niño–Southern Oscillation Climatology and Persistence (ENSO-CLIPER) model—as a reasonable benchmark. Developing ENSO-CLIPER took the two of them just a few weeks, Landsea says, and it runs on a workstation in about a microsecond. “If you can't do better than ENSO-CLIPER,” he says, “the state of El Niño forecasting is still pretty primitive.” Indeed, none of the 12 models tested—six dynamical and six empirical—beat it when forecasting 8 months ahead, and only two empirical models and one dynamical model improved on it out to 11 months and longer, the dynamical model just barely.
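    The benchmark argument can be made concrete with a standard mean-squared-error skill score: a forecast has skill only insofar as it removes error relative to some reference forecast. The Python sketch below uses the easy-to-beat persistence reference the article describes, with made-up anomaly numbers; it is illustrative, not the metric Landsea and Knaff actually computed.

```python
def skill_score(forecasts, observations, reference):
    """Fraction of the reference forecast's mean-squared error removed.

    1.0 = perfect forecast, 0.0 = no better than the reference,
    negative = worse than the reference.
    """
    n = len(observations)
    mse_f = sum((f - o) ** 2 for f, o in zip(forecasts, observations)) / n
    mse_r = sum((r - o) ** 2 for r, o in zip(reference, observations)) / n
    return 1.0 - mse_f / mse_r

# Hypothetical monthly SST anomalies (degrees C) over one warm event
observed    = [0.5, 1.1, 1.8, 2.3, 1.9, 0.4]
# A persistence reference simply repeats the last pre-forecast observation
persistence = [0.2] * 6
# A made-up model forecast that tracks the event but undershoots its peak
model       = [0.4, 1.0, 1.5, 1.8, 1.6, 0.6]

print(skill_score(model, observed, persistence))
```

    Against these invented numbers the model scores about 0.95 relative to persistence, which looks impressive precisely because persistence is so weak. Landsea and Knaff's point is that swapping in a stiffer reference such as ENSO-CLIPER drives scores like this down, sometimes below zero.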

    The reception for the models' regrading is mixed. “I think it's fair,” says meteorologist Stefan Hastenrath of the University of Wisconsin, Madison, who forecasts climate in drought-prone northeast Brazil but does not forecast El Niños. “It has to be a little more demanding. I don't think the profession, much less the public, is served by overblown claims.” Huug van den Dool, who supervises NCEP's long-range U.S. climate forecasts at the Climate Prediction Center (CPC) in Camp Springs, also thinks “it was an overstated conclusion that the big models were scoring big. [Landsea and Knaff] upped the bar a little, which I sympathize with.”

    Meteorologist Anthony Barnston of Columbia University's International Research Institute for Climate Prediction in Palisades, New York (who was until recently at CPC), agrees that “we don't have a whole lot to crow about yet concerning dynamical models,” but he thinks comparing them to ENSO-CLIPER “is a little bit too harsh,” pointing out that ENSO-CLIPER is relatively sophisticated for an empirical model. Vernon Kousky of CPC, who produces official El Niño advisories, also sees ENSO-CLIPER as too high a standard. The complex dynamical models “are marginally useful,” he says. “They help confirm what we're seeing. We hope they will be better in the future.”

    The future is where modelers are now looking. “Empirical prediction is a dead end,” says dynamical modeler J. Shukla of the Center for Ocean-Land-Atmosphere Studies in Calverton, Maryland, whereas “dynamical prediction has a lot of future.” Empirical models will get only marginally better as decades of El Niño history accumulate, he says, whereas faster computers, better observations, and more complete models will surely advance dynamical models more rapidly. Adds Nicholls: Dynamical models “will only improve from here.”


    New Brain Cells Prompt New Theory of Depression

    1. Gretchen Vogel

    Growing evidence from laboratory and animal experiments and brain imaging suggests that a slowdown in brain cell growth may be linked to depression

    Depression can be a crippling disease. Its sufferers become trapped in a cycle of exhausting gloom, in which even eating seems like a chore. Drugs such as fluoxetine (Prozac) have helped millions, but patients are notoriously susceptible to relapses, and often the symptoms worsen with each episode. No one knows what causes depression, although many neuroscientists blame an imbalance of brain chemicals, so-called neurotransmitters, especially those that affect the brain's pleasure responses.

    Now a few neuroscientists are converging on a radical, but complementary, theory: that depression may be caused by a lack of new cell growth in the brain. Even a few years ago, that notion, now being promoted by neuroscientist Barry Jacobs of Princeton University, among others, would have been met with ridicule. And indeed, it remains highly speculative today. But the recent discovery that the brain keeps producing neurons into adulthood (Science, 27 March 1998, p. 2041) has given it at least one leg to stand on.

    Work by several neuroscientists over the past 2 years has shown that growth of new cells in the adult human brain occurs in an area called the hippocampus, best known for its role in learning and memory but also a suspect in mood disorders. And a different line of research has recently revealed that the hippocampus is smaller than normal in many depressed patients. What's more, several of the most effective antidepressants seem to increase brain cell growth, according to animal experiments and some preliminary observations in people.

    Put those insights together, says Jacobs, and it starts to look as if disruptions in the cycle of new brain cell growth might be a primary cause of depression. Others say that although a lack of new cells may not cause the sleep disturbances, lack of appetite, and feelings of overwhelming sadness that characterize the disease, the evidence is persuasive that the two processes are linked. “There has been no convincing biological theory for depression,” says Jacobs. And the idea that waxing and waning patterns of brain cell growth might account for cycles of depression “could explain at least as much of the biological data as anything else out there.”

    Those biological data come from several sources. Scientists studying depression have been stymied for years because no one could find any obvious changes in the brains of depressed patients. “We don't have the luxury of knowing where we ought to be looking” for damage, says Helen Mayberg, a neurologist at the University of Toronto. “We have no plaque or tangle or Lewy body or spongiform degeneration,” as in other neurological diseases. Many scientists have assumed that depression results from changes on a molecular scale—an imbalance in chemical messengers that communicate among brain cells.

    In 1996, Yvette Sheline of Washington University School of Medicine in St. Louis and her colleagues reported evidence that changes may be occurring on a larger scale as well. As reported in the Proceedings of the National Academy of Sciences (PNAS), their magnetic resonance imaging (MRI) studies revealed that the hippocampi of 10 depressed patients were on average 12% to 15% smaller than those of controls of the same age, height, level of education, and handedness. At least two follow-up studies in the past year, involving 40 patients and matched controls, have found consistent results. “It is absolutely clear that really prolonged major depression is associated with loss of hippocampal volume,” says Stanford University neuroscientist Robert Sapolsky.

    Examinations of the brains of deceased patients who had suffered from depression suggest that a similar phenomenon may occur in other parts of the brain as well, according to work by Grazyna Rajkowska, a neuroscientist at the University of Mississippi Medical Center in Jackson. She and her colleagues reported in Biological Psychiatry last year that the brains of 23 people who suffered from either major depression or bipolar disorder—alternating cycles of mania and depression—have smaller and less densely packed neurons and fewer glial cells (a type of neuronal support cell) in the prefrontal cortex, an area of the brain thought to be involved in emotion and cognition.

    Many neuroscientists suspect that these cell losses in both the hippocampus and prefrontal cortex are related to the effects of stress on the brain. Stress is a frequent trigger for depression, and evidence has been building for years that stress, whether from acute trauma such as a car accident or chronic pressure on the job, is not good for brain cells. Stress causes an increase in hormones called glucocorticoids, which raise the heart rate, boost the immune system, and suppress energy-intensive systems such as reproduction. Such changes are clearly advantageous for a mammal trying to escape from a predator but are not beneficial over, say, 30 years of chronic stress, says Sapolsky. Decades of animal studies have shown that stress-related glucocorticoids cause cell atrophy and death in certain areas of the hippocampus. More recent studies, notes Jacobs, suggest that stress and glucocorticoids inhibit new cell growth in the hippocampus as well. Specifically, Princeton University neuroscientist Elizabeth Gould, using a chemical called BrdU that marks newly divided cells, has found that exposing monkeys to chronic stress blocks the new neuron growth found in control animals.

    Intriguingly, several effective depression fighters may have the opposite effect. Prozac, for example, increases the amount of serotonin in the gaps between brain cells, and serotonin, Jacobs notes, is a well-known promoter of cell growth during fetal development. It seems to have similar effects in adult animals as well. In work in press at the Journal of Neuroscience, neuroscientist Ronald Duman of Yale University and his colleagues have found that rodents given any of three different classes of antidepressant drugs (including Prozac) or electroshock therapy all have significantly more newly divided cells in the hippocampus. This suggests, Duman says, that increased neurogenesis is a common effect of antidepressant treatment.

    A more natural antidepressant—exercise—may also encourage brain cell growth. Exercise has been shown to increase the level of serotonin in the brain and can often help patients shake off mild depressive symptoms. Neuroscientist Fred Gage of the Salk Institute for Biological Studies in La Jolla, California, and his colleagues reported last year in PNAS that rodents with access to a running wheel (on which they ran an average of nearly 5 kilometers per day for several months) had more than twice as many cells marked with BrdU as did mice with no running wheel.

    The mood stabilizer lithium also seems to trigger growth of brain cells in humans. In work published last week in The Lancet, Husseini Manji of Wayne State University School of Medicine in Detroit and his colleagues reported that patients who suffer from bipolar disorder experienced a slight but measurable increase in brain gray matter volume after 4 weeks on lithium, as detected by MRI. The team is now trying to determine whether the increase is concentrated in a particular area of the brain.

    All these results are suggestive, agrees Mayberg, but she is not yet convinced. “Do I think failure of neurogenesis is the cause of depression? I don't think there's any direct evidence to support that,” she says. “We have several intriguing coincidences. The task now is to see whether they're causally linked or working in parallel with each other.” As several scientists point out, there is little evidence that the hippocampus is a primary player in mood. And the shrinking observed in the hippocampus during depression may be a sort of collateral damage rather than a cause of depression, says Mayberg.

    But Jacobs thinks that the connection is more direct. Depressed patients often report memory problems, he notes, and one of the first signs of Alzheimer's disease—which attacks the hippocampus—is depression. There are known links between the hippocampus and the amygdala and the prefrontal cortex, the two regions of the brain that are thought to control emotion and have long been suspected in depression. It's possible, he says, that a dearth of new neurons—triggered by a stress-induced spike in glucocorticoids—could disrupt those connections and lead to depression. But he agrees that cause and effect are difficult to separate.

    “It is all highly speculative, and yet it is based on some pretty interesting observations,” says neuroscientist Bruce McEwen of Rockefeller University in New York City, who has studied the effects of stress on the hippocampus. “It's a fascinating possibility that in some of these psychiatric disorders, you might be able to rescue the brain” by designing new drugs to encourage the regrowth of lost neurons. But it will take a lot more basic research—and a lot more debate—before doctors will be able to attempt such a rescue.