News this Week

Science  17 May 2002:
Vol. 296, Issue 5571, pp. 936

    DuPont Ups Ante on Use of Harvard's OncoMouse

    1. Eliot Marshall

    The Harvard OncoMouse, a patented animal with a checkered past, is causing trouble in academia once again. E. I. du Pont de Nemours and Co. of Wilmington, Delaware, which controls rights to this genetically engineered rodent, has become more assertive about asking U.S. researchers to obtain licenses for permission to use it. The company—arguing that the OncoMouse patents cover any transgenic animal predisposed to cancer—is also asking institutions to enforce the agreements. Some have complied readily, according to DuPont, but others, including the Massachusetts Institute of Technology (MIT) and the University of California, appear to be dragging their feet. And a few researchers and administrators are up in arms.

    DuPont regards the flap as a small misunderstanding, according to J. Gregory Townsend, associate director of DuPont's Intellectual Assets Business. “There's been some confusion,” he says, because “people are not familiar with the exact terms” of a broad agreement the company worked out with the community a couple of years ago. In talks managed by the National Institutes of Health (NIH), DuPont agreed in 2000 to provide what Townsend calls a “free research license” to any NIH scientist or NIH grantee doing noncommercial studies with the mouse. Anyone who wants to use the animal in drug screening or other company-related projects must obtain a commercial license and pay a fee.

    Even though most academics can use the mouse for free, the deal still requires that each institution sign a contract and comply with the terms. They may share mice only with others who have a license from DuPont and must file an annual report with DuPont—requirements that upset some researchers. Under Townsend, who took charge of this portfolio last year, DuPont has become politely insistent; he says he is surprised to hear that anyone would regard a free license as “burdensome.”

    At the center of the flap is a mouse engineered to develop cancers that closely mimic human disease. Although transgenic animals are not widely used in connection with clinical research, they could become valuable for testing new therapies. The first to file property claims on the cancer-prone mouse were Philip Leder of Harvard Medical School in Boston and his colleagues. Harvard received a series of three patents, the first in 1988 and the most recent in 1999; all have been licensed exclusively to DuPont. The most recent one covers toxicology and cancer therapy testing; it will remain in force for another 14 years.

    A few scientists—such as Tyler Jacks, chief of MIT's Center for Cancer Research and a developer of research animals, and oncologist Kevin Shannon of the University of California, San Francisco—are concerned that DuPont's licensing campaign could bog down the testing of new therapies. Some have suggested ignoring or resisting the company's demands, arguing that the broad patent claims would not survive in court. But university administrators aren't eager to litigate.

    MIT was pulled into the fray in “early March,” says Karen Hersey, MIT's technology licensing chief, when “we received a letter saying … that we had not signed a licensing agreement.” MIT responded that “we didn't know” it should have one. DuPont then provided the names of three individuals who were using the mice without a license. Jacks was one of them. When MIT notified these scientists that DuPont, in effect, was after them, the reaction was “hot,” says Hersey.

    Prime example.

    DuPont wants MIT to sign the Harvard OncoMouse patent license to cover work by Tyler Jacks and others.


    Jacks, creator of a popular p53 knockout mouse, is upset by the breadth and potential impact of DuPont's licensing demands. He has been exchanging mice freely with academic researchers and thought his laboratory was covered by the blanket agreement NIH and DuPont had worked out 2 years ago. Now, the company has begun pressuring his institution and others to take out licenses. In particular, Jacks “strongly objects” to DuPont's claim that “any animal with germ line disruptions that is cancer prone” must be licensed for research use under the OncoMouse patent. He's disappointed that institutions seem to “back down” to such broad patent claims.

    Even more outspoken is Andrew Neighbour, associate vice chancellor for research at the University of California, Los Angeles. At a meeting of a cancer advisory panel at the Institute of Medicine in Washington, D.C., last month, Neighbour criticized the OncoMouse licensing campaign. DuPont's “nonnegotiable” terms, he said, will impede the use of cancer-prone mice in labs that are doing company-sponsored research or that are testing proprietary drugs. And the fee for a commercial OncoMouse license, Neighbour said, could be up to “two times the amount of the sponsored research contract”—creating an “economic burden [that] will restrict research.”

    Townsend denies that the company has done anything that might impede the use of genetically engineered mice. “We can turn around a license to an academic user in 2 days to a week,” he says. Townsend notes that the mouse patents had been through several major legal trials already—including in Europe and Japan—suggesting that they would stand up to any new challenge in the United States. Most research institutions are working out agreements amicably, he reports, although he is still waiting to hear from MIT and California. He declined to comment on specific fees but stated firmly that “any commercial use” of the mouse “does require a license from DuPont.”


    Community Hails Bill to Double Budget

    1. Jeffrey Mervis

    Science lobbyists have spent the past 4 years trying to get equal treatment for the National Science Foundation (NSF). They have been urging Congress to do for NSF what it is doing for the National Institutes of Health: double its budget, now $4.8 billion, over 5 years. Last week, they achieved a symbolic victory when Representative Sherwood (Sherry) Boehlert (R-NY), chair of the House Committee on Science, introduced a bill (H.R. 4664) that aims to accomplish just that.

    The bill faces a long and uncertain trip through the congressional labyrinth. But it includes a provision that could have a more immediate impact on the agency and perhaps even on the controversial practice of congressional earmarks. It requires NSF to rank proposed major new research facilities so that legislators will no longer feel free to pick and choose from among approved but unfunded projects, which circle expectantly like planes arriving at a crowded airport.

    Bigger bumps.

    The House bill would boost NSF's allowed budget by 15% a year for 3 years, a much larger jump than in recent years.


    Boehlert, a self-professed “cheerleader” for NSF, has long resisted the doubling argument, scorning it as the product of “randomly generated numbers” (Science, 11 May 2001, p. 1048). Instead, he has urged the community to spell out exactly what is needed and how much it will cost. Last week, however, Boehlert joined ranks with his admiring constituency. Leading the biggest science pep rally in years, the chair declared that NSF needs annual increases of 15% for the next 5 years if it is to succeed in bolstering basic research and education. Asked why he had changed his mind, Boehlert said that “there's a certain appeal to having a lofty goal. … I would have asked for a tripling [of NSF's budget], but I wanted to be realistic.”

    Even before Boehlert took to the microphone, scores of scientific societies papered the Capitol Hill venue with press releases praising him for his “leadership and vision” in calling for more federal dollars. NSF director Rita Colwell, although obliged by her position to support the president's request for a meager 5% boost next year, nevertheless calls the bill a “terrific show of bipartisan support by Congress.”

    Despite the euphoria, congressional aides and lobbyists acknowledge that the bill is just a small step in a long legislative process. Although the House is likely to back the bill, no version has yet been introduced in the Senate. And even a full congressional endorsement won't generate a penny more for NSF unless another set of legislators, who sit on the appropriations committees that control NSF's purse strings, climb onboard.

    The science committee can play a bigger role in the other major component of the bill: compelling the NSF director to rank the importance of proposed facilities. Currently, the agency's governing body, the National Science Board, says “yea” or “nay” to specific projects without indicating priorities.

    Out in force.

    Representative Sherry Boehlert, at podium, and other legislators are enveloped by science lobbyists at a press conference unveiling the NSF bill.


    That process works fairly well when NSF has enough money to do everything. But when money's tight, some approved projects get left out of NSF's budget request. Last year that led to a free-for-all, with backers of specific projects seeking congressional help to move up in the queue (Science, 27 July 2001, p. 586). These so-called earmarks are an unwarranted intrusion into scientific peer review, say many legislators. If NSF ranks its big-ticket items, says Representative Nick Smith (R-MI), who chairs the committee's research panel, that “would be a huge step toward making better decisions.” The president's science adviser, John Marburger, also thinks it's a good idea: “Any process that establishes priorities for funding is good,” he says.

    Colwell agrees that such an exercise is important, and she notes that the bill “leaves priority-setting in the hands of the director, which is most appropriate.” But sources say she views any mandatory sharing of those rankings with Congress as an encroachment on her prerogatives as a member of the executive branch. Colwell declined to elaborate, saying that “I'd prefer not to comment on pending legislation.”


    Panel Would Screen Foreign Scholars

    1. Jeffrey Mervis

    The U.S. government is putting another brick in the wall to shore up homeland security. This one is intended to prevent foreign terrorists from masquerading as researchers.

    Last week White House officials unveiled a proposal to create a panel that would screen foreign graduate students, postdocs, and scientists who apply for visas to study “sensitive topics … uniquely available” on U.S. campuses. The proposal comes as a relief to higher education officials, who had feared a more intrusive policy that would dampen the flow of foreign students and scholars. “This is an excellent framework for protecting national security, although many details remain to be spelled out,” says Terry Hartle of the American Council on Education, which has followed the issue closely. “They seem to be fairly narrow and defensible criteria,” agrees George Leventhal of the Association of American Universities, a group of 63 major research institutions.

    Presidential science adviser John Marburger unveiled the proposed policy last week at briefings for Congress and the higher education community. It flows from a 29 October 2001 presidential directive intended to stop foreign students and scientists from “abusing” the visa process by which they gain entry to U.S. educational institutions. (The U.S. Department of Agriculture has gone much further, declining to sponsor any new visas for foreign scientists to work in its labs. See Science, 10 May, p. 996.)

    Roughly 175,000 students or scholars enter the country each year to carry out scientific work, says James Griffin, a Department of Education official who is coordinating the effort while on loan to the White House. Of those, he says, perhaps a few thousand will warrant a closer look under the new guidelines. “But that doesn't mean they will be denied entry,” Marburger notes. Officials will look at what type of research they plan to pursue, where and with whom they will be working, and whether they will have access to specialized equipment of a sensitive nature.

    The screening would be done by a new Interagency Panel on Advanced Science Security (IPASS), created by and composed of representatives from the major U.S. science agencies as well as officials from the State, Justice, and Commerce departments. “Combining science agencies with law enforcement agencies should make for a more rational and systematic review,” says Hartle. University officials are also relieved that they will not have to decide which applicants warrant closer scrutiny. That will be the responsibility of either the State Department or the Immigration and Naturalization Service, although schools would be required to pass along information about significant changes in course work or research projects.

    The co-chairs of IPASS will be appointed by Secretary of State Colin Powell and Attorney General John Ashcroft. Griffin says that the White House is weighing a suggestion from university officials to set up an expert committee to help IPASS define “uniquely sensitive” courses of study and areas of research.

    A presidential directive spelling out how IPASS will operate is probably “a few months away,” says Marburger. The announcement was made now, he says, to give the academic community plenty of time to react.


    Did an Impact Trigger the Dinosaurs' Rise?

    1. Richard A. Kerr

    Large impacts would seem to be bad for dinosaurs. After all, a huge asteroid or comet ended the 135-million-year reign of the dinosaurs when it hit Earth 65 million years ago. But on page 1305, a group of researchers suggests that an impact also triggered the final rise of dinosaurs to dominance 200 million years ago. Proving that an impact is a two-edged sword will depend on demonstrating that a large body hit Earth at the very geologic instant that the dinosaurs' reptilian competitors abruptly died away and meat-eating dinosaurs came into their own.

    By following fossil footprints, geologist Paul Olsen of Lamont-Doherty Earth Observatory in Palisades, New York, and his colleagues show for the first time that the final ascent of the dinosaurs was indeed abrupt, at least in eastern North America. And they now have a geochemical hint—although not yet proof—of an impact at the geologic instant that dinosaurs established their supremacy. “There was something interesting going on” 200 million years ago, says Olsen.

    Linking evolution to impacts is a tough job. When researchers made the first impact-extinction connection in the 1980s, most of their colleagues were skeptical. But the case for an impact's wiping out the dinosaurs and numerous other creatures strengthened steadily following the discovery of high levels of iridium—an element rare on Earth but abundant in asteroids—in rock laid down at the boundary between the Cretaceous and Tertiary periods (K-T), when the dinosaurs disappeared and mammals began their rise. The iridium showed up around the globe sandwiched between Cretaceous rock and Tertiary rock, often accompanied by mineral bits bearing scars from the shock of impact. And these traces of impact always fell at the moment of extinction, a time pinned down with increasing precision as paleontologists built more detailed fossil records.

    No mere coincidence?

    Fern spores (top) marking a possible impact disaster immediately precede the first tracks of a new, bigger Jurassic dinosaur.


    Many paleontologists began to think that before long, every mass extinction would have its impact. No such luck. Not a single other extinction has been firmly linked to an impact, although there have been hints. In the early 1990s, palynologist Sarah Fowell of the University of Alaska, Fairbanks, and Olsen found a rock layer rich in the spores of ferns—plants that rush in when the landscape is devastated—in southeastern Pennsylvania. These fern fossils appear in rocks formed at the Triassic-Jurassic (T-J) boundary 200 million years ago. (A similar fern spike marks the K-T boundary in western North America.) And geologist David Bice of Carleton College in Northfield, Minnesota, found what he suggested were impact-shocked quartz grains near a marine T-J boundary in Italy (Science, 11 January 1991, p. 161).

    But the Pennsylvania fern spike and the unimpressive Italian shocked quartz never won anyone over, so Olsen and colleagues checked the fern spike for iridium and hit pay dirt. As they report in their paper, three sites in Pennsylvania show elevated iridium across the same 40 centimeters of rock containing the spore spike. Peak iridium comes at the base of a 5-centimeter coal layer sitting on top of a layer of claystone, much as K-T rock looks in western North America. But at a maximum of 285 parts per trillion, the T-J iridium is not far above a background of 50 parts per trillion and is only one-third the size of the lowest concentrations found at the K-T. That small an amount of iridium might have been concentrated by natural geochemical processes or perhaps even carried in from volcanic eruptions; one of the largest outpourings of lava in Earth's history began nearby no more than 20,000 years after the boundary and has itself been suggested as a trigger for the T-J events (Science, 18 August 2000, p. 1130).

    What was happening to the dinosaurs during the iridium-dusted fern spike? To find out, Olsen and his colleagues—especially amateur paleontologists Michael Szajna and Brian Hartline of Reading, Pennsylvania—collected footprints left in the mud of the string of lakes that ran through the middle of what was then the supercontinent Pangea. Lumping together more than 10,000 tracks found in former lake basins from Virginia to Nova Scotia, they found that “the nondinosaurs were getting wiped out” across the boundary, says Olsen; dinosaurs jumped from 20% to more than 50% of taxa. At the same time, meat-eating dinosaurs ballooned to twice their previous mass, to judge by the size of their tracks—much as mammals grew larger after the K-T.

    In the Newark basin lake sediments in New York and Pennsylvania, the group found that tracks of Triassic reptiles that had been around for 20 million years disappeared within 20,000 years of the spore-iridium event. Then the first distinctive tracks of dinosaurs that would dominate the Jurassic appeared within 10,000 years after the event. Given the high statistical unlikelihood of ever finding the last Triassic track or the first Jurassic track, that places all four events—the disappearance of Triassic reptiles, the ascendancy of the dinosaurs, an apparent disaster among plants, and a hint of an impact—in the same geologic instant.

    Paleontologists like what Olsen and his colleagues did with their huge footprint database. “They've definitely pinned [the evolutionary transition] right on the boundary,” says paleontologist Michael Benton of the University of Bristol, U.K., thanks to their use of clocklike climate cycles recorded in the lake basins. Impact specialists are less impressed. The iridium by itself is unimposing, says cosmochemist David Kring of the University of Arizona in Tucson. Finding clear-cut shocked quartz would be convincing, he notes, but analyses for other, iridiumlike elements could show that the iridium is truly extraterrestrial. Then the dinosaurs could feel ambivalent about visitors from outer space.


    Novartis Sows Its Future in U.S. Soil

    1. Andrew Lawler*
    1. With reporting by Helena Bachmann in Geneva.

    CAMBRIDGE, MASSACHUSETTS—It was no mere political braggadocio when U.S. Senator Edward Kennedy (D-MA) last week called Cambridge's Kendall Square the “epicenter of the biotech world.” The Swiss drug giant Novartis, based in Basel, intends to set up a $250 million research facility here that will guide its overall R&D efforts—a move that has sent shock waves rippling through the company's home turf.

    Novartis's move is the latest blow to homegrown European drug research and reflects the company's efforts to keep U.S. competitors in its sights. “Europe created its own problems by failing to … ensure a dynamic research environment,” explains Novartis Chief Executive Officer Daniel Vasella. The new center—the Novartis Institute for Biomedical Research Inc.—will coordinate the company's $2.4-billion-a-year R&D portfolio in the United States, Japan, and Europe.

    The lab, slated to open early next year, initially will house 400 scientists—eventually staffing up to 1000—and will specialize in developing drugs against diabetes, cardiovascular ailments, and viral diseases. Its market is increasingly centered on this side of the Atlantic: Less than one-third of Novartis's sales are in Europe, while 43% are in the United States.


    Novartis will set up its new center in this MIT-owned building.


    However, it was the talent pool as well as drug sales that convinced Novartis to establish its research hub in the United States. After considering both Southern California and the San Francisco Bay area, Vasella chose Cambridge with its winning combination of academic institutions such as the Massachusetts Institute of Technology (MIT) and Harvard University and its boatload of biotech businesses crowding an area once known for candy factories.

    Novartis was drawn to Cambridge's “interwoven environment,” as Vasella calls it, where the traditional lines between industry and academia are becoming ever more blurred. The company persuaded Mark Fishman to leave academia—Harvard Medical School—to head the new center. The reluctant cardiovascular researcher turned down the job twice before Vasella overcame his skepticism about jumping to industry. “It was a very long and difficult sell,” says Vasella. The blurring was also apparent in the setting of Vasella's announcement: the home of MIT president Chuck Vest. MIT will be Novartis's landlord, and talks likely will get under way this summer on a potential collaboration between the two powerhouses, Vest told Science. The company already has a decade-long collaboration with Harvard's Dana-Farber Cancer Institute that has been key to the development of the new cancer drug Gleevec.

    Novartis hopes to avoid a reprise of the controversy surrounding its $25 million investment in plant research at the University of California, Berkeley, in 1998, which sparked widespread concern among academics about industry influence over the direction of university research. But that conflict pales in comparison with the general resistance in Europe to links between industry and academia. And European governments have failed to match the prodigious investments in biology and biotechnology made by both the U.S. government and venture capitalists, Vasella says: “The U.S. has pursued a much smarter policy.”

    Such statements may sting the home crowd, but they aren't being disputed. Members of the Swiss scientific community agree that their research programs are underfunded and offer few incentives to retain young talent. Last November, the Swiss Science and Technology Council launched a petition imploring the government to boost the research budget by 10% within 5 years. “The Novartis move is a very serious symptom of the downhill course of research in Switzerland,” says Catherine Nissen-Druey, the advisory body's vice president. “It sends a message to young Swiss scientists that research is more promising in the U.S.A. than it is here.” Nor is the Novartis move the first symptom of an ailing research community: Last summer, Switzerland's other drug giant, Roche, shuttered its once-vaunted Institute of Immunology in Basel (Science, 13 July 2001, p. 238).

    Novartis hasn't turned its back on Switzerland entirely: Vasella says that all of the company's 1400 researchers in Basel will keep their jobs. It will also maintain its labs in the United Kingdom and Austria. But there's no getting around the fact that the European contingent will now be looking west for their marching orders.


    Big Bucks for MIT Brain Center

    1. Andrew Lawler

    CAMBRIDGE, MASSACHUSETTS—Just across the street from Novartis's new center (see previous story), another impressive research facility will break ground this fall, a $150 million academic complex devoted to neuroscience. That effort got a big boost last week, when the Massachusetts Institute of Technology (MIT) received $50 million—the largest foundation gift in its history—to jump-start one part of the complex: a learning and memory center led by Nobel laureate and biologist Susumu Tonegawa.

    The money will pay for the new facility, additional faculty members, and an endowment for Tonegawa's center. But the gift won't clarify the fuzzy boundaries among the different pieces of MIT's neuroscience effort, which also includes a new institute led by fellow Nobel laureate and biologist Phillip Sharp, MIT's existing brain and cognitive sciences department, and an imaging institute. MIT officials say they are intentionally leaving the lines of responsibility blurred, and that the new neuroscience complex will allow the different groups to interact closely.

    Better mousetrap.

    Barbara Picower, left, Jeffry Picower, center, and Norman Leventhal, MIT class of 1938, examine an experiment designed to test mouse memory at Tonegawa's institute.


    “MIT is taking a comprehensive approach to the study of the brain,” says Robert Silbey, MIT science dean. And MIT president Charles Vest acknowledges “some conceptual overlap,” saying it reflects not only the difficulty in drawing boundaries in an interdisciplinary field but also “some conceptual separation.” At a 9 May press conference announcing the gift from the Picower Foundation based in West Palm Beach, Florida, Vest said that Tonegawa's piece of the brain pie will cover research from fundamental molecular neurobiology to systems neuroscience, whereas Sharp's institute will focus on systems, imaging, and computational neuroscience.

    Sharp's institute, which has been slow to set a research agenda (Science, 24 August 2001, p. 1418), held its first major meeting this week with a heavy emphasis on molecular biology; many papers were devoted to neural stem cells and genetic neuroscience as well as imaging. Sharp doesn't see a boundary problem among his group, the imaging institute, and Tonegawa's. “It's healthy overlap,” he asserts.

    The Picower gift to Tonegawa's center, which will be renamed the Picower Center for Learning and Memory, will disburse $10 million a year over 5 years, giving new clout and personnel to Tonegawa's efforts to understand the molecular basis for learning and memory. Thirty million dollars of the gift will go toward the complex, $12 million will be allocated to four new faculty positions, and the remaining $8 million will be used to establish an endowment.

    This amount is far smaller than the $350 million pledged to Sharp's McGovern Institute for Brain Research. But that pledge provides only $5 million a year for the first 20 years, half of what Tonegawa will receive in the first 5 years. The complex will be ready in 2004 or 2005.


    A Hidden Arabidopsis Emerges Under Stress

    1. Eliot Marshall

    The common mustard plant (Arabidopsis thaliana) may look drab, but results published online by Nature this week show that it has a surprising ability to break out into new forms—some of them weird and exotic—when it is under stress. The report's authors, Susan Lindquist, now chief of the Whitehead Institute in Cambridge, Massachusetts, and her University of Chicago colleagues Christine Queitsch and Todd Sangster, suggest that this ability may play an important role in evolution, possibly allowing organisms to store up alternative survival strategies and express them only when environmental challenges go beyond the normal range.

    Queitsch says that the current findings grew out of earlier research on fruit flies by Suzanne Rutherford, a member of Lindquist's group when she was at Chicago (Science, 4 December 1998, p. 1796). In that work, Rutherford, now at the Fred Hutchinson Cancer Research Center in Seattle, Washington, wanted to see how changing the level of a protein called heat shock protein 90 (HSP90) affects fly development. HSP90, a so-called chaperone, helps protect organisms against the deleterious effects of high temperatures by binding to cell proteins and keeping them from unraveling and clumping together.

    Lindquist and her colleagues were investigating how HSP90 protects individual flies from harmful genetic mutations when they discovered that its role is at once broad and transitory. Rutherford found that reducing HSP90 in developing fruit flies produced dramatic morphological changes, such as misshapen wings and abnormal eyes. Some of these mutations became “fixed” and could be passed on to subsequent generations.


    The standard plant (top) reveals exotic new forms if HSP90 is restricted.


    The broad pattern suggested that the fruit fly genome harbors many developmental mutations that are normally suppressed by HSP90. Rutherford and Lindquist gave the name “buffering” to this ability to store but not express alternative genetic programs. Queitsch explains that HSP90—which physically stabilizes the function of proteins by altering their shape—is just one of many chaperone proteins that appear to have similarly widespread effects on gene expression. The researchers proposed that this ability allows eukaryotes to experiment with radically new phenotypes when an established phenotype comes under stress.

    To see how widespread the phenomenon might be, Queitsch tried a similar experiment in Arabidopsis, an organism she says she chose because it is so genetically distant from the fruit fly. If the same pattern emerged, she and her colleagues reasoned, it would strengthen the argument that there may be simple environmental variables—such as concentration of HSP90—that bring about dramatic variations in gene expression. They treated Arabidopsis seedlings with a chemical that interferes with the HSP90 protein and produced a stunning array of developmental changes.

    They found, for example, that leaves that are normally held at right angles come out in a whirling dervish formation, the plant's ordinary gentle green hue turns dark, and roots that should dive into the earth reach instead for the sky.

    A variety of experiments showed that the changes the Lindquist group saw were not due to random drug effects but reflected underlying genetic differences in the plant strains. By crossing different genotypes, inbreeding them, and comparing the effects of HSP90 on offspring, the Chicago researchers say they found that the plants clustered in morphological groups according to their genetic lineage.

    The study is “excellent,” and the analysis of this genetic puzzle is “really super-interesting,” says botanist John Archibald of the University of British Columbia in Vancouver, Canada. It is particularly interesting, according to cellular biochemist F. Ulrich Hartl of the Max Planck Institute for Biochemistry in Martinsried, Germany, because it applies “a protein biochemical concept to phenomena which are normally viewed in a genetic and more deterministic framework.” He notes, however, that this paper says less about the heritability of traits than the fruit fly report, suggesting that the authors may be “taking a step back” from the proposal that buffering offers an evolutionary advantage.

    Sangster and Queitsch say that there has been no retreat, just a shift in emphasis. They intend to study evolutionary effects in their next experiments. They also plan to use HSP90 to bring out hidden variations that could be valuable in agriculture—looking for ways to modify plants without germ line alterations.


    Gas-Filled Chip Bids to Outshine a Computer

    1. David Bradley*
    1. David Bradley is a writer based in Cambridge, U.K.

    Want to take in all the sights of London without wearing out your shoes or make all those sales visits in the shortest possible distance? Let some glowing helium gas do the walking. Andreas Manz and colleagues at the Imperial College of Science, Technology, and Medicine, London, and a Harvard University team led by George Whitesides are taking a crack at the classic “traveling salesman problem” (TSP) in an entirely new way: using a lab on a chip.

    Finding the shortest route around a certain number of stops—whether they are tourist attractions, potential customers, or workstations in a factory—is relatively easy if the number of stops is small. But as more stops are added, the number of possible routes grows factorially, and the calculation quickly becomes impractical even for the fastest computers.
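The blow-up is easy to see with a short calculation. A minimal sketch (the helper `tour_count` is illustrative, not from the article): fixing the starting city and ignoring travel direction, a round trip through n stops can be made in (n − 1)!/2 distinct ways.

```python
from math import factorial

def tour_count(n_stops):
    """Number of distinct closed tours through n_stops cities.

    Fixing the starting city and ignoring travel direction leaves
    (n - 1)! / 2 distinct round trips to compare.
    """
    return factorial(n_stops - 1) // 2

# The count explodes far faster than any polynomial in the number of stops:
for n in (5, 10, 15, 20):
    print(n, tour_count(n))
```

At 5 stops there are only 12 tours to check; by 20 stops a brute-force search is already out of reach for a desktop machine, which is why heuristics or unconventional hardware become attractive.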

    Mathematicians and computer scientists have struggled with the TSP for decades, but Manz and Whitesides, specialists in lab-on-a-chip technology, describe a more mechanistic approach in the May edition of the journal Lab on a Chip. Essentially, they have etched the problem onto a sliver of glass and let a fluid find the best route. They have so far used this analog device to work out the shortest routes between various landmarks in central London, such as Imperial College and Buckingham Palace. Other researchers call the work an impressive technical demonstration. “It is very, very cool,” says microengineer David Beebe of the University of Wisconsin, Madison.

    A to Z.

    Glowing helium shows the shortest route from Imperial College to Big Ben.


    The researchers' first step was to represent the problem graphically by etching a map of London onto a glass chip. They then covered the etched part of the chip with another piece of flat glass to create a network of pipes. They also fixed tiny electrodes to the chip so that they could apply a voltage to various locations.

    The researchers then pumped low-pressure helium into the chip through open channels along one edge and filled the pipes. Using the electrodes, they could then apply an electric voltage between two points on the chip. The electric field would then guide an electric discharge along the shortest route between the two points, making the helium glow like a fluorescent tube just along that route. The answer to the problem literally lights up. The team members say the method can at present be used to find the way out of a maze and the shortest route between two points, but they hope to develop it for the more complex TSP and network flow problems. “We had really good fun doing this,” Manz says.

    Manz concedes that the technique has limitations, such as the fact that once a layout is etched onto a device it cannot be changed. But the team hopes to scale up to much more complex problems soon. “With present knowledge about plasma discharge in narrow capillaries, we can assume to be able to work with 5-micrometer capillaries instead of the current 250-micrometer channels in this example,” says Manz. This would allow them to stud a 6-cm² chip with 1 million electrodes, providing 2^1,000,000 possible routes across the chip.

    Next the researchers hope to find a way to control the opening and shutting of channels on the fly. That would enable them to create a variable chip that could solve a range of problems by changing the network each time to represent a different maze, map, or network layout. “The new digital wave of technologies has opened up a variety of possibilities that will be very hard to surpass,” Manz acknowledges. Still, he says, “this technology would benefit from open-minded engineers with a good feeling for where the future lies in computing.”

    Whether glass chips can rival a digital computer remains to be seen. “There is no doubt that [this is] a clever piece of work,” says computer scientist Paul Purdom of Indiana University, Bloomington. “It is an interesting physics problem to determine whether it can be made to work more rapidly than a traditional computer.” Beebe thinks racing a digital computer is pointless. But “I'll bet there are other applications … that none of us have thought of yet,” he says.


    Cherished Concepts Faltering in the Field

    1. Ben Shouse*
    1. Ben Shouse is a freelance writer in New York City.

    Scientists at the U.S. Fish and Wildlife Service (FWS) thought they had finally won a measure of respect from their peers after adopting two major revisions in their approach to endangered species: setting aside critical habitat, and taking a big picture, or whole ecosystem, view in writing recovery plans. Now they must be feeling like the Rodney Dangerfields of ecology. A clutch of papers in the June issue of Ecological Applications suggests that FWS's new approach is faltering. At stake is the success of a series of high-profile initiatives, including the agency's ambitious plan for protecting the Florida Everglades and its 68 imperiled species.

    Not that the old modus operandi—essentially viewing an endangered species in a vacuum—was a smashing success. Of roughly 1000 species listed in the United States as endangered, only 13—including the American peregrine falcon and the American alligator—have rebounded enough to warrant removal from the list. For years, sympathetic voices blamed this disappointing record on a welter of litigation that siphoned away FWS funding for implementing recovery plans. “They're getting eaten alive by the day-to-day issues,” says James Michael Scott, a University of Idaho, Moscow, zoologist who works extensively with FWS. Critics, however, have questioned the agency's grip on current science.

    For a sweeping review of protection strategy, FWS and the Society for Conservation Biology launched a massive data-crunching project in 1998 involving more than 300 people at 19 universities. An army of students led by ecologist P. Dee Boersma of the University of Washington (UW), Seattle, pored over 136 recovery plans, FWS's blueprints for endangered species under its jurisdiction, addressing some 2600 questions for each plan.

    The fruit of this labor—“a huge and onerous spreadsheet,” as one of the foot soldiers calls it—was not a total slam against the agency. The study lauds FWS for steadily improving its use of science, for instance, by adopting better defined measures of a species' status.

    Postcards from the edge.

    Unlike the peregrine falcon, the Florida panther remains in grave danger.


    But several practices came under fire. Over the last decade, FWS has relied increasingly on recovery plans designed to preserve many species facing common threats in the same habitat. The analysis revealed that species in such plans are more likely to be in decline than are those in plans custom-built for their own survival, even after adjusting for when the plan was written. Probing further, the study found that FWS's multispecies plans tend to be lighter on biology than the single-species plans. That's “a very disturbing and unsettling trend,” says UW's Alan Clark, who led this part of the analysis. He cautions that the study is not an indictment of multispecies plans in general. These may well work, he says, as long as they don't give short shrift to individual species.

    Nevertheless, conservation biologists are chagrined that multispecies plans, so good in theory, are struggling in the field. The finding “caught me by surprise,” says biologist David Wilcove of Princeton University. FWS, he notes, began drafting such plans in response to criticisms that the agency moved too slowly, and in a piecemeal fashion, in getting recovery efforts under way. “They are now vulnerable to the charge that they are providing inadequate analysis,” Wilcove says. “For the FWS, it's a can't-win situation.”

    More disappointment comes from the study's critique of critical habitat, a designation that the Endangered Species Act provides to extend protection to a beleaguered species' home range. Lawsuits forced FWS to accelerate designations last year, bleeding time and money from the listing of new species, for little if any gain: The study concludes that critical habitat designation does not correlate with better data on the habitat or improved measures to preserve it.

    FWS puts a positive spin on the analysis. The findings do not depict an agency failing in its mission, insists Martin Miller, recovery chief in FWS's endangered species division. He welcomes the criticisms and plans to incorporate them into new recovery guidelines now being prepared. And he pledges to build on FWS's newfound links with academia. “We see that as one of the most important benefits” of this exercise.

    Jamie Clark, FWS director from 1997 to 2001, says the “thoughtful and incisive” study should help the agency shape its efforts, which she thinks should continue to feature multispecies planning and critical habitat designation. Devising sound plans is a struggle for an agency in perpetual crisis, she acknowledges. The key to success, she says, will be to “slow down the fire hose of everything else that's happening at FWS long enough to focus on science.”


    Mutations Reveal Genes in Zebrafish

    1. Gretchen Vogel

    To piece together an organism's blueprint, developmental biologists have to work backward. By deliberately disabling genes and watching what happens, researchers can discover the roles the genes play in development, gradually piecing together a building plan for an embryo. Even for a relatively simple fish, the task is daunting.

    In a significant step toward a blueprint for vertebrates, a team of developmental geneticists has just published new results from a large-scale screen of zebrafish mutations. In a 13 May online publication by Nature Genetics, Nancy Hopkins, Adam Amsterdam, Gregory Golling, and their colleagues at the Massachusetts Institute of Technology describe 75 mutants and—unlike previous screens—the genes responsible for the deformities. The work is “a technological tour de force” that will speed the efforts of other researchers in the field, says developmental biologist Len Zon of Harvard Medical School in Boston.

    Zebrafish are ideal models: They are easy to care for, they reproduce quickly, and their see-through embryos enable researchers to easily spot missteps in development. Essentially, scientists simply need to create genetic mutations, usually with a chemical, and then examine the embryonic wreckage. When they find an especially interesting phenotype—one eye instead of two, for example—the researchers then try to find the mutation that caused the abnormality. Such work began in earnest in the 1990s, and in 1996 groups in Tübingen, Germany, and Boston published dozens of papers describing a zoo of deformed fish (Science, 6 December 1996, p. 1608). But pinpointing a single mutated gene requires breeding hundreds of fish and can easily take more than a year. As a result, researchers have so far cloned genes responsible for only about 70 of the thousands of mutants the project created.

    No stripes.

    A normal zebrafish (top) and a mutant fish with irregular coloring.


    To speed the gene-tracking process, Hopkins and her colleagues used a genetically engineered retrovirus to create mutations. The virus enters the reproductive cells of parent fish and inserts itself into the genome—sometimes disrupting a gene. If the disrupted gene is crucial to development, the resulting offspring show the effects. Although the virus is not as efficient as chemicals in causing mutations, it has a key advantage: The affected genes are relatively easy to track down. The researchers use inverse polymerase chain reaction to locate the viral genes in the genome of the deformed embryo and then sequence the regions on either side of the inserted DNA, looking for traces of the disrupted gene. About half the time, Hopkins says, the first attempt yields a likely gene at fault. The team has found some genes in as little as 2 weeks.

    Consistent with earlier screens, two-thirds of the mutants had either an unusually small head and eyes or general central nervous system degeneration. Researchers usually ignore such nonspecific mutations, focusing their resources on abnormalities that affect a single process or organ system. But the Hopkins team gave all its mutants equal treatment. Many of the culprits behind the general deformities are so-called housekeeping genes that control basic cellular functions such as DNA repair and protein manufacture, as researchers had suspected. But this is the first time anyone has shown in such detail the developmental roles of those basic genes, Amsterdam says.

    When the project is finished in 2 to 3 years, Hopkins says, the team will have identified roughly one-fifth of the genes required to make a 5-day-old larva, when the fish is “quite a significant little vertebrate animal,” able to swim and search for food.

    “The advantage of this screen is that it is comprehensive. It allows you to envision getting a phenotype for every gene expressed and functioning during embryogenesis,” notes Marnie E. Halpern, a developmental biologist at the Carnegie Institution of Washington in Baltimore.

    The project, partly funded by Amgen, will likely help human geneticists as well: All 75 genes described in the paper have human counterparts.


    Shrinking Fuel Cells Promise Power in Your Pocket

    1. Robert F. Service

    Micro fuel cells stack up well against batteries on paper. But the devices still face engineering, financial, and even political hurdles

    Modern conveniences can be so inconvenient. A billion of us now cart around laptops, cell phones, and other portable electronic gadgets. But as nifty as these accessories are, they have us perpetually prowling for outlets to recharge their ever-fading batteries. And the problem is only getting worse. Packed with energy-hogging color screens and video and data transmission capabilities, next-generation devices threaten to drain batteries in just tens of minutes instead of the few hours they take today. “Even with improvements, batteries cannot keep up with the needs,” says Hyuk Chang, a principal researcher with Samsung Advanced Research Institute in Suwon, Korea.

    Until now consumers haven't had much of an alternative. But that's about to change. A bevy of companies are closing in on commercialization of micro fuel cells, small devices that convert chemical fuels such as hydrogen or methanol directly into electrical power. Packed with energy, these chemical fuels promise to power devices up to 10 times as long as batteries on a single charge. For laptop users, that means no longer running out of juice on a long flight; for cell phone fanatics, it means 20 hours of nonstop gabbing. And even when the juice does run low, fuel cells can be recharged instantly just by adding more fuel. When it comes to powering advanced portable electronics, “fuel cells seem to be a front-runner in a field where there are very few choices,” says Dennis Sieminski, business development manager of AER Energy Resources Inc., a battery development company in Smyrna, Georgia.

    Among fuel cell makers, the race is just heating up. Companies including Motorola, Samsung, and Manhattan Scientifics introduced new prototype micro fuel cell devices in the past year. And these and other companies expect commercial versions of the devices to begin hitting stores in the next 2 years. The early devices, company officials say, will likely be small, cigarette pack-sized modules that serve as backup power supplies for devices slotted with conventional rechargeable batteries. But by 2007, many experts believe, chemical power will have begun systematically replacing conventional plug-and-play electric rechargeables altogether—if, that is, micro fuel cell makers can overcome a few none-too-small barriers, including high cost and concerns over the danger of carrying combustible fuels aboard airplanes.

    Charging to market?

    For years, companies have been competing furiously to create macrosized fuel cells to power cars and generate electricity for homes and offices. Large cells still represent the biggest potential markets for the technology, says K. Atakan Ozbek, who heads energy research at Allied Business Intelligence Inc., a technology research company in Oyster Bay, New York. But the micro fuel cell market is nothing to sniff at. Battery makers, for example, sell some $5 billion a year of rechargeables alone, and cell phone makers distribute close to 400 million new phones a year. If micro fuel cell makers manage to capture even a fraction of those markets, they'll quickly find themselves doing big business.

    They certainly compete well on paper. Methanol, the most common fuel for small cells, has more than 10 times the energy density of the active material in a lithium ion battery. If a fuel cell converted just half of the energy in 30 grams of methanol into electricity, it would put out 80 watt-hours—compared with fewer than 7 watt-hours for a conventional mobile-phone battery. And consumers on the go are willing to pay extra for power that lasts longer and can be instantly recharged, Ozbek says. As a result, “micro fuel cells will be the first ones that most consumers see on the market,” he predicts.
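The arithmetic behind that claim can be checked on the back of an envelope. A sketch, assuming methanol stores roughly 5.5 watt-hours per gram (its approximate lower heating value; the article quotes only the end result):

```python
# Back-of-envelope check of the article's figures. The specific energy of
# methanol (~5.5 Wh per gram) is an assumed textbook input, not from the
# article itself.
methanol_wh_per_g = 5.5   # assumed specific energy of methanol
fuel_mass_g = 30          # fuel load in the article's example
efficiency = 0.5          # "half of the energy ... into electricity"

usable_wh = methanol_wh_per_g * fuel_mass_g * efficiency
print(f"{usable_wh:.1f} Wh")  # close to the 80 Wh cited, vs. ~7 Wh for a phone battery
```

The result lands within a few watt-hours of the article's 80 Wh figure, which suggests the cited comparison rests on roughly this calculation.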

    But like all fuel cells, the miniature versions face numerous hurdles on the way to market. Most micro fuel cells are packed with precious-metal catalysts that make them costly to produce. They run on flammable chemical fuels. Because they typically perform best at high temperatures, they must be well insulated to protect the electronics they power—not to mention anyone carrying them around. And the devices are a long way from proving themselves as reliable and versatile as plug-and-play batteries.

    Power play.

    By cleverly reshuffling atoms and electrons, fuel cells convert hydrogen-rich fuel into electric current.


    Large or small, fuel cells work by converting chemical energy into current. The reactions take place inside a chamber containing two electrodes separated by an electrolyte that keeps the chemical reactants apart (see figure above). Hydrogen atoms are fed into the chamber at the negatively charged electrode (anode), where catalysts strip them of their electrons. The electrons are siphoned off to an electrical circuit where they are used to do work. The leftover protons, meanwhile, are drawn through the electrolyte—typically a plastic mesh that blocks free electrons from passing to the other side—to the positively charged electrode (cathode). There they combine with electrons returning from the circuit and oxygen molecules from air to form water, which is usually vented off as steam.
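The voltage such a cell can ideally deliver follows from the thermodynamics of the water-forming reaction. A sketch using textbook constants not given in the article (the standard Gibbs free energy of forming liquid water and the Faraday constant):

```python
# Ideal (open-circuit) voltage of a hydrogen fuel cell, E = -dG / (n F).
# dG is the standard Gibbs free energy of H2 + 1/2 O2 -> H2O(liquid),
# a textbook value; n = 2 electrons are transferred per hydrogen molecule.
delta_g = -237.1e3   # J/mol, standard conditions (assumed textbook value)
n_electrons = 2      # electrons stripped from each H2 at the anode
faraday = 96485      # C/mol, charge carried by a mole of electrons

ideal_voltage = -delta_g / (n_electrons * faraday)
print(f"{ideal_voltage:.2f} V")  # about 1.23 V; operating cells deliver less
```

Practical cells are stacked in series to reach useful voltages, since each individual cell tops out near this thermodynamic limit.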

    Although such hydrogen-consuming fuel cells can be extremely efficient, pure hydrogen must be stored in pressurized tanks—a drawback that makes it a “nonstarter” as the fuel source in mini fuel cells, says Kurt Kelty, who directs business development at Panasonic's Battery Research and Development Center in Cupertino, California. The alternative most micro fuel cell companies prefer is methanol. This liquid fuel has a high energy density and is plentiful and cheap, explains Mark Hampden-Smith, a vice president for catalyst supply company Superior MicroPowders in Albuquerque, New Mexico. The catalysts used in methanol fuel cells can also strip hydrogen atoms from methanol without a separate reforming step. What's more, he explains, in a fuel cell methanol breaks down into CO2 and water vapor without any leftover byproducts that could foul up the fuel cell over time.

    Reactions and regulations

    Getting methanol fuel cells to work at high efficiency, however, hasn't been easy. One problem lies in the fuel itself. Methanol can cross through the plastic electrolyte to the cathode, where it will block the reactions that form water, thus reducing the overall efficiency of the cell. To lessen the problem, researchers often dilute the methanol with water. Yet this solution creates problems of its own, as it tends to lower the overall power output of the cell.

    Future co-pilot?

    Fuel cells will have to get much smaller to replace batteries in devices such as PDAs.


    Numerous groups are working overtime to come up with membranes less permeable to methanol. At a meeting last month in Washington, D.C.,* researchers at DuPont, which produces the most popular electrolyte membrane, called Nafion, reported developing a pair of new thin plastics that drastically lower the amount of methanol that crosses to the wrong side of the cells. The two membranes reduced methanol crossover by 60%; one of them also allowed a cell to operate at a 60% higher power output, DuPont's Raj Rajendran reported.

    Researchers at the Japanese electronics giants NEC and Sony, meanwhile, are turning to all-carbon fullerenes for their electrolytes. Last year Sony, for example, reported creating a new electrolyte membrane using soccer ball-shaped C60 molecules. The fullerenes proved much less permeable to methanol, allowing Sony researchers to run their fuel cell without spiking the methanol with water.

    Researchers at Motorola are making progress on a very different solution to methanol crossover. They're using advanced semiconductor manufacturing techniques to create a separate miniature fuel reformer that strips methanol of its hydrogen atoms, which are then sent to a fuel cell. Because methanol never enters the fuel cell itself, the cell can be made with a simpler design, says Jerry Hallmark, an electrical engineer who heads Motorola's fuel cell effort at Motorola Labs in Tempe, Arizona. Motorola hopes that, once perfected, the reformers could be stamped out at low cost just as microchips are today. Other teams, including groups at Manhattan Scientifics, Mechanical Technology, and Los Alamos National Laboratory, are also looking to lower costs by creating chiplike cells. “I think it's still early days” for these devices, says Hampden-Smith. “But it's definitely a promising strategy.”

    No matter which approach manufacturers settle on, the technology could still find itself derailed by factors well beyond their control. Precious metals such as platinum and palladium, which are used as catalysts in both fuel cells and reformers, could jump in price if consumers suddenly demand hundreds of millions of devices a year, Hampden-Smith cautions. Security concerns might also bar the way. For now, methanol and other liquid fuels cannot be taken aboard commercial aircraft. Hallmark is leading industry negotiations with the U.S. Department of Transportation to allow properly packaged fuel canisters on planes, but he acknowledges that last year's terrorist attacks in the United States could make his efforts a hard sell. “If the consumer cannot carry a fuel cell cartridge, then you have no product,” Ozbek warns.

    For micro fuel cells to make it to market, tricky engineering challenges could prove the simplest problems to solve.

    • *The Knowledge Foundation Fourth Annual International Symposium: Small Fuel Cells for Portable Power Applications, 21–23 April.


    Biofuel Cells

    1. Robert F. Service

    While companies are battling to shrink fuel cells down to cell phone size, nature has already done them one better. Enzymes in creatures from bacteria to people extract energy from compounds such as glucose to power life. Now researchers are looking to borrow a page from biology's manual to create rice grain-sized fuel cells that run on chemicals inside our bodies. Such cells, they say, could someday power futuristic implantable sensors that monitor everything from blood glucose levels in diabetics to chemicals that signal the onset of heart disease or cancer.

    Researchers can already make glucose-detecting sensors as small as a millimeter across. “But you cannot make a submillimeter-sized battery at a reasonable cost,” says Adam Heller, a chemical engineer and biofuel cell pioneer at the University of Texas, Austin. “That's where we see the use for miniature biofuel cells.”

    Biofuel cells are much further from commercial development than their larger cousins. But recent progress has been heady. Last August, for example, Heller and his Texas colleagues reported in the Journal of the American Chemical Society that they had created a miniature glucose-powered cell that puts out 600 nanowatts of power, five times the previous biofuel cell record and enough to power small silicon-based microelectronics. Heller's lab has already developed millimeter-sized glucose sensors, which are currently being commercialized by a company called TheraSense in Alameda, California. And the new biofuel cells may one day keep such implantable sensors running for days to weeks at a time.

    Like traditional fuel cells, biofuel cells use catalysts at two oppositely charged electrodes to strip hydrogen atoms of their electrons and then combine the leftover hydrogen ions with oxygen to form water (see figure). The siphoned-off electrons are then used to do work. In traditional fuel cells, reactants at the two electrodes are kept apart by a thin plastic membrane. But such membranes would be impractical to make on the size scale of biofuel cells, so Heller and other teams have settled on another approach: They use enzymes to carry out the reactions and tether those enzymes to the two different electrodes to ensure that the proper reactions occur at the right spots. At the negatively charged electrode, or anode, copies of an enzyme called glucose oxidase strip electrons from hydrogen atoms on glucose, converting the sugar molecule to gluconolactone and a pair of hydrogen ions. These ions then travel to the positively charged electrode, or cathode, where an enzyme called laccase combines them with oxygen and electrons to make water. The tethers are made of osmium-containing polymers that ferry electrons between the electrodes and enzymes.


    By drawing fuel from the body and processing it with enzymes, researchers hope to build fuel cells that imitate the power plants in living organisms.

    Heller's cells do have their drawbacks. Because laccase enzymes typically work best in environments much more acidic than the neutral pH of blood, laccase-based fuel cells implanted in the body likely wouldn't produce much power. “Nature didn't evolve proteins to work with circuitry,” says Tayhas Palmore, a chemist at Brown University in Providence, Rhode Island.

    But Palmore has been working to improve matters here as well. At a fuel cell conference in Washington, D.C., last month, Palmore reported that her group had used standard molecular biology techniques to reengineer the laccase enzyme so that it retains about 50% of its activity at physiological pH. And Palmore and her colleagues are now working on incorporating the reengineered laccase into a prototype fuel cell that could extract power from circulating fluids such as blood.

    All biofuel cells still face considerable challenges, however. Most important, blood and other complex bodily fluids contain numerous compounds that can deactivate or block the enzymes essential to fuel-cell function, causing them to stop working within hours or days. But if researchers can improve their stamina, biofuel cells could pave the way to a new generation of implanted devices powered by the body itself.


    The Battery: Not Yet a Terminal Case

    1. Joe Alper*
    1. Joe Alper is a writer in Louisville, Colorado.

    The power demands of portable electronics may seem insatiable, but the venerable battery still has a few tricks to keep it a key player in the digital revolution

    The alkaline power packs that today you pop into your electronic gadgets may not look much like the column battery developed by Alessandro Volta 200 years ago, but they are essentially the same technology. And even though the performance of batteries has improved greatly, digital electronics is evolving at such breakneck speed that these chemical power plants are struggling to keep up. “Batteries have become the showstopper in today's wireless world,” says Krishna Shenai, a power engineer at the University of Illinois, Chicago. “They don't provide enough power, they're too heavy for the power they do provide, and they don't last long enough to meet the demands of the next generation of portable digital devices.” According to John Hadley, who oversees alkaline battery research at Rayovac in Madison, Wisconsin, “there are personal electronic devices that have already been developed but that need better batteries from us before consumers will be satisfied with them.”

    It seems inevitable that fuel cells will soon take over many of the jobs that batteries now do (see p. 1222), and increasingly photovoltaic cells and even clockwork are muscling in on their territory. But the chemists and materials scientists who work in the field say they still have a few tricks up their sleeves. Complex ceramic electrodes and solid electrolytes made from conducting polymers may keep rivals at bay for years yet. “I don't think we've really scratched the surface of what it's possible to do with battery chemistry, and yet we're already far ahead of where we were just a few years ago,” says chemist Jim McBreen of Brookhaven National Laboratory in Upton, New York.

    The battery industry's secret weapon is lithium, a metal that combines one of the largest electromotive forces in nature with one of the lowest densities of any metal. The big problem is that lithium metal is also immensely reactive: It catches fire when exposed to even the smallest amount of moisture and will oxidize virtually any liquid electrolyte.

    Current fashion.

    In a pure lithium battery, metal ions in the outer cathode are oxidized, releasing an electron into the circuit and freeing a lithium ion to migrate to the anode.


    Despite these challenges, the first generation of this type of battery, known as lithium ion batteries, is already in use in watches, flash cameras, and the newest rechargeable electronic devices. These batteries pack three times more energy into a given volume than a conventional alkaline battery and can be recharged an almost unlimited number of times. Unlike conventional batteries (see sidebar), lithium ion batteries don't use a redox reaction to generate electricity. Instead, lithium ions shuttle back and forth between the anode and cathode, forcing electrons to move with them.

    In currently available lithium ion batteries, the anode consists of ultrapure graphite, which absorbs lithium ions, one per six-carbon ring. Oxides of cobalt, nickel, or manganese form the cathode. When the battery discharges, lithium ions exit the graphite anode, migrate through the electrolyte, and form chemical complexes with the metal oxide within tiny channels in the cathode's physical structure. Applying an opposing voltage forces the ions back to their starting point, recharging the battery. Lithium's high reactivity and the need to exclude moisture mean that lithium batteries are very expensive to make, however.

    To justify the costs, designers want to boost the batteries' performance still further. Their first target for improvement is the anode. Graphite is good for this job because lithium slips easily between its parallel sheets of carbon rings. The problem is that it takes six carbon atoms to accommodate one lithium ion, wasting space. “We'd like to develop materials that can pack more lithium into a given volume,” says Gerald Caesar, who manages battery and fuel-cell research for the Advanced Technology Program at the National Institute of Standards and Technology in Gaithersburg, Maryland.

    To accomplish this, researchers are looking for metallic composites that absorb lithium ions. At T/J Technologies in Ann Arbor, Michigan, for example, chemists have found that nanoparticles of various lithium-tin alloys can absorb and release 2.5 times more lithium than a given volume of graphite. Batteries made with tin-alloy anodes store nearly three times more charge than those with a graphite anode, and the tin-based materials cost far less than graphite, too.

    Another line of attack is to improve the electrolyte, which must exclude even the smallest trace of water to avoid explosion. “There are so many problems with today's nonaqueous electrolytes that it's amazing that lithium ion batteries are as good as they are,” says Brookhaven's McBreen. One of the main problems is that positively charged lithium ions and their negative co-ions in the electrolyte do not separate well in nonaqueous solvents, so the lithium ion ends up dragging its co-ion like a ball and chain. “The harder it is for lithium to move from one electrode to another, the poorer the performance of the battery,” says McBreen, whose group has developed different additives to ease that congestion. One, a fluorinated isopropyl boron compound, enhanced the conductivity of the standard ethylene carbonate-dimethyl carbonate electrolyte 100-fold by binding to the co-ions, thereby giving the lithium ions more freedom to move. A battery made with this electrolyte kept its performance over 50 discharge-charge cycles. Longer-term tests are now under way.

    Migrating species.

    In a lithium ion battery there is no redox reaction. Ions shuttle between cavities in the graphite anode and metal oxide complexes in the cathode.


    Ultimately, for reasons of cost, weight, longevity, and safety, manufacturers would like to do away with liquid electrolytes. Dozens of groups worldwide are racing to develop suitable conductive polymers. An intermediate step, made by researchers such as Kyoung-Hee Lee and his colleagues at Samsung SDI in Chungchongnam-Do, Korea, is to form a hybrid: cross-linked polymers in the presence of a liquid electrolyte. The Samsung team found that such materials were 100 times as conductive as the liquid electrolyte alone. Test batteries made from the flexible gellike electrolyte were, in Lee's words, “very encouraging as candidates for a practically useful lithium ion cell.” Teams at Brookhaven and the University of Rome are also working on mixed polymer-liquid electrolytes.

    Removing liquid altogether has proven difficult because it's not easy getting lithium ions to move smoothly through a solid polymer. But it is proving worthwhile for specialized applications. Quallion, a maker of specialty batteries in Sylmar, California, is trying to develop lithium ion batteries the size of a large grain of rice to power implantable nerve stimulation devices to treat conditions such as Parkinson's disease and urinary incontinence. Robert West of the Organosilicon Research Center at the University of Wisconsin, Madison, is helping the company with electrolytes. West says that conductive silicon- and oxygen-based polymers known as polysiloxanes are soft and pliable at room temperature, and they have among the highest free volumes of any polymer, “which means there would be plenty of room for lithium ions to travel between electrodes.”

    West and his team have now prepared several polymer electrolytes that Quallion has incorporated into prototypes. The best polymers have proved almost as conductive as liquid electrolytes. Quallion is now working out how to make commercial quantities of the tiny batteries. “We're dealing with an entirely new set of physical parameters in trying to make a battery this small,” says Wendy Wong, project manager at Quallion.

    Further in the future, battery designers would like to up the lithium stakes by moving from lithium ions to anodes made of lithium metal, which would pack even more power into a given volume. Again, finding a suitable solid electrolyte is the key. Chemists Mason Harrup, Thomas Luther, and Frederick Stewart of the Department of Energy's Idaho National Engineering and Environmental Laboratory in Idaho Falls have created solid electrolytes using phosphorus-and-nitrogen-based polymers known as polyphosphazenes, which can easily pass lithium ions between their chemical groups. By combining the polymer with a ceramic, compressing it, and then spinning the mixture into a thin film, the researchers made sheets of solid polymer flexible enough to wrap around a metallic lithium anode. When combined with a commercially available cathode, the result is a battery with “outstanding power-to-weight performance over a very large number of discharge and recharge cycles in a flexible package,” Harrup says. Such a flexible battery would allow device manufacturers to cram the power source into odd-shaped nooks and crannies.

    Ultimately, the needs of electronic devices will outstrip the ability of batteries to adapt, but battery designers are darned if they're going to give up yet. Says Caesar: “There's only so many chemistries that you can use to make a battery, and we're trying to milk them for all they're worth.”


    Inside a Battery

    1. Joe Alper

    A typical AA battery is a miniature power plant that uses a chemical reaction to create an electric current. Every battery has a positive and a negative electrode immersed in an electrolyte that transports ions between them. Chemical reactions between ions in the electrolyte and the different metals of the two electrodes cause electrons to accumulate in the negative terminal, or anode. Connecting the two electrodes via an external circuit (which contains the device that needs current, such as a portable DVD player) allows the electrons to flow through the circuit from the anode to the positive electrode, or cathode. The more the battery is discharged, the more the anode becomes oxidized and the cathode becomes reduced. Eventually, one of the electrodes will no longer be able to react. Then the redox reaction stops; the battery is dead. In rechargeable batteries, applying an external voltage across the electrodes runs the redox reaction in reverse.
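    A cell's rated capacity, in milliamp-hours, measures how much charge the redox reaction can deliver before the electrodes are exhausted. A minimal sketch (all numbers below are hypothetical, chosen only for illustration) shows how capacity and current draw set a device's runtime:

```python
# How long a cell lasts under a steady current draw. Capacity in mAh is
# simply charge: mAh / mA = hours.
def runtime_hours(capacity_mAh, draw_mA):
    """Hours until the rated charge is exhausted (constant-current draw)."""
    return capacity_mAh / draw_mA

# A nominal 2500 mAh AA cell powering a device drawing a steady 250 mA:
print(runtime_hours(2500.0, 250.0))  # → 10.0
```

Real cells fall short of this ideal: usable capacity drops at high currents and low temperatures, so the figure is an upper bound.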


    Finding New Ways to Protect Drought-Stricken Plants

    1. Anne Simon Moffat*
    1. Anne Simon Moffat is a freelance writer in Chicago.

    With drought an ever-present threat, researchers are identifying genes that can help plants tolerate arid conditions in hopes of using them to produce hardier crops

    Dry fields and stunted plants from Maine to Georgia show that the eastern United States has been hit with the worst drought in more than a decade. In the grain-farming and livestock-grazing states of Montana, Nebraska, and Wyoming, ranchers are also confronting parched soils. Even the Midwest, home to 20% of the world's fresh water, is in trouble: Areas only 65 kilometers from the Great Lakes have dangerously low water tables.

    The global picture is just as bleak. Historically arid regions in Africa and the Middle East are expanding, and shortages of fresh water are appearing in places, such as the Asia-Pacific rim and Northeast Brazil, that once never doubted their water supplies. “Worldwide, drought is the biggest problem for food production,” says Jeffrey Bennetzen, a molecular geneticist at Purdue University in West Lafayette, Indiana. And that makes the quest for drought-resistant crops even more urgent, he says.

    In the last decade or so, researchers in the developing world have successfully coupled molecular marker technology, which allows a more precise identification of strains carrying desired traits, with classical plant breeding to yield more drought-tolerant varieties. For example, 1 year ago, South Africa's Ministry of Agriculture announced the release of maize ZM521, which produces yields up to 50% higher than those of traditional varieties under drought conditions. Many organizations, including the Consultative Group on International Agricultural Research, the International Maize and Wheat Improvement Center, and the European Union, contributed to the development of ZM521.

    More recently, plant researchers in the United States and Europe have taken a newer tack, focusing on identifying specific genes that help plants cope with arid conditions and, it turns out, with other stresses, such as cold temperatures and the high salt concentrations often found in irrigated soil. Indeed, from a plant's perspective, frost injury, which involves water leaving cells and forming ice crystals in intercellular spaces; salinity damage, which occurs when roots can't extract enough fresh water from salt-laden soils; and drought injury are all forms of dehydration. “If you increase a plant's tolerance to dehydration, it doesn't matter whether the stress comes from cold or drought, it will often help the plant survive,” says plant molecular biologist Michael Thomashow of Michigan State University in East Lansing.

    Researchers are now attempting to beef up the ability of crop plants to withstand dehydration by transferring in some of the genes they've identified. They've achieved some successes, albeit modest ones, with cotton and tomatoes, and they hope to extend the work to the most important cultivated crops: cereal grains.

    Salt lovers.

    Tomato plants carrying a foreign gene that protects their cells from salt-induced dehydration (top) thrive in a 200-millimolar salt solution, whereas unaltered plants (bottom) wither.


    The Rockefeller Foundation in New York City, among others, wants to guarantee that such advances also benefit developing countries. Two years ago the foundation approved a 10-year global effort for up to $50 million to improve drought tolerance in maize for Africa and in rice for Asia. Given the resistance that greeted plants genetically altered to resist pests or herbicides, it remains to be seen how well accepted drought-resistant plants produced by the same technology will be.

    Complex adaptations

    Over the years, researchers have found that plants have evolved several mechanisms to guard against drought damage. One is by producing “osmoprotectants,” compounds that shield proteins and membranes from the damaging effects of dehydration by forming a protective shell on their surfaces or by removing destructive hydroxyl radicals that would otherwise chop up proteins. Not all crop plants make osmoprotectants, which include sugars such as trehalose and certain amino acids and amino acid derivatives.

    Almost 10 years ago, Hans Bohnert of the University of Illinois, Urbana-Champaign, decided to see whether the genes for osmoprotectants could be inserted—and made to function—in plants that don't normally carry them. He took a gene that produces the osmoprotectant D-ononitol from ice plants, the durable groundcover that blankets California's highway medians, and introduced it into tobacco plants. The modified plants were better able to withstand stresses such as drought—but not enough to make a difference in the field.

    Still, the results provided a proof of principle. Since then, researchers in a dozen labs have introduced osmoprotectant genes into major crops, including potato, rice, canola, and, in Japan, the persimmon tree. Again, though, production of the compounds was too low to improve the plants' drought tolerance. University of Florida, Gainesville, plant biologist Andrew Hanson, among others, wants to solve this problem. “We need to diagnose what limits osmoprotectant levels in engineered plants and to use repeated cycles of engineering to overcome these limits,” he says.

    Even when drought-tolerance genes are present in plants, they are often poorly expressed during stress. A current strategy is to identify and manipulate the signaling pathways that trigger the potentially protective genes into action. “If genes are the hardware that seems to be present in all plants, what makes them tolerant are software differences,” says Bohnert.

    Indeed, recent research has shown that fine-tuning of regulatory systems can make or break drought tolerance. About 10 years ago, for example, Thomashow's group identified four genes involved in cold tolerance in Arabidopsis; at about the same time a team led by Kazuo Shinozaki of the Institute of Physical and Chemical Research in Tsukuba, Japan, and his wife Kazuko Yamaguchi-Shinozaki of the Japan International Research Center for Agricultural Sciences, also in Tsukuba, identified a group of Arabidopsis genes involved in drought tolerance. Sequence analysis showed that two of Thomashow's COR (cold response) genes are the same as the Shinozakis' RD (responsive to dehydration) genes.

    The functions of most of these genes are unknown, but in 1998, Thomashow, in collaboration with the late Peter Steponkus of Cornell University, demonstrated that one of the COR genes makes a cryoprotective protein that stabilizes membranes against injury caused by freeze-induced cellular dehydration. First, Thomashow and his team tried simply overexpressing this or other COR genes alone to improve the ability of Arabidopsis to withstand freezing, but they had little success. But turning up the activity of several cold-responsive genes at once worked better.

    In 1997, Thomashow and his colleagues identified a transcription factor, CBF1, that controls expression of a battery of COR and other cold-responsive genes in Arabidopsis. A year later, the researchers showed that overexpressing the CBF1 gene increases the freezing tolerance of Arabidopsis plants. And in similar experiments a few months later, the Shinozakis showed that they could increase tolerance to both frost and drought by overexpressing a second member of the CBF family of transcription factors, which they designated DREB1. The altered Arabidopsis plants grew poorly, however.

    Fields of … brown.

    Improving the drought tolerance of corn could make dried-out crops like this one a thing of the past.


    Thomashow hopes to extend that work to crop plants. As he reported at a recent meeting,* parts of the CBF/DREB1 system are widespread in the plant kingdom. He and his colleagues found CBF-like genes in canola, a commercial oilseed related to Arabidopsis. And there are indications that wheat, rye, and even tomato have parts of what Thomashow calls the CBF cold-response pathway. The goal now is to crank up the activity of these genes—without stunting the plants' growth.

    Evidence that this may in fact be possible comes from plant scientist Tuan-Hua David Ho of Washington University in St. Louis, working in collaboration with Cornell University biochemist Ray Wu and Min-Tsair Chan of the Institute of Agricultural Sciences in Taipei, Taiwan. These researchers attached the CBF1 gene to a regulatory sequence that causes it to be turned on when the temperature drops and then introduced it into tomato plants. As a result, Ho says, “this new generation of transgenic tomatoes has normal yields yet has still displayed a higher level of stress tolerance.”

    Salinity, often caused by irrigation of croplands, produces plant dehydration just as dangerous as that caused by drought itself. But progress is being made here, too. At the University of California, Davis, Eduardo Blumwald and his colleagues have been studying an Arabidopsis protein called AtNHX1 that can protect against this threat.

    Plant cells contain vacuoles that can sequester harmful materials. AtNHX1 is located in the membrane of one type of vacuole, where it pumps sodium ions from the cell cytoplasm into the vacuole. About 3 years ago, the Blumwald team showed that they could protect Arabidopsis from high salt concentrations by altering the regulatory sequence of the AtNHX1 gene so that it makes higher than normal amounts of protein.

    Last year, Blumwald extended these findings, showing that overexpression of the AtNHX1 gene also protects greenhouse-grown tomatoes from high salt concentrations. Indeed, the fruit grows in a 200-millimolar solution of salt, about one-third the concentration of seawater, far higher than that of the fresh water used for irrigation. (The results appeared in the August 2001 issue of Nature Biotechnology.) Field trials are planned for next year.

    Another way to protect plants from the drying effects of salt is to prevent it from getting into their cells in the first place. In results reported in the 20 November 2001 issue of the Proceedings of the National Academy of Sciences, Mike Hasegawa, Ray Bressan, and their colleagues at Purdue showed that they could increase the salt tolerance of Arabidopsis by inactivating the gene for a protein called AtHKT1 that transports sodium through the membranes of root cells.

    These genetic manipulations were aimed at directly preventing dehydration of plant cells, but other drought-tolerance schemes are being investigated. Plant biologists have known since the mid-1980s that stresses such as high light, drought, or salinity increase production of toxic oxygen species, such as peroxide, the damaging effects of which include disruption of photosynthesis.

    In work that began in the mid-1990s, plant molecular biologist Randy Allen and his colleagues at Texas Tech University in Lubbock introduced genes encoding two enzymes that mop up peroxides, ascorbate peroxidase (APX) and glutathione peroxidase, both together and separately, into tobacco plants. The researchers targeted the enzymes so that they would be active in the chloroplasts, where photosynthesis takes place. In lab studies described in the December 2001 issue of the Journal of Experimental Botany, the Allen team found that the altered tobacco plants maintained near-normal rates of photosynthesis under stressful conditions while photosynthesis in wild-type plants was reduced by one-half.

    Even before the tobacco studies were published, the researchers had begun work on cotton, an important crop plant in Texas. In 2000, a preliminary field trial of cotton transformed with APX showed that, under dryland agriculture, the altered plants produced 280 kilograms of cotton per hectare, whereas the wild-type yielded only 168 kg.

    Back to the future

    Other researchers have gone back to the basics: studying the physiological underpinnings of tolerance, work that could also provide new ways of boosting drought tolerance. For example, in studies of maize seedlings grown with limited water, plant biologist Robert Sharp of the University of Missouri, Columbia, found that roots adapt to the scarcity in several ways. The structure of their cells changes, permitting more longitudinal growth deep into soils. Also, the roots adjust osmotically, taking in more solutes and water. This response mechanism might be beefed up via changes in regulatory mechanisms, possibly further enhancing roots' vertical exploration of the soils, Sharp says.

    Dorothea Bartels of the University of Bonn, Germany, and others are seeking clues from plants with an extraordinary ability to deal with drought, such as the resurrection plant (Craterostigma plantagineum), which can become completely dehydrated but revives with moisture. One of its secrets for success is a revamped chemistry that allows cellular metabolism to go into an inert, glasslike state. Curiously, although the plant tolerates desiccation, it doesn't thrive in saline soils, which suggests “a unique metabolism for the plant,” says Bartels.

    Washington University plant biologist Ralph Quatrano and his colleague David Cove of the University of Leeds, U.K., have just started studies of Physcomitrella, a moss that tolerates severe desiccation. Mosses were among the first land plants and may provide a good source of genes needed for coping with limited water. “Just 6 or 7 years ago, given the then capacity to control transformation, I would have scoffed at the [value of] ‘weird and wonderful’ genes from resurrection grasses or mosses,” says Rockefeller Foundation scientist John O'Toole, who has developed research programs on drought tolerance for more than 25 years.

    Now, he says, identification of such genes is a promising next step, offering researchers a significant new opportunity for manipulating drought tolerance into crops. More knowledge of plants' diverse physiological adaptations to drought, coupled with an understanding of their genetic basis, should help world agriculture do its part to conserve an increasingly rare resource, fresh water.

    • *“Crop Productivity in Water-Limited Environments,” 31 October to 2 November 2001, Donald Danforth Plant Science Center, St. Louis, Missouri.


    Bigger Populations Needed for Sustainable Harvests

    1. Katie Greene

    The future of the New England fishing industry rests on the willingness of fishers, environmentalists, scientists, and the courts to find common ground

    In 1998, scientists at the Northeast Fisheries Science Center (NEFSC) in Woods Hole, Massachusetts, took on a huge challenge. They helped calculate the mass of Atlantic scallops needed to support a stable and productive scallop industry, something that hadn't existed for decades. Their answer: It would take a fivefold increase over the recent depressed mass of scallops in the Georges Bank—a prime fishing area 100 kilometers off the Massachusetts coast—to bring back what had once been the continent's most productive scallop fishery. The calculation was more than an academic exercise. The New England Fishery Management Council (NEFMC) worked the target into its scallop management plan in 1998, which helped guide the opening and closing of the region's scallop grounds over the next 3 years. That injection of science into government regulation has paid off, yielding a robust harvest that seems to be sustainable, says Steven Murawski, director of the center's fisheries population dynamics branch.

    Now scientists hope to replicate that success with similar calculations for the much larger groundfish industry, once the backbone of the New England economy. Groundfish harvests—which include 14 bottom-dwelling species such as flounder, cod, hake, and haddock—have been inching back since a crash in the mid-1990s prompted an economically wrenching series of fishing restrictions. Some populations are now nearing targets set in 1999, but new findings issued by NEFSC in March suggest that existing targets may be too low and that more appropriate goals would require restricting fishing until the mass of some stocks climbs above levels seen at any time in the past 40 years.

    Those findings could have a major impact on the livelihood of tens of thousands of people in the Northeast. The new biological targets have already been accepted—pending revisions and debate—as the goals for an NEFMC management plan that will guide regulators starting in August 2003. And they have influenced a court-ordered settlement between the National Marine Fisheries Service and conservation groups on new rules designed to help northeast groundfish recover. The settlement rules—approved last month by Judge Gladys Kessler, who tightened some provisions—slash the number of days commercial fishers can operate, mandate the use of coarser nets, increase the size of legally catchable fish, and add thousands of square kilometers to areas already closed year-round or during certain seasons. They went into effect 1 May.

    The compromise rules seem to please nobody. Although everybody agrees that the fish are coming back, opinions diverge over how high the bar should be set, and how to tell if it's been cleared. Indeed, three of the four conservation groups that took part in the negotiations declined to sign the settlement, complaining that the rules didn't go far enough. And most representatives of the fishing industry backed out too, arguing that the rules are far too restrictive.

    Out of bounds?

    Scientists say the mass of fish needed to rebuild troubled New England haddock fisheries exceeds the estimated stock that has existed at any time since 1930.


    NEFSC's estimates of the harvest rates needed to sustain a productive fishery lie at the heart of these disagreements. The center's experts marshaled a vast array of data to argue that the target biomass should be increased for more than half of the stocks. Those data include detailed catch statistics stretching back to the 1930s and 40 years of scientific surveys. For many species, the catch data include the ages of the fish caught—critical information for estimating how many young fish reach maturity each year.

    These so-called recruitment data are key to calculating the revised biomass targets. Older models don't use this information, and incorporating it raised the targets considerably—in some cases doubling the previous target biomass as the researchers reassessed the importance of strong recruitment. The result of this thorough comparison of many models and data, says Ransom Myers, a fisheries scientist at Dalhousie University in Halifax, Canada, better addresses the inherent uncertainty of projecting fish populations. Unlike previous targets, he says, the scientists have taken into account the shifting baseline problem—the idea that as time passes, fishers and researchers become used to populations that are slowly spiraling downward.
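    The logic behind recruitment-based targets can be sketched with a standard fisheries tool, a stock-recruitment curve fed into a biomass projection. This is only an illustration of the technique, not NEFSC's actual model, and every parameter value below is invented for the example:

```python
import math

# Beverton-Holt stock-recruitment curve: recruits rise with spawning
# biomass but saturate, so a depleted stock produces fewer young.
def beverton_holt(spawning_biomass, a=1.0, b=0.002):
    """Recruits produced by a given spawning biomass (arbitrary units)."""
    return a * spawning_biomass / (1.0 + b * spawning_biomass)

def project(biomass, fishing_mortality, natural_mortality=0.2, years=20):
    """Project biomass forward under a fixed annual harvest rate."""
    for _ in range(years):
        recruits = beverton_holt(biomass)
        survivors = biomass * math.exp(-(fishing_mortality + natural_mortality))
        biomass = survivors + recruits
    return biomass

# Lighter fishing pressure lets the same starting stock rebuild to a
# much higher equilibrium biomass:
print(project(100.0, fishing_mortality=0.1) > project(100.0, fishing_mortality=0.5))  # → True
```

Omitting the recruitment term (as older models effectively did) understates how much a large spawning stock boosts future year classes, which is why folding recruitment data in raised the targets.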

    Jeff Hutchings, also at Dalhousie, calls the higher biomass targets “very brave, and very necessary.” Striving for bigger biomass would prevent fish populations from sinking below a threshold that could trigger not only economic hardship but also fundamental changes in food webs that prevent the species from ever rebounding, he says. That's what happened with the Newfoundland cod population, still in dire straits despite a 10-year fishing moratorium.

    But fishers are skeptical of the higher biological targets and the corresponding bleak picture the report paints of stock status in the Northeast. “We're continually amazed by these reanalyses that push the numbers [for target population levels] up,” says Jim Kendall, director of the New Bedford Seafood Coalition and a member of NEFMC, which drafts fishing management plans. He and others also wonder if the seas can support such large increases in every type of fish. Murawski says that's a fair question but that the researchers feel justified in pushing all populations back to at least 1960 levels.

    Aside from their fundamental disagreement over population targets, both sides are at odds over the means to prevent overfishing. The new rules only limit fishing effort—how much time and energy is spent pulling fish out of the water—rather than regulating what is actually caught. Fisheries scientist Ellen Pikitch of the Wildlife Conservation Society says this means “you have to sit back and pray” that the fishers don't kill too many fish from species in trouble, preventing populations from rebuilding to the new biomass targets. Her group, one of the three that refused to join the settlement, criticized the rules for failing to impose a quota on each fish stock that would force fishing to stop once the catch limit is reached.

    But the fishing industries object strongly to quotas. They argue that quotas fundamentally change the market structure for fish, fall more heavily on small-volume fishers, and can reduce safety by encouraging fishers to fish in bad weather in order to beat other boats to the fish, says Anthony Chatwin, a fisheries scientist at the Conservation Law Foundation in Boston—the only conservation group that signed the settlement. They're also hard to enforce in real time, he says. “They've just finished tallying the fishing mortalities for 2001,” he notes.

    A quota system appears to be the most straightforward way to implement the science, admits Chatwin, “but we need something that works on the water as well as on paper.” Those who agreed to the settlement must now work on a longer term plan that will fully incorporate the new science, he says, and lay the groundwork for the greater harvests to come.


    Japan Asks Why More Yen Don't Yield More Products

    1. Dennis Normile

    Officials look beyond a sluggish economy to understand why R&D spending hasn't translated into greater success in the marketplace

    TOKYO—The importance of research is an article of faith within Japanese industry and government. “It is simply held to be true that investment in research and development is the biggest factor in keeping a company growing,” says Tatsuro Ichihara, vice president for research at Omron Corp. “New technologies are very important for the nation's economic growth,” adds Tagui Ichikawa, a science and technology policy official at the Ministry of Economy, Trade, and Industry (METI, formerly the Ministry of International Trade and Industry).

    But recent news is straining that belief in the power of R&D. Government officials are puzzling over an equation that shows a simultaneous rise in research spending and a decline in global competitiveness. Total R&D spending has been buoyed by steady increases in governmental research budgets that offset a tightening of corporate spending on research. In March, the government released figures showing that Japan's R&D investment in 2000 was a world-leading 3.18% of gross domestic product, far ahead of the 2.66% ratio in the United States. Unfortunately, the news barely preceded an announcement that the nation has slipped into its second recession in 5 years, and that many of the biggest corporate R&D spenders—including NEC Corp., Hitachi Ltd., and Toshiba Corp.—were among a near-record number of Japanese companies announcing losses for the fiscal year ending 31 March.

    Research managers say fixing the mismatch between spending and results will require help from both the public and private sectors. “The problem is that the fruits of the R&D are not going into commercialization,” says Ichikawa. Takemitsu Kunio, general manager of research planning for NEC Corp., admits that their research efforts haven't helped the company as much as they should. “Our research has often been out of touch with corporate goals,” he says.

    Customers first.

    Hitachi's Michiharu Nakamura says a healthy corporate research budget hinges on rising sales.


    Concerns about Japan's ability to commercialize high technology mark a dramatic turnaround from a decade ago, when Japan's high-tech companies seemed invincible. Private sector research was growing at 5% or more per year as corporations forged ahead on basic research and hired top American and European scientists to staff overseas R&D labs. But Japan's economic malaise has taken its toll, and private R&D spending has been nearly flat since 1997.

    The inability of the private sector to keep boosting R&D investment led to a dramatic expansion of government research. It was hoped that public sector basic research would provide discoveries that corporations could commercialize and that would eventually restart the nation's sputtering economy. But the economic payoff has been very difficult to see. The lackluster economic performance represents what Ichikawa calls “a valley of death” between basic research and applications, a gap that claims fundamental discoveries before they can become moneymaking products.

    The government has tried to do its part to bridge that valley with laws encouraging university and national lab researchers to file for patents and work with technology licensing offices to help them market their discoveries. It has loosened regulations preventing university professors from advising private companies and adopted measures to foster start-up businesses. METI also reorganized its own group of applied research labs to make their efforts more focused and subject to stricter evaluations. “We had lots of overlapping efforts, lots of projects, but no one could understand how they were doing,” says Ichikawa.

    But ultimately, responsibility for commercializing research rests with the private sector. And here, Ichikawa says, “the management [of corporate R&D efforts] has not been very effective.” NEC's Kunio admits that his company tended to “place small bets on every possible number at a roulette table.” In 1999 NEC tried a new tack, giving managers more responsibility for how their divisions performed. That led to greater involvement in setting research objectives. Although Kunio declines to identify which research efforts were dropped and which survived, he notes that the company's software division has cut the number of projects by half and doubled the number of researchers in each group in an effort to increase the payoff.

    A similar reorientation has been carried out at Hitachi, which has abandoned a linear model in which researchers handed off discoveries to production departments. Now researchers work directly with engineers and customers, says Michiharu Nakamura, who heads Hitachi's Research & Development Group, and all have a stake in finding a market niche.

    Even the optimists acknowledge that the reforms will take time, however. “Over the next 10 years you will start to see the fruits” of corporate research restructurings, predicts Omron's Ichihara. But government officials want to see corporate spending recover more quickly. METI's Ichikawa says they are particularly concerned because U.S. corporate R&D spending grew briskly through the latter half of the 1990s—by an average of 7% per year, according to the U.S. National Science Foundation—even as Japan's corporate spending flattened out (see graph).

    Trading places. U.S. industry now funds a larger share of overall domestic research than does industry in Japan, where the government is becoming a bigger player.

    One step that the government is considering would bolster tax breaks for new R&D investment. Companies can currently deduct up to 15% of any new spending, but the deduction cannot exceed 12% of the tax the corporation would otherwise pay, and spending must have risen for two consecutive years. METI would like to eliminate those conditions and possibly increase the percentage that can be deducted. It is also weighing new tax incentives to encourage university-corporate R&D cooperation. More generous tax breaks help U.S. companies save 10 times as much as their Japanese counterparts, Ichikawa notes. Details of the tax package will be worked out this summer after negotiations with the Ministry of Finance.
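    The current rule amounts to a simple cap-and-condition calculation. As a rough sketch (the function name and figures used in the example are illustrative, not from the article), the deduction described above could be computed as:

    ```python
    def rd_tax_break(new_spending: float, tax_due: float,
                     spending_rose_two_years: bool) -> float:
        """Sketch of the R&D incentive as the article describes it:
        a firm may deduct up to 15% of new R&D spending, capped at
        12% of the tax it would otherwise pay, and only if spending
        has risen for two consecutive years."""
        if not spending_rose_two_years:
            # Condition METI would like to eliminate: no deduction
            # unless spending increased two years running.
            return 0.0
        return min(0.15 * new_spending, 0.12 * tax_due)

    # Hypothetical example: 100 million yen of new R&D spending,
    # 200 million yen of tax otherwise due.
    print(rd_tax_break(100e6, 200e6, True))  # 15% of spending applies: 15,000,000
    ```

    Note that for a company with a small tax bill, the 12% cap binds instead, which is one reason METI regards the conditions as limiting the incentive's value.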

    Hitachi's Nakamura welcomes the tax incentives, even though their effect is likely to be more psychological than fiscal. More important, he says, “for R&D budgets to really rise, sales will have to rise.” Such a rise would go a long way toward restoring the country's faith in the value of research.


    Japan's Drugmakers Need a Boost to Compete

    1. Dennis Normile

    TOKYO—Japan's major manufacturers have long been able to match, if not top, the research investments made by their overseas competitors. But the same is not true of Japan's pharmaceutical firms, which are dwarfed by the behemoths that now dominate the global market. Japan's Ministry of Health, Labor, and Welfare estimates that on average domestic pharmaceutical firms spend only one-fifth as much on research, in absolute terms, as do their U.S. counterparts. And the gap seems to be growing. “If we don't rally [R&D activities], in 10 years Japan's pharmaceutical sector will be significantly weaker,” warns a new report from the health ministry, “Vision for the Pharmaceutical Industry.”

    The report, released last month, is short on specifics. But it calls for increased support for biomedical research at national laboratories and for money specifically for joint public-private research projects. One such project would be a major effort to identify and analyze disease-related proteins. The ministry would also like to create a national basic biomedical research institute, streamline the regulatory process for new drugs, and better coordinate public, academic, and industry research efforts.

    The plan gets a warm reception from Japan's pharmaceutical industry, but academic researchers are more cautious. Ken-ichi Arai, director of the University of Tokyo's Institute of Medical Science, says the ministry's plans are well intentioned and that the drug-approval process in Japan certainly needs to be reformed. But he worries that the plan blurs the distinction between investigator-initiated basic research and drug-development work, and that it doesn't address the conflict of interest inherent in having the ministry both promote drug development and judge the efficacy and safety of the drugs that result from that funding. “The player and the judge should be separated,” he says.

    A ministry spokesperson says that talks with industry and researchers are continuing, with the target being a concrete proposal by this summer for next year's budget.
