News this Week

Science  30 Jul 1999:
Vol. 285, Issue 5428, pp. 644

    Report of New Hepatitis Virus Has Researchers Intrigued and Upset

    Jon Cohen

    American Standard, a company best known for making toilets and air conditioners, has taken a beating from stockholders and financial analysts for starting a money-losing biotechnology company, DiaSorin. Critics may have been somewhat mollified on 20 July, however, when The New York Times revealed that DiaSorin scientists had found a putative new hepatitis virus. The new agent could be responsible for many cases of the disease that for decades have baffled scientists—and it could form the basis of a lucrative test to screen blood supplies.

    But the company's announcement, which came on the eve of disclosure of its second-quarter financial results, has attracted sharp criticism from hepatitis researchers. The findings have not been submitted for publication, nor have they been presented at scientific meetings. “It angers everybody,” says the National Institutes of Health's Harvey Alter, who provided the company with blood samples that were key to linking the virus to hepatitis. “It has no scientific validity at all to publicize things before you're ready to publish. All the scientists involved tried to stop it.”

    If they pan out, the findings could have important public health implications. Researchers have linked five viruses, designated hepatitis A through E, to the liver inflammation known as hepatitis. But they have long suspected that other pathogens also cause the disease, because many people who develop hepatitis test negative for all the known hepatitis viruses. Blood-screening practices for other pathogens have already reduced new cases of transfusion-associated, unexplained hepatitis to near zero—presumably because likely carriers of the unknown pathogens have been screened out. But Alter says about 10% to 20% of past cases of acute hepatitis remain unexplained, as well as 30% of chronic hepatitis cases and 50% to 60% of cases that lead to rapid liver failure.

    By fishing for new pathogens in the blood of injecting drug users who have AIDS, immunologist Daniele Primi and colleagues at DiaSorin's Biomedical Research Center in Brescia, Italy, may have identified the culprit. Alter is impressed with the results he has seen so far. “It has the feel of something that has potential causality,” says Alter. But he quickly adds: “There's a lot more that needs to be done, though.” For example, the company has yet to grow the virus in a laboratory culture, photograph it with an electron microscope, or show that it can infect chimpanzees and cause liver damage.

    Primi himself says he urged the company not to announce his findings through the media. “I don't understand myself why it came out like this,” says Primi, who adds that he has not submitted anything for publication because of patent concerns. The company, he says, filed a European patent application in November 1998, and corporate lawyers told him he had to wait 1 year before submitting a detailed scientific report about the virus and evidence of causality. “Unfortunately, science becomes a business,” says Primi.

    Company executives are defending their decision with unusual candor. They first mentioned the virus discovery in a cryptic press release issued in February, and they say they were duty bound to discuss it further when they disclosed their second-quarter earnings on 21 July. Company officials acknowledge that they approached The New York Times with the story shortly before that disclosure. “If we have a discovery that does have a significant impact on value, we have to share that with our stockholders,” says Raymond Pipes, a vice president of investor relations at American Standard Companies, which grosses more than $6 billion a year. Pipes says the company has promised to decide by the end of the year what it will do with DiaSorin, which last year grossed $98 million and posted a $21 million loss. “Almost certainly that will involve not making it part of American Standard.”

    Jorge Leon, chief of the company's medical division, says the discovery led DiaSorin to request a budget increase, which American Standard had to explain to investors. “This is a very expensive enterprise,” says Leon. “We still need to develop a larger effort to complete this work.” As for why the company did not present the findings via the traditional scientific routes, Pipes says “we have the DNA sequence of the virus and, frankly, we don't want to reveal that data in the public domain until we have to.”

    Leading hepatitis researchers say they do not know what to make of the press reports. “I think this is a terrible way of doing this,” says Leonard Seeff, a prominent hepatitis epidemiologist at the National Institute of Diabetes and Digestive and Kidney Diseases. “Things should be published in an appropriate forum so people can see the legitimacy of this.” Robert Purcell, a hepatitis virologist at the National Institute of Allergy and Infectious Diseases, is equally circumspect. “Why should I believe them?” he asks. “You can be fooled by these things.”

    Although the company will not discuss many details, Leon allows that the virus contains DNA, not RNA, and shares less than 50% homology with any known virus. Primi says he looked for the virus in injecting drug users who had AIDS because he thought they would be infected with many pathogens and their HIV-impaired immune systems could not keep those infections at bay, resulting in high levels of any mystery agent in their blood. “To be honest with you, this was a project on the side,” laughs Primi. “It was the kind of thing you do where if it comes out with something, fine; if not, who cares.”

    To find the virus, Primi and his co-workers used random stretches of DNA called primers to fish DNA out of the blood samples of both the injecting drug users with AIDS and healthy controls. From a patient with the initials SEN, they found a large amount of an unknown virus, which they called SEN-V. Preliminary tests for the same agent in blood samples from patients with non-A-E hepatitis, provided by Mario Rizzetto of the University of Torino in Italy, suggested that they might have isolated the elusive hepatitis virus. Alter then sent Primi coded blood samples from both healthy people and those with non-A-E hepatitis. Primi found the virus in 10 of 12 people with transfusion-associated non-A-E hepatitis, four of 50 transfused people who did not develop disease, and one of 49 people who were not transfused. DiaSorin says the researchers have now analyzed nearly 600 blood samples, finding additional evidence for the virus in 13 of 19 people with unexplained chronic hepatitis.
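    For readers who want those counts as rates, the arithmetic is simple; the short Python sketch below just tabulates the proportions quoted above (the group labels are paraphrases of the article's descriptions, and no statistical claim beyond the raw percentages is intended).

```python
# Detection rates implied by the counts reported above (illustrative arithmetic only;
# group labels are paraphrased from the article, not DiaSorin's own categories).
groups = {
    "transfusion-associated non-A-E hepatitis": (10, 12),
    "transfused, no hepatitis": (4, 50),
    "not transfused": (1, 49),
    "unexplained chronic hepatitis": (13, 19),
}

for label, (positives, total) in groups.items():
    print(f"{label}: {positives}/{total} SEN-V positive ({positives / total:.0%})")
```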

    If Primi and his colleagues have identified the non-A-E hepatitis virus, “it could potentially explain a lot of hepatitis,” says Alter. But “the clinical relevance will be whether this virus can be shown to cause chronic liver disease,” he points out. “Scientifically, I think it's sound. But it still can fall through. It was not a wise scientific decision to publicize this. It was an economic decision. I would have wanted a lot more data.”


    A Trigger of Natural (and Other) Killers

    Michael Hagmann

    When the immune system goes to war against invading pathogens or the insidious internal attack of cancer cells, it can deploy an arsenal of weapons. Some, like the antibody-producing B cells or the T type of killer cells, only attack when set off by a specific antigen. But others, such as the so-called natural killer (NK) cells, are far less picky; they eliminate a variety of infected or cancerous cells. How NK cells are triggered to mount such sweeping attacks, while remaining able to tell friend from foe, has long puzzled immunologists. Now, parts of that mystery appear to be solved.

    On page 727, a team led by Thomas Spies at the Fred Hutchinson Cancer Research Center in Seattle reports that it has identified a molecular trigger, a protein called MICA, that may help some NK cells pick their victims, as well as the receptor that recognizes it. Other receptors appear to be involved in NK activation, but their molecular triggers are largely unknown. The identification of MICA as the activator of the new receptor is particularly interesting because it suggests that the receptor is key to NK cells' specificity. MICA appears to be switched on in cancer cells or in cells under stress, as may happen when they are infected by a virus. And in a second report, on page 730, a team headed by Joseph Phillips and Lewis Lanier of the DNAX Research Institute in Palo Alto, California, identifies the internal signaling pathway through which the MICA receptor tells NK cells to activate their killing machinery.

    Making contact.

    The MICA/B proteins on infected or cancerous cells serve as destruction tags that can be recognized by NK cells. The DAP10 adapter protein then passes the killing signal to other proteins (p85 and p110) in a signaling pathway not used by other NK receptors.


    To cellular immunologist Lorenzo Moretta of the University of Genova in Italy, it “makes perfect sense” that MICA is a trigger for NK cells. “In normal cells MICA is not expressed; it's only turned on when something goes wrong,” he says. Together, the findings may someday help researchers design drugs that beef up NK responses to cancer or infections.

    The road that led to the current discoveries actually began with another cell type—the oddball γδ T cells, which constitute less than 10% of all T cells. The antigen-binding receptors (TCRs) of the much more common αβ T cells operate on a dual-recognition system; they are triggered by antigens displayed on the surface of antigen-presenting cells in conjunction with major histocompatibility complex (MHC) proteins. The γδ TCRs don't require MHC proteins for their activation, however.

    Indeed, researchers weren't sure what activates some γδ T cells, but about a year ago, Spies's team found that a population of γδ cells that live in the intestinal lining are triggered when their TCRs contact either of two MHC-related proteins with hitherto unknown functions: MICA and its close relative, MICB. These proteins, the Spies team showed later, seem to be switched on in many tumors of the lung, breast, and other organs—implying that some of the cell-killing γδ T cells act as tumor watchdogs by spotting MICA/B-bearing cancer cells through their antigen receptors.

    Because NK cells are themselves well known as tumor-cell killers, Spies and his colleagues, Stefan Bauer and Veronika Groh, wondered whether these or other immune cells would also be capable of recognizing MICA. To their surprise, the researchers found that MICA binds to almost all NK cells. Evidence that MICA marks the cells that display it for killing came when the researchers engineered cells that normally resist killing by NK cells—and that don't make MICA—to display the protein on their surface. NK cells, they found, made short work of these newfound targets.

    Bauer also bagged the gene for the MICA receptor, by “subtracting” the active genes in cells that didn't bind MICA from the active genes in cells that do. Out of five candidates, “only one made sense,” says Spies. This was the gene encoding a known NK cell protein called NKG2D, whose structure indicated that it is a surface receptor. Further work confirmed that NKG2D is indeed the MICA receptor. For example, Groh found that MICA-positive cancer cells could be protected from NK cells by antibodies against either NKG2D or MICA/B.

    Meanwhile, Phillips and Lanier were also looking for NK cell receptors, but they were taking a different tack. They were searching through DNA databases for proteins that could transmit NK-activation signals the next step of the way—from the cell surface receptor into the cell. Such a molecule would presumably bind to the active receptor and could thus serve as bait to trap it. Lanier had a clue about the kind of protein to look for, because last year, his team had cloned the gene encoding a protein called DAP12 that performs the same job for another receptor that activates NK cells, and Lanier suspected that a related protein might perform the function for other receptors. Feeding the search algorithms with a DAP12 sequence, Phillips and his colleagues came up with a new gene, DAP10, which resides right next to DAP12 on human chromosome 19. “So we thought this is worth looking at,” recalls Lanier.

    The researchers then generated antibodies against DAP10, with which they hoped to pull out any putative NK cell receptor associating with DAP10. They fished out a single protein, which turned out to be NKG2D, the same receptor Spies's group had found. Says Lanier: “They had a ligand, and we had an adapter. We met in the middle at the NKG2D receptor.”

    By identifying DAP10 as a part of the machinery that relays the MICA signal into the cell, Lanier and Phillips's work may also help explain an unusual feature of the NKG2D receptor. Other immunologists have found that NK cells are endowed with receptors that turn down their killer activity when they contact the body's own MHC molecules. This keeps them from attacking normal cells. But MICA binding to NKG2D can override this inhibition. It may be able to do this, Lanier says, because NKG2D's partner, DAP10, feeds into a different intracellular signaling pathway than the inhibitory signals.

    A good many questions still remain about NKG2D's functions, however. Because γδ T cells contain both it and a TCR, and both receptors seem to bind MICA, researchers wonder which of the two receptors is more important in activating these killer cells. Then again, says Spies, the answer may be simple. You may “need both receptors to elicit a strong response” in γδ T cells.

    Also unclear is how important the MICA system is for controlling tumors. As immunologist Adrian Hayday of the University of London points out, “a lot of NK cells will kill tumor cells in a culture dish, but they won't do a good job in [the body], because tumor cells seem to have a superb capacity to turn off immune cells.” He speculates that MICA recognition may serve mainly to ratchet up responses to pathogen-infected cells.

    Whatever the physiological role of the MICA/NKG2D/DAP10 complex eventually turns out to be, however, these molecules are clearly not the whole story of NK cell activation. Indeed, researchers expect more NK cell receptors to emerge from test tubes and gene databases. “There's more to come in NK cell activation,” predicts Eric Long of the National Institute of Allergy and Infectious Diseases. “This is a young field, and it's moving fast.”


    Support Builds for Allègre's Reforms

    Michael Balter

    PARIS—After more than 3 months of hearings, debates, lab visits, and electronic forums, two parliamentary deputies have delivered their diagnosis of France's ailing research effort and a lengthy prescription for reviving it. Their 140-page report, presented personally to French Prime Minister Lionel Jospin on 22 July, broadly echoes controversial reforms previously suggested by France's research minister, Claude Allègre. Like Allègre, deputies Pierre Cohen and Jean-Yves Le Déaut—both of whom are also active researchers—urge that France break down the barriers between universities and public research organizations, as well as boost both the number of young scientists and their research opportunities.

    Although many French scientists had resisted what they saw as Allègre's heavy-handed approach to reforming French research (Science, 18 December 1998, p. 2162), the initial response to the deputies' report—which contains 60 proposals urging change through mostly voluntary incentives—has been much more positive. Henri-Edouard Audier, a chemist at the Ecole Polytechnique near Paris who had often chided Allègre for trying to ramrod French science reforms, told Science that the proposals were “balanced, realistic, and effective.” If they are put in place, Audier says, “it will make a profound change in French research.” Harry Bernas, a physicist at the Orsay campus of the University of Paris, says that “Cohen and Le Déaut really listened” to the scientific community. Jospin's staff is now reviewing the recommendations, before the prime minister decides whether to put them in place. (Allègre himself is studying the report and has no comment on it yet, according to his spokesperson.)

    France's age pyramid.

    The graying of university and public agency researchers means that nearly half will retire by 2010.


    Even if the reforms do go forward, however, not everyone thinks they go far enough. Among those disappointed is Pierre Chambon, director of the Institute of Genetics and Molecular and Cellular Biology near Strasbourg, who had argued for much more radical changes, such as ending the “researcher for life” status of publicly funded scientists and requiring them to undergo periodic reviews (Science, 18 June, p. 1898). While praising measures aimed at young scientists, Chambon says he fears that taken as a whole the reforms “will not change much in France.”

    Cohen and Le Déaut conclude that although French research has “remarkable” potential, it faces serious problems in three areas. First, echoing Allègre, the deputies say that researchers rarely move between universities, industry, and public research agencies such as the basic science agency CNRS. Second, young scientists have great difficulty finding research jobs and achieving scientific independence. Third—a problem critics say Allègre did not fully address, and one most troubling in the deputies' view—is what they call the “age pyramid,” the alarmingly high percentage of older researchers among France's scientific corps (see graph).

    To address the first problem, Cohen and Le Déaut wholeheartedly adopt one of Allègre's main—and controversial—aims: a major rapprochement between the public agencies and the universities. “We must try to demolish the watertight partition” between the two sectors, Le Déaut told a press conference right after the meeting with Jospin. To induce researchers to cross the barrier, the deputies propose such measures as linking promotion and salary increases to mobility. Thus, a CNRS scientist who takes on a serious teaching load, collaborates with industry, or explores new research themes would move up the ladder faster.

    This idea sparked considerable resistance when Allègre first proposed it last year, but the deputies' many consultations with researchers seem to have softened opposition and allayed suspicions that Allègre planned to weaken the CNRS in favor of the universities. And Allègre's critics note that the new proposals stress voluntary inducements rather than top-down fiats. “Everyone will have double nationality in the public research agencies and the universities,” says Audier. “This is better than enforcing reforms by decree.”

    In addition to urging closer ties between the CNRS and the universities, the report documents that both sectors face a graying workforce. Almost half of France's university and public agency scientists will retire in the next 10 years, and in some fields, such as physics, 30% will retire by 2005. That leaves a dangerous gap both in the number of scientists and in their level of experience. “We must avoid this catastrophe,” Cohen told the press conference. Allègre's critics had long complained that he was neglecting this problem: The presidents of the CNRS's 40 scientific committees recently published an editorial on their Web site declaring that the current 3% recruitment rate, mandated by the ministry for the agency, would “simply mean the death of the CNRS.”

    To forestall this scenario, the deputies propose a number of measures, including a law mandating the recruitment of a minimum number of researchers each year to replace those who retire; they also suggest that newly minted Ph.D.s be hired even before their postdoctoral training. Unlike some of the other recommendations, such a recruitment drive would not require extra funds; indeed, if anything the new hires would command lower salaries than the senior scientists they are to replace. “It's about time,” says Bernas of the recruitment push, although he criticizes the report for not giving specific numbers of researchers to be recruited. And Chambon takes issue with the idea of hiring researchers before they have done postdocs. “This is nonsense,” he says. “You cannot judge people right after their thesis.” Chambon's own proposal—to create a corps of temporary postdocs, currently almost nonexistent in France—was rejected by the deputies.

    Another proposal to boost young university researchers did meet with unanimous acclaim, however. Cohen and Le Déaut want to create a flexible system of “time credits” that would allow assistant professors in their first 3 years on the job to cast off one-third of their heavy teaching loads, which amount to 192 hours of classroom time per year. Doctoral students would then take on these teaching duties. As might be imagined, this scheme, which would cost about $20 million per year, is being greeted enthusiastically by young university teachers. “To recognize the need to lighten the load of young researchers is an essential advance,” says physicist Isabelle Kraus, an assistant professor at the Louis Pasteur University in Strasbourg, who says this would also provide doctoral students with valuable teaching experience.

    The deputies seem confident that Jospin will realize the urgency of the situation and take swift action on their recommendations. “We are at a crossroads,” said Cohen, a point seconded by Le Déaut: “If we don't do something now … we will have our backs against the wall in the years ahead.”


    Kassirer Forced Out at New England Journal

    Eliot Marshall

    In the second big shake-up in scientific publishing this year, the editor of The New England Journal of Medicine (NEJM) has been asked to step down following a management dispute with the owner, the Massachusetts Medical Society. Editor Jerome Kassirer, 66, will go on sabbatical leave beginning 1 September, and his editorship will end with the expiration of his contract on 31 March. Kassirer confirmed the news in a phone interview but declined to comment other than to say that he felt “terrible” about what is happening.

    The Boston Globe learned about the shake-up last week and reported that Kassirer had been “fired.” But the medical society quickly issued a joint statement with Kassirer suggesting that the parting took place by mutual agreement because there were “honest differences of opinion between Dr. Kassirer and the medical society over administrative and publishing issues.” The two sides were “unable to find common ground,” the society said, and for that reason, “the best course of action” was to search for a new editor. The society will name an interim editor soon, possibly Executive Editor Marcia Angell.

    To some, Kassirer's dismissal looked like a reprise of the decision by the American Medical Association (AMA) 7 months earlier to fire George Lundberg, editor of The Journal of the American Medical Association (Science, 22 January, p. 467). Kassirer had less tenure than Lundberg—only 8 years compared to 17—but, like Lundberg, he clashed with the physician-executives who run the parent organization and lost. But Frank Fortin, spokesperson for the Massachusetts Medical Society, argues that the two cases are very different, noting that NEJM's owners never challenged Kassirer's editorial decisions: “This is not about the editorial independence or integrity” of the NEJM, he says. The disagreements had to do with business matters, Fortin explained, but he declined to discuss specifics. In contrast, AMA executive vice president E. Ratcliffe Anderson last January said Lundberg had been fired for publishing an “inappropriate” article on oral sex during President Clinton's impeachment trial.

    According to Marshall Kaplan, chief of gastroenterology at Tufts University New England Medical Center in Boston and an associate editor of NEJM, Kassirer disagreed sharply with NEJM's owners on plans to use the journal's name on other publications. Kaplan mentioned, for example, that the society recently bought Hippocrates, a popular journal for physicians, and that it had plans to develop new publications for patients similar to Heart Watch, a newsletter it now publishes. Kaplan said he and “most of the editors” feared it would “dilute” the reputation of the NEJM to place its name on publications that are less rigorously reviewed. But the medical society, he believes, has decided to increase its revenues to help pay the mortgage on “luxurious” new headquarters it built in the Boston suburb of Waltham. The NEJM staff, now ensconced near Harvard Medical School in Brookline, is not eager to relocate to the new building, which opened 2 weeks ago.

    Like others, Kaplan described Kassirer as a “very successful editor.” Massachusetts Medical Society president Jack Evjy also praised Kassirer in a prepared statement last week, saying the editor had redesigned the journal, shortened the turnaround time for manuscript review, and rapidly informed doctors of new medical developments.

    But many people were dismayed by what they interpreted as a loss of editorial authority. Epidemiologist Walter Willett of Harvard School of Public Health in Boston says he thinks the society “views the journal as a cash cow and wants to milk it even harder.” Richard Horton, editor of The Lancet, says he thinks the Lundberg and Kassirer dismissals highlight “an acute crisis that is developing between the professional values of medicine and corporate values that have overtaken much of U.S. medicine in recent years.” Medical journals, he says, are sustained by the trust that readers place in them. Abruptly firing editors, he says, can “damage that trust.”


    Telling Pluto and Its Partner Apart

    Mark Sincell*
    *Mark Sincell is a free-lance science writer in Houston, Texas.

    Scientists have added another compound to the list of organic molecules detected on the solar system's coldest planet. Spectroscopic images show that Pluto harbors ethane, according to astronomers at Japan's Subaru Telescope on Mauna Kea, Hawaii. Their images, released last week, add to the evidence that Pluto and its satellite Charon have very different compositions, suggesting that Charon formed in a tremendous interplanetary collision.

    Most astronomers think that our own moon formed when a passing chunk of rock collided with Earth, knocking huge pieces of its surface rock into orbit, which later coalesced to form the moon. Because the surface rocks that formed the moon have a different composition from the rest of the planet, the two bodies should have a marked difference in elemental composition—and that's just what geochemists find. Astronomers have speculated that Charon—which was discovered in 1978, is about half the size of Pluto, and orbits its parent about once a week—also formed in a catastrophic impact. But although the spectrum of sunlight reflected from both objects has shown that they harbor molecules like ice and methane, Pluto and Charon are so faint and close together that astronomers couldn't always tell which compounds are on which celestial body.

    A team of astronomers led by Ryosuke Nakamura at the 8.3-meter Subaru Telescope took advantage of exceptionally good atmospheric conditions on 9 June to snap the first ground-based telescope image that shows Pluto and Charon as separate bodies. Nakamura's team produced spectra from the two bodies that showed differences in composition known from earlier measurements: Pluto is covered in nitrogen ice, while Charon is coated with water ice. The spectra also revealed small amounts of ethane on Pluto, but not on Charon.

    Astrophysicist Alan Stern of the Southwest Research Institute in Boulder, Colorado, says the detection of ethane “is a nice confirmation of theoretical predictions” that the compound would be found on Pluto, either left over from the solar system's formation or formed by sunlight-driven reactions. But it is probably too early to decide how Charon formed. “Right now I'd probably come down on the side of the impact hypothesis,” says University of Hawaii, Manoa, astronomer Dave Tholen, “but more data will be necessary to try and tip the scales.” Nakamura's team will be returning to gather those data in the near future, after the telescope has been adapted to better correct for atmospheric blurring.


    NIH to Help Fund Big Physics Facilities

    Robert F. Service

    The National Institutes of Health (NIH) is getting into the synchrotron hardware business. Last week, NIH officials announced plans to spend $18 million this year to help pay for upgrades at California- and New York-based synchrotrons, which ricochet powerful beams of x-rays off materials to determine their atomic structure. NIH officials say they hope the money will help meet the burgeoning demand for “beamtime” among biologists looking to reveal the cell's secrets on the atomic scale.

    The new money pales in comparison to the nearly $175 million that the Department of Energy (DOE) spends every year to operate the nation's four principal synchrotrons. Still, NIH's new direction is “tremendously significant,” says Keith Hodgson, who heads the Stanford Synchrotron Radiation Laboratory (SSRL) in Menlo Park, California. A 1997 DOE advisory panel strongly backed a series of synchrotron upgrades (Science, 17 October 1997, p. 377). But the increasingly cash-strapped DOE has had a difficult time coming up with the extra money. “Given the difficult budget climate at DOE, I think the [upgrades] would have been difficult to pull off,” says Hodgson.

    NIH's support for the facilities comes in response to the mushrooming demand among biologists for access to the stadium-sized machines. According to a recent DOE advisory committee report, biologists have grown from about 5% of all synchrotron users in 1990 to nearly one-third in 1997. Among protein crystallographers, the growth is even more rapid: The fraction of protein structures solved with the help of synchrotron x-rays jumped from 16% to 40% in just 5 years. With the genome project churning out new protein sequences by the hundreds, demand is only projected to grow. “We said we have to do something about this,” says Marvin Cassman, who heads the National Institute of General Medical Sciences in Bethesda, Maryland.

    The first part of that something—$14 million of the $18 million of NIH funds—will kick off a $53 million upgrade of the central electron storage ring at SSRL, a project expected to take almost 4 years. When complete in 2002, the upgraded ring, which produces the tightly focused x-ray beams prized by users, is expected to generate 10 to 100 times its current x-ray power, enough to boost the facility from a “second-generation” to a “third-generation” machine. That newfound power will enable researchers to collect data faster and study smaller protein crystals than they can now, says Hodgson. NIH's other $4 million will support new x-ray detectors and storage ring improvements at the National Synchrotron Light Source at Brookhaven National Laboratory (BNL) in Upton, New York.

    Although NIH has long helped pay for analytical equipment used by biological user groups at synchrotrons, the new money marks the first time the biomedical agency has paid for general capital improvements at any of the facilities. But DOE physicist Bill Oosterhuis notes that the new upgrades will benefit more than just biologists. “Most of the improvements will improve the quality of the x-ray beams for all the scientists,” he says.


    Stem Cells as Potential Nerve Therapy

    Sabine Steghaus-Kovac*
    *Sabine Steghaus-Kovac is a writer in Frankfurt, Germany. With additional reporting by Gretchen Vogel.

    Last November, U.S.-based researchers announced, with much fanfare, that they had isolated an “immortal” line of human embryonic stem cells—a type of universal cell extracted from an embryo, which can, in the right environment, transform itself into any type of human tissue (Science, 6 November 1998, p. 1014). The press was soon full of predictions that researchers would be able to grow new tissue, or even organs, from these cells for transplantation into sick people. Already, evidence that such therapies may be possible is emerging.

    The best example so far comes from Oliver Brüstle of the University of Bonn Medical Center and his U.S. colleagues. On page 754, they report that they've taken embryonic stem (ES) cells from mice and coaxed them to form glial cells, a type of support cell in the brain that also produces myelin, an insulating sheath for neurons. When the researchers injected the glial cells into the spinal cords of rats with a genetic defect that leaves them unable to make myelin, the glia soon got to work coating the rats' neurons with myelin. “Our myelination experiments are a first example of an application of this [stem cell] technique to a neurological disorder,” Brüstle says.

    Developmental biologist Davor Solter of the Max Planck Institute for Immunobiology in Freiburg, Germany, describes the work as “promising,” adding: “It is nice that they put the pieces together and substantiated what everyone is believing”—that ES cells may have therapeutic uses. They might, for example, be used to treat people with multiple sclerosis or other conditions in which myelination is defective. Brüstle cautions, however, that much more work needs to be done with animal models before attempting such transplants in humans. But he adds, “if the experiments are successful in animal models, it is worthwhile considering whether the results are applicable to humans.”

    Consideration may be all the technique gets, however, because of legal barriers to this type of embryo-based research. In the United States, current law forbids the use of public funds for deriving stem cells from human embryos. In Germany, restrictions are even tougher. The human embryo is protected by law from fertilization to implantation, and any research on or with human embryos is prohibited unless the embryo is the immediate beneficiary. “Particularly in Germany it will be difficult to advance research in this field,” says Brüstle.

    For their experiments, Brüstle and his team took cells from 3.5-day-old mouse embryos and coaxed them to grow and bunch together into embryoid bodies, a first step toward differentiation, the process by which individual cells become committed to forming different cell types. Then the researchers cultivated the embryoid bodies in a medium that favors the survival of precursors to nerve cells and finally applied growth factors known to promote the proliferation of precursors to glial cells. Ultimately, the glial precursors formed the two major types of glial cells, known as oligodendrocytes and astrocytes. Five days later, the team detected that the cells were expressing CNP, a protein characteristic of the myelin sheaths of neurons.

    Earlier transplant studies had shown that oligodendrocyte precursors injected into animals suffering from myelin diseases had succeeded in coating the host animals' neurons. So Brüstle and his team transplanted their stem cell-derived oligodendrocytes into the brains and spinal cords of fetal and week-old rats who have the same mutation as humans with Pelizaeus-Merzbacher disease (PMD), a rare genetic disorder in which the myelin is defective. A few weeks later, the donor cells had generated numerous myelin sheaths on the rats' brain and spinal neurons. That suggests that similar transplants might help patients with PMD, which is usually fatal, or other demyelinating conditions.

    Such promise for stem cell therapies has prompted much soul searching on both sides of the Atlantic over current legislation banning embryo research. Earlier this month, the White House's National Bioethics Advisory Commission recommended that the U.S. government lift its restrictions on research on human embryonic stem cells (Science, 23 July, p. 502). In the United Kingdom, two advisory committees recommended relaxing the rules on embryo research, but the government decided in June to put off the decision for 6 months [ScienceNOW, 25 June].

    German researchers can't expect the green light for human ES cell research anytime soon, however. In March, Germany's main research funding agency, the DFG, published a policy statement on research with human embryonic stem cells, which advised German policy-makers not to change the embryo protection law now. “I do not think it is possible to change the embryo protection law within an adequate time period, even if this aim was desired,” says DFG president Ernst-Ludwig Winnacker.

    The DFG calls instead for more public discussion of the issue and suggests establishing a central commission to assess the ethical, legal, and scientific basis of research with human embryonic stem cells. The agency also wants to see uniform European standards in this matter, which will preserve fundamental values of human dignity and health. “Other countries in Europe are more liberal about research on embryos within certain time limits and permit individual decisions for research projects by bioethics panels,” says Jochen Taupitz of the Institute for German, European, and International Medical Law, Public Health Law, and Bioethics at the universities of Heidelberg and Mannheim. Winnacker is confident that some order can be brought to the situation. “We shall present our ideas [to the European Parliament]. Up to now we have had a good response from European committees.”


    Keck Gives $110 Million for USC Initiatives

    Jocelyn Kaiser

    The W. M. Keck Foundation, best known for funding giant telescopes that help scientists peer into the distant universe, has decided to invest $110 million to help life on Earth. Yesterday the foundation announced its second-largest grant ever to bolster the University of Southern California's (USC's) medical school and to advance the field of neurogenetics. Robert A. Day, president of the $1.5 billion foundation, which along with USC is located in Los Angeles, says he hopes the money will “help propel USC into the first ranks of medical research.”

    About $50 million of the grant, by far Keck's largest contribution to biomedicine, will fund studies of the genetic roots of diseases such as Alzheimer's, Parkinson's, and glaucoma, and the research will span everything from gene sequencing to mouse knockouts, drug development, and molecular epidemiology. Thirty researchers will be hired in the next 5 years to join 50 current USC faculty in the initiative, to be headed by USC cancer epidemiologist Brian Henderson.

    A former president of the Salk Institute for Biological Studies in La Jolla, California, Henderson says he plans to take advantage of the university's strengths in clinical medicine and epidemiology, including a long-term health study of a multiethnic group of 215,000 people. “We're really hoping to use the fruits of the human genome project,” Henderson says. A portion of the initiative will be housed in a $40 million neurosciences center to open in 2001. Neuroscientist Ira Black of the Robert Wood Johnson Medical School in New Brunswick, New Jersey, says the Keck grant should help USC move into the front ranks of neurogenetics now occupied by Johns Hopkins, Harvard, and other universities.

    The remaining $60 million will help USC expand what will be renamed the Keck School of Medicine, strengthening the school's endowment, scholarship funds, and faculty. “The money is going to move us toward the 200-plus people we need to be a top-ranked center,” says Henderson. The university has promised to raise $330 million to complement the grant.


    X-ray Observatory Takes to the Sky

    Ann Finkbeiner*
    *Ann Finkbeiner is a writer in Baltimore.

    Starting next month, a spacecraft called Chandra will image the hottest and most violent parts of the universe and inaugurate x-ray astronomy's golden age

    After a couple of decades, a couple billion dollars, and some deflected careers, U.S. x-ray astronomers finally have a telescope of their own. Last week, anticipation finally gave way to reality when the Chandra X-ray Observatory rode the Space Shuttle into space. Planning for Chandra began when the last American x-ray telescope, called Einstein, was launched in 1978; and by the time Chandra's 5- to 10-year lifetime is over, it will have cost $2.8 billion. The payoff will start to come in 2 weeks, when the telescope's doors will open and its instruments start recording x-rays from objects all the way back to the beginning of the universe. For x-ray astronomers, says Claude Canizares of the Massachusetts Institute of Technology, principal investigator for one of Chandra's instruments, “Einstein just cracked the door open. Chandra's a thousand times better.”

    Attention to details.

    A simulated image from Chandra reveals a jet of hot gas at the center of the galaxy M87 (left). The jet, known from other observations, is a smudge in an image from an earlier x-ray satellite, ROSAT (middle).


    As a result, Chandra, named for the late astrophysicist Subrahmanyan Chandrasekhar, is likely to be the revelation for x-ray astronomy that its sister space telescope, the Hubble, has been for optical astronomy. Like Hubble, it's been subject to delays that have taken a toll on the U.S. astronomical community, while European and Japanese x-ray astronomers pushed ahead with their own telescopes. But for now no one is inclined to carp. Once Chandra starts observing, says Riccardo Giacconi, who is generally acknowledged to be the father of x-ray astronomy, “I'll be so happy. I'll play the old man and bless everybody in sight.”

    X-ray astronomy began with the space age; because (happily for us) Earth's atmosphere blocks x-rays, they can be detected only in space. A small rocket experiment came first; then, in 1970, a team led by Giacconi, now director of Associated Universities Inc., launched a pointable instrument similar to a Geiger counter and detected x-rays coming from outside our galaxy. Einstein, a true telescope, followed in 1978—“it was a dynamic field where we worked fast,” says Giacconi—with the first images of the violence that emits x-rays, including supernova explosions, material disappearing into black holes, and clouds of fiery intergalactic gas.

    That was the warm-up; the 14-meter, 4.6-ton Chandra is the long-delayed ball game. Thanks to its exquisite mirrors and instruments, an x-ray telescope will for the first time be able to see detail as fine as most optical telescopes. X-rays don't reflect off ordinary glass mirrors but go straight through, so Chandra, like all x-ray telescopes, has mirrors consisting of sets of nested reflective cans arranged so that x-rays graze them rather than strike them head-on. Chandra's four sets of iridium-coated quartz mirrors, however, are smoother and lighter than any previous. “Chandra's got the best x-ray mirrors that have ever been made,” says Stephen Murray of the Harvard-Smithsonian Center for Astrophysics (CfA), principal investigator for Chandra's high-resolution camera. That camera is 20 times more sensitive than anything before; the imaging spectrograph has a resolution eight times finer.
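    As general background (standard grazing-incidence optics, not a detail reported here), the reason the nested shells must sit nearly edge-on to the incoming light is that x-rays reflect efficiently only below a small critical grazing angle. For a mirror coating whose refractive index is written as \(n = 1 - \delta\), the usual relation is

$$\theta_c \approx \sqrt{2\delta},$$

    and because \(\delta\) is tiny at x-ray energies, \(\theta_c\) works out to somewhere between a fraction of a degree and a few degrees, depending on photon energy and coating material. That is why grazing-incidence mirrors catch photons with a glancing skim rather than head-on.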

    Such finesse had a messy birth—a record of “de-scoping” (in NASA's jargon), missed deadlines, software bugs, and doors that stuck in tests. The Challenger disaster slowed the start of construction, and the program suffered a major cut in 1992, when designers stripped out mirrors and instruments to save money. Last April the explosion of an Air Force Titan 4 rocket, which has an upper stage like the one that boosted Chandra toward its final orbit after it was released from the shuttle, set back the schedule by a couple of weeks. And just last week a faulty sensor and bad weather delayed the launch two more times.

    All along, the pace has been slowed by the need to make Chandra near-perfect because its orbit is unreachably high, a third of the way to the moon. “Sending it high,” says Giacconi, “you don't have Earth in the way of observing,” nearly doubling the effective observing time. But the high orbit rules out repair missions by the Space Shuttle, and given the several rounds of fixes that Hubble has needed, that's a worry. “Of course I'm concerned,” says Giacconi, who directed the Space Telescope Science Institute (STScI) in Baltimore during Hubble's repairs. “But everything we've learned about redundancy and reliability” has gone into Chandra, he says.

    Whether Chandra's prolonged incubation has hurt x-ray astronomy is a matter of debate—“it's hard to estimate the effects of something that didn't happen,” says Wallace Tucker of CfA. In the 20 years between Einstein and Chandra, Europe and Japan have taken the lead. A 1990 German satellite called ROSAT used a telescope that improved on Einstein's to survey x-ray sources, and an instrument aboard the 1993 Japanese satellite ASCA is a prototype of one of Chandra's instruments. Both the Europeans and Japanese are now set to launch their own giant x-ray telescopes, which will not match the quality of Chandra's images but will surpass Chandra in some other ways (see sidebar). And although the instruments and mirrors aboard Chandra are state of the art, says Giacconi, the art is old. Murray concurs: “If we'd flown in '86 or '87 when we thought, we'd be working on new kinds of optics by now.”

    The fairest mirrors.

    Chandra's concentric quartz mirrors, which focus x-rays that graze their surfaces.


    Gordon Garmire of Pennsylvania State University in University Park, principal investigator for the imaging spectrograph, points out a more human cost: “If I'd known it would be this long, I might not have stuck with it.” Ethan Schreier, an x-ray astronomer who didn't stick with it and moved on to STScI, notes that the delay is reflected in an aging community: “[It] doesn't get a lot of new blood. People dominating the field now are the ones who dominated it 20 years ago.” But CfA's John Huchra says that “once a telescope in space goes up, astronomers are back in fast. The field is pretty robust.” Indeed, 800 proposals came in for Chandra's first cycle of 200 observations. X-ray astronomers won't be the only ones using the data and images from Chandra, says Huchra: “Chandra will probably touch half of all astronomers in the country.” In anticipation, NASA is now inviting astronomers to write joint proposals for observations on both Hubble and Chandra.

    In one crosscutting effort, Chandra will map the million-degree gas bound within galaxies and clusters of galaxies, allowing astronomers to chart the distribution of much of the universe's matter and to understand the formation of galaxy clusters. Chandra's resolution is so good, says CfA's Tucker, “you'll see shock waves from galaxies falling into the cluster.” Chandra will also observe active galaxies, whose brilliance at radio and other wavelengths is fueled by gas falling into a mammoth black hole; Meg Urry at STScI hopes to find out whether active galaxies can thrive in isolation or whether they have to live in clusters of other galaxies to keep the black hole well fed.

    Garmire will use Chandra to reobserve a tiny patch of sky called the Deep Field, which Hubble observed in exquisite detail at optical wavelengths, seeing galaxies in the farthest reaches of the cosmos. “Lots of [optical Deep Field] objects are ho-hum things,” Garmire says, but the same field in x-rays should show galaxies filled with hot young stars, helping astronomers understand the universe's history of star formation. Chandra may also pick out black holes at that great distance, allowing astronomers to test the possibility that black holes are born small and have grown larger over cosmic history. And like every new instrument, Chandra is certain to find something completely unexpected, says Harvey Tananbaum, director of the Chandra X-ray Observatory Center: “My favorite observation is going to be the one I couldn't have told you about today.”

    Such far-reaching, serendipitous, interdisciplinary astronomy was the reasoning behind NASA's Great Observatories program, in which Chandra is the third of four. (The Compton Gamma Ray Observatory and Hubble preceded it, and a planned infrared telescope called SIRTF will follow.) These ambitious missions suffer from lengthy time scales and high costs, but Schreier says nothing else offers as good a chance for fundamental discoveries. Agrees NASA's Alan Bunner: “You can do breakthrough science with small missions if you're clever and lucky, but usually it takes breakthrough observatories.”

    For now, says Murray, “seeing the light at the end of the tunnel and knowing it's not a train, I'm so excited you can't imagine.”


    Other Eyes on the X-ray Sky

    Alexander Hellemans*
    *Alexander Hellemans is a science writer in Naples, Italy.

    For the next few months, NASA's giant new x-ray telescope Chandra will reign alone (see main text). But by next January, after launches by the Europeans and the Japanese, it will be one of a triumvirate of x-ray observatories, each with unique strengths.

    Whereas Chandra's forte is very high resolution imaging, capturing x-ray sources such as the hot gas between galaxies in detail never seen before, the European X-ray Multi-Mirror (XMM) mission will outdo it in sheer x-ray gathering power. And the Japanese Astro-E will be able to analyze very short wavelength, “hard” x-rays from the universe's most violent corners. “The three [spacecraft] complement each other very nicely,” says Steve Holt of the Goddard Space Flight Center in Greenbelt, Maryland, NASA's project scientist for Astro-E.

    Four telescopes in one.

    Europe's X-ray Multi-Mirror mission, due to be launched in December.


    XMM, a “cornerstone” mission in the European Space Agency's (ESA's) Horizons 2000 program, is scheduled for launch on 15 December atop an Ariane 5, Europe's new heavy-duty rocket. The craft's size—a length of 10 meters and a weight of 3.9 tons—approaches that of Chandra, and it will follow a similar orbit, a high, eccentric path taking it as far as 114,000 kilometers from Earth, letting it view a single target continuously for a day or more. Sometime in March the $640 million craft (the cost figure includes launch and operations for 2 years) will make its first observations. Like Chandra, it will collect x-rays with conical mirrors shaped so that photons will graze the surfaces and be funneled to a camera and a spectrograph. But XMM's designers opted for sensitivity over ultimate resolution.

    The craft carries a bundle of three x-ray telescopes with 58 concentric gold-plated nickel alloy mirrors each, giving it a total mirror area of 4500 square centimeters—four times that of Chandra. Its ability to capture x-rays should enable XMM to find 100 new x-ray sources in each patch of sky it examines, estimates Robert Lainé, ESA's project manager for XMM. Because it will bring in such a large haul of photons, XMM should also excel at spectroscopy—in which the x-rays are analyzed by wavelength—and at detecting the subtle x-ray flickers that might, for example, be the signature of a black hole. And while XMM's x-ray telescopes capture images and spectra, a fourth, 30-centimeter optical telescope will observe the same object in visible light. “So if we observe a fluctuating source, like a pulsar, we can see if it pulsates simultaneously in the visible, in x-rays, and what the spectral contents of the source are,” says Lainé.

    In late January 2000, the third member of the x-ray triumvirate, Astro-E, will be launched from the Kagoshima Space Center in Japan aboard an M-5 rocket. The $200 million craft, a joint project of NASA and Japan's Institute of Space and Astronautical Science, will orbit just 550 kilometers up. The low orbit means that the spacecraft will not be able to observe most objects for more than about an hour before Earth blocks its view or its instruments have to be shut down as it passes through the radiation belts near Earth.

    Like XMM, Astro-E carries multiple telescopes—four of them, each containing 130 nested, foil-like mirrors, making a total collecting area twice that of Chandra's mirrors. Astro-E won't have anything like Chandra's eye for detail, or even XMM's; its strength will be analyzing the hard x-rays given off by the turbulent centers of “active” galaxies and by the debris clouds left by exploding stars. The key instrument, developed at Goddard, is a microcalorimeter—an array of 32 heat detectors, essentially—placed at the focus of one of the telescopes. Operating at liquid-helium temperatures, the calorimeter can precisely determine the energy (equivalent to wavelength) of each hard x-ray photon by measuring the tiny amount of heat deposited when it strikes a cooled crystal.
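    The parenthetical “(equivalent to wavelength)” is simply the standard photon energy-wavelength relation; as a quick back-of-the-envelope reference (not a figure from the article),

$$E = \frac{hc}{\lambda} \approx \frac{1.24\ \mathrm{keV\cdot nm}}{\lambda},$$

    so a 10-keV “hard” x-ray photon has a wavelength of about 0.12 nanometers, roughly 4000 times shorter than green visible light.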

    Although some astrophysical events will play to the strengths of just one instrument, the three spacecraft will spend a lot of time looking at the same things. But x-ray astronomers welcome the overlap. Says Jeffrey McClintock of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, “They will milk different things out of these objects.”


    Gaining New Insight Into the Molecular Basis of Evolution

    Elizabeth Pennisi

    By tying mutations in the sequences of visual proteins to altered function, researchers get a handle on how those changes may influence fitness

    Every step in evolution, from a darkening of a moth's pigment to the development of the opposable thumb, is caused by a change in molecules. But biologists have rarely traced adaptive changes to their molecular roots in genes and proteins. Now Shozo Yokoyama, an evolutionary geneticist at Syracuse University in New York, has studied how natural selection works at the molecular level in a fish that is already celebrated in textbooks of evolution: the “living fossil” called the coelacanth.

    At a meeting* last month, Yokoyama and his colleagues described how they pinpointed the changes in visual pigment genes that enabled the coelacanth to see in the dim light of the deep ocean, 200 meters below sea level. And in an upcoming report in Genetics, the group describes changes in similar genes from a wide range of other animals, which may have enabled them to adapt to their particular habitats. “There's just a few changes being driven by selection, and it's hard to ferret those out,” says Charles Aquadro, an evolutionary geneticist at Cornell University in Ithaca, New York. “What Yokoyama has done is really clever.”

    Custom eyes.

    Amino acid changes yielded two pigments (circles) that are better adapted than the ancestral opsin (center) for deep-sea vision in the coelacanth.


    The work may have applications beyond evolutionary biology, in the practical realm of biotechnology and protein engineering. “If you want to be able to engineer enzymes to carry out novel reactions, what better way [to design them] than by looking at how nature did it,” explains Anthony Dean, a biochemist and evolutionary biologist at the University of Minnesota, St. Paul.

    Yokoyama decided nearly 15 years ago to focus on how the light-sensing pigments had changed through evolutionary time, after the first genes for human eye pigment proteins, which are called opsins, were cloned, an achievement that prompted the cloning of these genes from many more species. He wanted to see not just which changes took place during the evolution of a particular species but how each one affected the protein's function and the organism's ability to see. “It was very important to me to be able to manipulate these molecules,” he recalls.

    Other researchers had already devised one opsin assay. It involves putting an opsin gene, with or without specific mutations, into monkey kidney cells growing in laboratory dishes, giving the cells time to produce the opsin, and then adding the second component of the visual pigment, the light-absorbing chromophore, to the cells. Then the complete pigment can be purified and its light-absorbing properties tested.

    As Yokoyama described at the meeting, he decided to use this assay to look at opsins from the coelacanth as part of his study of the molecular evolution of vision. The coelacanth was one creature for which the opsin genes had not yet been identified, so Yokoyama's team used mammalian opsin sequences as probes to pull out similar genes from the coelacanth genome. This search picked up two genes belonging to the rhodopsin family, plus a third gene that appeared to be incapable of producing a complete protein.

    Rhodopsins are generally most sensitive to green light of 500 nanometers—a longer wavelength than that of the light penetrating down to where coelacanths live. To identify which amino acid changes in the coelacanth rhodopsins had nudged their peak absorbances to the shorter wavelengths, Yokoyama and his colleagues compared the sequences of the coelacanth rhodopsins to those of other fish that live closer to the ocean surface. They found two amino acid changes in each of the two coelacanth rhodopsins that seemed likely to underlie the shift in peak absorbance.

    The researchers went on to test their prediction by introducing mutations into the coelacanth rhodopsin genes to change those amino acids, one at a time, to those found in the typical fish rhodopsin. Measurement of the wavelength sensitivities of the normal and altered coelacanth rhodopsins told the researchers that their predictions were correct. Each mutation contributed additively to shifting the coelacanth opsins from their native sensitivity peaks—a wavelength of 485 nanometers for one and 478 nanometers for the other—to longer wavelengths. The coelacanth's distinctive amino acids apparently alter the fit between the opsin and the chromophore, which starts to vibrate when hit by light, and therefore affect the chromophore's responsiveness to particular wavelengths. Together, Yokoyama says, the amino acid changes enable the coelacanth “to see the limited range of color available in that environment.”

    After comparing the rhodopsin genes in coelacanths and other fish with the same genes in birds and reptiles, Yokoyama thinks that one of the changes in one gene occurred after coelacanths and other fish went their separate evolutionary ways but before coelacanths and legged animals split up. That same change—replacement of a glutamic acid by a glutamine—occurred independently in the second gene after the coelacanth diverged from this ancestor. Each coelacanth rhodopsin then underwent a second change; in one case an alanine became a serine and in the other, a methionine became a leucine.

    More recently, Yokoyama and his graduate student F. Bernhard Radlwimmer have laid the groundwork for more studies of how various opsins have specialized in the course of evolution. Previous work by Yokoyama and others pointed to five sites in the opsins where amino acid changes affect the absorbances of the so-called red and green pigments used to detect middle wavelengths. He and Radlwimmer have now cloned and sequenced the genes for those opsins in the cat, horse, gray squirrel, white-tailed deer, and guinea pig and, based on which amino acids they contain at those sites, estimated the light-sensing properties of each protein.

    Yokoyama and Radlwimmer then made the pigments in cultured cells and tested their predictions. “The additive effects of these amino acid changes fully explain virtually all the [peak light-absorption] values,” says Yokoyama. “We can predict what kind of vision [an animal] will have based on the [amino acids] at these five sites.” Indeed, says Hiroshi Akashi, an evolutionary biologist at the University of Kansas, Lawrence, “[the] work beautifully demonstrates that a few sites within proteins can account for most differences in the wavelength absorbance in visual pigments.” (These results will appear this fall in Genetics.) At this point, however, it's less clear how these changes improve each species' ability to see in its particular environment.
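    Yokoyama's five-site prediction amounts to additive bookkeeping: start from an ancestral peak and add a fixed wavelength shift for each diagnostic residue present. The short Python sketch below shows only that logic; the ancestral peak, the site numbers, the residues, and the nanometer shifts are placeholder assumptions, not the published figures.

```python
# Minimal sketch of an additive "five-sites" predictor for the peak
# absorbance of a red/green opsin. The ancestral peak, site numbers, and
# nanometer shifts below are placeholder assumptions, not the values
# published by Yokoyama and Radlwimmer.

ANCESTRAL_PEAK_NM = 560.0  # assumed long-wavelength ancestral pigment

# Hypothetical rule set: (site, residue) -> shift of the peak, in nanometers
SITE_SHIFTS = {
    (180, "A"): -7.0,
    (197, "Y"): -28.0,
    (277, "F"): -8.0,
    (285, "A"): -16.0,
    (308, "S"): -16.0,
}

def predict_peak_nm(residues):
    """Sum the per-site shifts onto the ancestral peak.

    residues: dict mapping site number -> one-letter amino acid code.
    """
    shift = sum(SITE_SHIFTS.get((site, aa), 0.0) for site, aa in residues.items())
    return ANCESTRAL_PEAK_NM + shift

# Two hypothetical pigments differing at sites 180 and 197
print(predict_peak_nm({180: "A", 197: "H", 277: "Y", 285: "T", 308: "A"}))  # 553.0
print(predict_peak_nm({180: "S", 197: "Y", 277: "Y", 285: "T", 308: "A"}))  # 532.0
```

    With experimentally calibrated shifts in place of these placeholders, the same additive sum is what lets the group estimate an animal's color sensitivity from its sequence alone.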

    For years, Dean says, too many evolutionary biologists have used gene sequence differences simply to assess the evolutionary distance between species, rather than trying to tie them to function. But Yokoyama, he notes, is in a small band of researchers who have taken the next step and begun to study evolution's course from a molecular perspective as well. Dean hopes the approach will spread, because tracing adaptation to molecular changes “allows evolutionary hypotheses to be tested far more rigorously than had been imagined.”

    • *The American Genetic Association met from 11 to 13 June in State College, Pennsylvania.


    And Now, the Asteroid Forecast ...

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    Astronomers have devised a scale to rate the danger posed by asteroids headed for Earth, comparable to the Richter scale of earthquake fame. The so-called Torino scale, which ranges from 0 (no collision) to 10 (certain collision causing Earth-wide devastation), was developed by Richard Binzel of the Massachusetts Institute of Technology and presented to colleagues during a June workshop in Turin (Torino), Italy. The International Astronomical Union endorsed it last week.

    The topic of asteroids is “prone to sensationalism,” says Binzel. Twice in recent years media hype erupted after astronomers discovered a rock that had a remote possibility of slamming into Earth (Science, 20 March 1998, p. 1843, and 23 April 1999, p. 565). “It's very hard to communicate extremely low probabilities to the general public,” says Binzel. “The new scale gives us a common lexicon.”

    The scale, which Binzel had been working on since 1994, takes into account the chances that an asteroid will hit as well as its size and speed relative to Earth. Torino scale values of 8, 9, and 10 refer to certain collisions, with local, regional, and global consequences, respectively. But “the average citizen shouldn't be concerned about an asteroid with a Torino value of 1,” says Binzel. The two recently discovered asteroids both would have been rated 1 when they were first discovered, but subsequent observations would have placed them firmly in the 0 category. Binzel says he was advised by science writers Kelly Beatty of Sky & Telescope and David Chandler of The Boston Globe. “In formulating the scale, we tried to be sociologists as well as scientists,” he says.
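    The scale's two inputs lend themselves to a quick illustration: the probability of a hit and the impact energy implied by an object's size and speed. The Python sketch below converts a diameter and velocity into megatons of TNT and then applies a toy 0-to-10 rating; the assumed density, the probability cutoffs, and the category boundaries are invented for illustration and are not Binzel's published definitions.

```python
import math

# Toy illustration of the inputs the Torino scale combines: hit probability
# and the impact energy implied by an object's size and speed. The density,
# probability cutoffs, and category boundaries are INVENTED for illustration;
# they are not Binzel's published definitions.

MEGATON_IN_JOULES = 4.184e15

def impact_energy_megatons(diameter_m, velocity_km_s, density_kg_m3=3000.0):
    """Kinetic energy of a spherical rocky impactor, in megatons of TNT."""
    radius = diameter_m / 2.0
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius ** 3
    return 0.5 * mass * (velocity_km_s * 1000.0) ** 2 / MEGATON_IN_JOULES

def toy_rating(hit_probability, energy_mt):
    """Crude 0-to-10 rating in the spirit of the scale (illustrative only)."""
    if hit_probability == 0.0 or energy_mt < 1.0:
        return 0                                    # no hazard worth noting
    if hit_probability < 1.0:                       # an uncertain encounter
        return 1 if hit_probability < 1e-3 else 4   # monitoring vs. real concern
    if energy_mt < 100.0:                           # certain hit, local damage
        return 8
    return 9 if energy_mt < 1e5 else 10             # regional vs. global devastation

# A 300-meter rock arriving at 20 km/s carries roughly 2000 megatons
energy = impact_energy_megatons(300.0, 20.0)
print(round(energy), toy_rating(1e-5, energy))      # ~2027 Mt, rating 1
```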

    Carl Pilcher of NASA's Office of Space Science calls the Torino scale “a major advance in our ability to explain the hazard posed by a particular [object].” But will astronomers adopt the new scale? “This will have to sink in a little bit,” says Tom Gehrels of the Lunar and Planetary Laboratory of the University of Arizona, who heads one of the projects searching for near-Earth objects. But he adds, “I think we ought to use it.”


    Glimmerings of Hope From the Bottom of the Well

    1. Jon Cohen

    The number of AIDS vaccines entering clinical trials is at an all-time low, but researchers are planning to begin testing new approaches soon

    Five years ago the AIDS vaccine field went into a tailspin when the two leading vaccines failed critical laboratory tests, and it still has not recovered. The number of new vaccines entering early testing in the U.S. trials network set up by the National Institutes of Health (NIH) has reached an all-time low; three trials were begun last year and none has been launched so far this year, compared with an average of six a year from 1990 through 1997. And earlier this month, researchers gave a decidedly mixed reception to results from preliminary trials of a combination vaccine—a strategy deemed more promising than the one that derailed 5 years ago. But some researchers now see glimmerings of hope.

    Science has learned that Merck & Co., a pharmaceutical powerhouse that dropped out of the HIV vaccine field in the early 1990s, is aggressively reentering the arena with plans to launch tests of two different vaccines before the end of the year. And researchers are experimenting with new combination vaccines that they hope to move into early clinical trials next year. AIDS vaccine researchers are especially encouraged by the rebirth of Merck's program. “The more [players] we can get in with good ideas, the more the whole field benefits,” says Donald Burke, who heads AIDS vaccine clinical trials at Johns Hopkins University. “The momentum right now is less than optimal.”

    The momentum virtually halted in June 1994, when NIH declined to fund large-scale efficacy trials of vaccines made by Chiron and Genentech. Both vaccines were based on genetically engineered versions of HIV's surface protein, gp120. The hope was that these viral proteins would raise antibodies that would attach to the virus in the bloodstream, before it infected cells, but NIH got cold feet when test tube experiments showed that the antibodies these vaccines produced could only stop laboratory-grown strains of HIV—not ones freshly isolated from patients (Science, 24 June 1994, p. 1839). (Genentech's vaccine now is in efficacy trials, funded privately by a new offshoot, VaxGen.)

    When it pulled back from supporting the gp120 vaccines, NIH had high hopes that another approach would soon be ready for efficacy trials. As Jack Killen, a top AIDS official at NIH, predicted at the time, “the very realistic likelihood is within 2 to 3 years we would be ready to go with a different concept.”


    The “different concept” Killen and others had in mind was a combination of a gp120 vaccine and a vaccine manufactured by Pasteur Mérieux Connaught that consists of various HIV genes spliced into a live, but harmless, canarypox virus “vector.” The logic behind this so-called “prime-boost” approach is that the two vaccines activate different arms of the immune system, which theoretically should work in concert to thwart HIV. The gp120 vaccine triggers antibodies, while the canarypox vaccine stimulates cell-mediated immunity, which occurs when the immune system dispatches cytotoxic T lymphocytes (CTLs) and other forces to rid the body of cells that the virus has infected.

    Two weeks ago, researchers at a conference on sexually transmitted diseases in Denver, Colorado, revealed results from the largest test yet done of a prime-boost vaccine. The study, sponsored by the National Institute of Allergy and Infectious Diseases (NIAID), began in May 1997 and involved 435 people, more than 80% of whom had a high risk of HIV infection because of drug use or sexual behavior. Although the study was not designed to determine whether the vaccines worked—it aimed to assess safety and the various immune responses triggered by the approach—researchers can glean hints of the vaccine's chances of success from the results.

    More than 90% of those who received a “prime” shot of canarypox and a “boost” of gp120 developed antibodies that, in a test tube, could stop a laboratory-grown strain of HIV, and some 30% produced CTLs against the virus. These results meet milestones that NIAID has said must be achieved before launching an efficacy trial (Science, 1 March 1996, p. 1227). “This trial is a necessary step toward initiating an efficacy trial,” says Peggy Johnston, head of NIAID's AIDS vaccine effort. “But,” she adds, “several questions remain.”

    One question is whether newer canarypox vaccines made by Pasteur Mérieux stimulate higher levels of CTLs. A comparative trial of these vectors should be completed this summer. Johnston says another small-scale trial of this concept is also needed to determine the best timing of the booster shot and whether a different gp120 vaccine might work better. So even if these tests pan out, an efficacy trial is at least 1 year away.

    Some AIDS vaccine researchers, however, have serious reservations about the results so far. Oxford University's Andrew McMichael, a CTL expert, is underwhelmed by the fact that killer cells were elicited in only one-third of the vaccinees. “Two-thirds of the people [may have had] no CTL response, and, if CTLs are important, they wouldn't be protected,” he says. Similarly, HIV antibody specialist John Moore of the Aaron Diamond AIDS Research Center in New York City has strong doubts about the value of the antibody response, which still only blocks laboratory-grown HIV. “It's a completely meaningless antibody response,” says Moore. “It just gives them a security blanket. They might as well not use [the gp120 boost] at all.”

    McMichael, with support from the privately funded International AIDS Vaccine Initiative (IAVI), plans next spring to start trials of a different prime-boost approach that he hopes will produce much higher levels of CTLs. He believes that priming with the poxvirus presents the immune system with too many proteins from poxvirus as well as HIV, so he intends to use a prime vaccine that contains HIV genes stitched into a stretch of DNA called a plasmid. He theorizes that this so-called DNA vaccine, which can enter cells and direct them to produce viral proteins, will focus the immune system's attention; he then hopes to boost these primed CTL responses with a modified vaccinia virus that holds HIV genes.

    Although Merck is sketchy about its vaccine plans, it, too, is focusing on triggering strong CTL responses. Emilio Emini, a virologist who heads the company's vaccine program, says that, like McMichael, the company is working on a DNA vaccine. It also is developing a live, but defective, viral vector that Emini declines to discuss publicly. “One of the reasons we've kept a low profile is we don't want to raise expectations,” he says. “The likelihood for failure is pretty high.” Then again, he says, Merck is putting a lot of resources into the project. “It's a big program for us.”

    Wayne Koff, who formerly headed the AIDS vaccine program at NIAID and now is scientific director at IAVI, hopes Merck is more committed to its vaccine program than in the past. More support from the pharmaceutical industry is sorely needed, says Koff, who notes that NIH has fewer trials under way now than he has seen in 10 years. “Right now we're at a nadir. But it's clear there will be a lot more vaccines in trials in a few years.”


    A Cooler Way to Balance the Sea's Salt Budget

    1. Richard A. Kerr

    Mineral-laden volcanic springs in the deep sea had seemed to explain the ocean's chemistry, but cooler springs away from the volcanoes may play a bigger role

    The hot springs and billowing black smokers of the deep sea looked like a spectacular answer to a long-standing mystery when they were discovered in the late 1970s. Perched along the crest of the volcanic midocean ridges, these spouts of mineral-laden, often blistering-hot water not only hosted a menagerie of bizarre animals but promised to be the missing factor that balances the ocean's chemical books. Seawater isn't simply river water concentrated by eons of evaporation; it contains too much of some minerals and too little of others. Ridge-crest hydrothermal activity—where seawater sinks into the crust, is heated and chemically transformed by hot rock, and then gushes back into the sea—looked like it might explain these disparities. But it now appears that a cooler, gentler interplay of water and rock may play a far bigger role in setting seawater's composition.

    On page 721 of this issue of Science, oceanographers Stephanie de Villiers of the University of Cape Town in South Africa and Bruce Nelson of the University of Washington, Seattle, present high-precision measurements tracing a plume of chemically altered seawater that includes water from warm springs kilometers away from the seething black smokers of the ridge crest. “This is a very exciting discovery,” says oceanographer Michael Mottl of the University of Hawaii, Manoa. “If proved to be correct, it will solve a lot of problems. The data point to the importance of hydrothermal activity other than the spectacular black smokers that have gotten so much attention.”

    Until the discovery of deep-sea hot springs, oceanographers had hardly a clue about how or where seawater took on its distinctive composition. Rivers carry in so much sodium, magnesium, and potassium that the ocean should be far richer in these elements than it is. Calcium presented the opposite problem. Shell-forming plankton appeared to be taking calcium and carbonate out of seawater and incorporating it into sediments twice as fast as rivers carry the metal into the sea. But black smokers seemed to be balancing the books. Seawater was sinking into the fractured ridge crest, picking up heat, calcium, and other elements from the rock, leaving behind its own magnesium, and rising back into the sea.

    But then surveys began suggesting that superhot water might not be the only factor controlling seawater chemistry. As Keir Becker of the University of Miami in Florida and Andrew Fisher of the University of California, Santa Cruz, will soon report, something like 10 times more fluid seeps from the flanks of the ridge than from the crest. Although this water interacts with the crust at lower temperatures—20° to 200°C, compared with 350°C for a black smoker—it too might deposit some minerals and pick up others, transforming the ocean's chemical composition.

    Compared to black smokers, with their heat and dramatic mineral formations, these warm springs are hard to find. So de Villiers and Nelson developed a procedure for analyzing seawater using mass spectrometry that was precise enough to pick up telltale variations in the chemistry of deep waters that have flowed across a ridge. Catching a ride on a research ship that happened to be crossing the East Pacific Rise at 17.5°S, de Villiers collected seawater from the surface to near the bottom above and to the west of one of the most active midocean ridges in the world.

    Armed with the high-precision technique, de Villiers and Nelson found a plume of water trailing off to the west of the ridge in which magnesium was depleted by as much as 1% and calcium was enriched, just as expected from hydrothermal alteration. To work out the proportions of plume water from black smokers and tamer warm springs, they checked helium isotope measurements made near their sites by other researchers. The lighter isotope of helium, helium-3, is a signature of black smokers, because only the hottest water manages to extract helium-3 from the newly formed rock of the ridge crest.

    Helium-3 was scarce in the plume given the amount of missing magnesium, leading the researchers to estimate that “the low-temperature flux of magnesium is three to 10 times greater than the high-temperature flux,” de Villiers says. She concludes that far more chemical processing takes place in the warm rock within 2 to 10 kilometers of the central ridge axis than at the ridge axis itself.
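    The logic of that estimate can be captured in a few lines: helium-3 tags the hot, ridge-crest contribution, so the share of the magnesium deficit it can explain is subtracted out and the remainder is assigned to cooler springs. The Python sketch below uses invented numbers purely to show the bookkeeping, not the actual measurements.

```python
# Back-of-the-envelope version of the partitioning argument: helium-3 tags
# the hot, black smoker contribution, so the magnesium deficit it can account
# for is subtracted out and the remainder is attributed to cooler off-axis
# springs. All numbers below are invented purely to show the bookkeeping.

def partition_mg_deficit(total_mg_deficit, observed_he3, he3_per_mg_hot):
    """Split a plume's magnesium deficit into hot and cool contributions.

    total_mg_deficit : magnesium missing from the plume (arbitrary units)
    observed_he3     : helium-3 excess measured in the same plume
    he3_per_mg_hot   : helium-3 delivered per unit of magnesium removed by
                       high-temperature vent fluid
    """
    mg_from_hot = observed_he3 / he3_per_mg_hot
    mg_from_cool = max(total_mg_deficit - mg_from_hot, 0.0)
    return mg_from_hot, mg_from_cool

# If hot fluid alone should have delivered five times more helium-3 than is
# observed, most of the deficit must come from the cooler springs.
hot, cool = partition_mg_deficit(total_mg_deficit=1.0, observed_he3=0.2,
                                 he3_per_mg_hot=1.0)
print(cool / hot)   # low- to high-temperature flux ratio -> 4.0
```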

    The report has elicited a mix of caution and guarded enthusiasm. “That's a very high precision she's claiming for an element [magnesium] that is very hard to measure with a mass spectrometer,” says chemical oceanographer John Edmond of the Massachusetts Institute of Technology. “I would be cautious.” Mottl agrees that more profiles need to be analyzed by a number of groups, but he feels de Villiers and Nelson “have found something very important.” He adds that much of the water in their plume could be coming from even cooler springs, farther from the ridge axis, than de Villiers thinks. “I certainly wouldn't rule out that she's got a substantial input from the [ridge] flank,” he says.

    If de Villiers and Nelson's technique for tracing plumes of chemically altered seawater to their source pans out, he says, “it will probably be the best way to get the hydrothermal flux” of salts coming off the midocean ridges. The final answer about how the sea gets its salt should be satisfying, if not as spectacular as it once seemed.


    Switchable Reflections Make Electronic Ink

    1. Meher Antia*
    1. *Meher Antia is a staff writer in Vancouver, Canada.

    An electronic version of paper and print would combine the optical principle used in reflective signs with a scheme for squelching the reflections at will

    Despite the fantastic array of technologies now put to use to display information, the old-fashioned printed word shows little sign of fading away—people just love the comforting look of black ink on white paper. Display researchers have come up with a host of schemes to mimic that partnership in a rewritable medium, many relying on small particles of dark pigment moved around in a liquid suspension by electric fields. So far, most of these electronic ink technologies lack the right mixture of properties, such as low cost, high contrast, and fast rewrite speed. But Lorne Whitehead, a physicist at the University of British Columbia (UBC) in Vancouver, and his colleagues have devised a new electronic display principle that they believe may have the speed and contrast to take some of the shine off ink and paper.

    Their technique forms black characters on a white background by switching on and off the reflection of light from a screen. It combines a century-old optical principle called total internal reflection (TIR)—the same principle that makes stop signs bright—and a technique for turning off TIR at will. “The science involved is delightfully simple,” says physicist George Beer of the University of Victoria in British Columbia. “The principle has been shown to work.” He cautions, however, that the team has not yet shown how to build a working display screen. “The missing ingredient is the technological details.”

    TIR takes place when light that has penetrated glass—or some other material that bends light sharply—reaches a boundary with a less refractive material, such as air. If the light strikes the interface at a shallow angle, none of it escapes, and it is all reflected back into the glass. TIR is the principle that keeps beams of light careering down fiber optic cables. Whitehead had been trying to improve light guides, but about 2 years ago he wondered if the same phenomenon could be harnessed in a display. “If you can make a surface look white because of TIR and then stop [the TIR] where you want words to form,” he says, “it will look like a black-on-white display.”

    At a meeting of the Canadian Association of Physicists in New Brunswick last month, Whitehead's colleague Andrzej Kotlicki and his student Michele Mossman showed how they would turn that idea into a display screen. They demonstrated a white, flexible sheet made of minute interconnected polycarbonate prisms. The sheet looks bright because, like the reflective coating on a stop sign, it bounces light off the internal interfaces at the correct angle for TIR, eventually directing the light back out toward a viewer.

    To achieve an “ink” effect, the researchers exploited the fact that, because of light's fuzzy wavelike nature, some of the waveform extends about 1 micrometer beyond the surface where the light is being internally reflected. This leaked light is known as an evanescent wave. “To stop TIR, all you have to do is stick an absorbing material into the evanescent wave,” says Whitehead. The researchers provided the absorbing material by backing the reflective sheet with a thin layer of fluorinated hydrocarbon liquid in which charged particles were suspended. Using electric fields, they could maneuver the charged particles into the evanescent wave region behind the reflecting sheet to switch off the TIR.

    Other schemes for producing electronic ink also use electric fields to move charged particles in a fluid, but those particles must move laterally by about 10 or 20 micrometers to achieve a good contrast between black and white. In the UBC scheme, the particles have to move only about a micrometer toward the screen to switch off TIR. “It is a very sensitive way to switch light,” says Whitehead. “You can move an absorbing material a very short distance and get a dramatic change in absorption.”
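    Two textbook formulas sit behind that sensitivity: Snell's law gives the critical angle at which TIR sets in, and the evanescent field decays exponentially beyond the interface with a 1/e depth that grows as the incidence angle approaches the critical angle. The Python sketch below assumes a refractive index of about 1.59 for polycarbonate and green light at 550 nanometers; it is illustrative, not the UBC group's design calculation.

```python
import math

# Two textbook formulas behind the display idea: the critical angle for total
# internal reflection, and the 1/e decay depth of the evanescent field that
# leaks past the interface. The refractive index of polycarbonate (~1.59) and
# the 550-nanometer wavelength are assumed, typical values.

def critical_angle_deg(n_dense, n_rare=1.0):
    """Smallest angle of incidence (from the normal) at which TIR occurs."""
    return math.degrees(math.asin(n_rare / n_dense))

def evanescent_depth_nm(wavelength_nm, angle_deg, n_dense, n_rare=1.0):
    """1/e penetration depth of the evanescent field beyond the interface."""
    s = n_dense * math.sin(math.radians(angle_deg))
    return wavelength_nm / (4.0 * math.pi * math.sqrt(s * s - n_rare * n_rare))

n_polycarbonate = 1.59
print(critical_angle_deg(n_polycarbonate))                # about 39 degrees
print(evanescent_depth_nm(550.0, 45.0, n_polycarbonate))  # roughly 85 nm
print(evanescent_depth_nm(550.0, 39.5, n_polycarbonate))  # ~290 nm, growing near the critical angle
```

    Because the field's reach grows sharply as the incidence angle nears the critical angle, an absorbing particle parked within a fraction of a micrometer of the back surface is enough to frustrate the reflection.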

    That could mean a faster response and lower power needs than other electronic ink schemes, Mossman says. So far, the group has switched TIR on and off in just a single patch of screen. But they are now planning a prototype display consisting of many pixels, controlled with circuitry like that of the liquid crystal displays in palm-top computers.

    The challenge is easier for larger scale displays that need not change as quickly, such as highway signs, and the group is working on several ways to move an absorbing material in and out of the evanescent wave for efficient, low-power displays. One uses air pressure to push an absorbing silicone gel against the walls of the miniature zinc sulfide prisms. Group member Robin Coope is working with the company 3M to commercialize this technology, and the researchers have already shipped a prototype sign for testing.

    Physicist Edward Sternin of Brock University in Ontario says that he is eager to watch the group's progress: “It is always exciting to see the application of an old physics principle that has immediate and direct applications to technology.”


    Indian Scientists Question Government Grip on Data

    1. Pallava Bagla

    Geological, health, and environmental data are often generated but kept out of the hands of those who could put them to good use

    NEW DELHI: When a team of researchers announced last fall that they had discovered putative tracks made by wormlike creatures 1.1 billion years ago, colleagues from around the world expressed an interest in visiting the site in central India where they were found. But even Indian paleontologists familiar with the region had difficulty pinpointing the exact location. The reason: Available maps of the Churhat area offer a resolution of 2 to 5 km, an order of magnitude coarser than what scientists need to get up close and personal with the formation. More detailed maps do exist; indeed, they lie piled up on dusty shelves in the offices of the Geological Survey of India in Nagpur, in central India. But the survey, citing security concerns, has never made them public.

    In an era when information is power, Indian scientists are complaining bitterly that the government's tightfisted control of scientific data has turned them into 98-pound weaklings. On 14 to 15 July, a panel of the Indian Academy of Sciences in Bangalore held a first-ever meeting of its kind to explore how to open up a trove of data now shielded from scientific eyes. Its Panel on Scientific Data of Public Interest is also exploring ways to improve the quality and archiving of meteorological, geographic, oceanographic, health, and agricultural data. But its first and greatest concern is simple access. “These are absolutely useless and archaic rules which only hinder the progress of science,” says Dhiraj Mohan Banerjee, a sedimentologist at the University of Delhi, referring to procedures imposed in 1967 by the Survey of India, the main topographical mapping agency of India, that placed severe restrictions on the sale of maps of the type that would be useful at Churhat.

    Two views. India sells high-resolution satellite images, like this one of monsoon cloud cover, to the world while restricting scientific access to adequate maps of much of the country.


    Geographical information is not the only kind of data subject to such restrictions. Whereas all development projects in India need an environmental clearance, for example, the Environment Impact Assessment reports that lead to such clearances are never made public. That policy includes projects such as the dams being built on the Narmada River that will affect millions of people, says Ashish Kothari, an environmentalist with Kalpavriksha, an Indian nongovernmental organization. “They fear that people may actually question the quality of such technical and scientific data” upon which those decisions are made, he says.

    The health sector is another case in point. Relevant medical data collected since the 1984 Bhopal gas tragedy remain under wraps in the Indian Council of Medical Research (ICMR), 15 years after methyl isocyanate leaked from a Union Carbide factory in the central Indian city of Bhopal, killing over 4000 people and maiming thousands of others in the world's worst industrial disaster. The council's research projects ended in 1994, says Pushpa M. Bhargava, a well-known molecular biologist and member of the Sambhavna Trust that is working with the victims of the accident, but a final report on its findings has never been made public. “It is simply shocking,” says Bhargava. For their part, ICMR officials say that no such final report exists, and they have refused to comment on a leaked 1992 draft report that has been widely circulated.

    The 3-decade-old rules restricting access to geographical data have attracted the most attention, however. Written on the advice of the defense ministry, they apply to maps for an 80-km belt along the border areas and coastal zones, as well as to gravity maps and high-resolution maps that depict geological formations and rocks. Gaining access to such maps means navigating through an interministerial bureaucratic procedure with at least 15 stages of negotiations. The time-consuming process—too onerous for most academicians to endure—was designed to protect the country from outside threats and to ensure that only an elite corps of government officials had access to the data. But today, in an era of remote sensing and global positioning systems, scientists say it serves neither purpose. “No longer do I need a topographical map to know my exact grid point,” says Sampige Venkateshaiya Srikantia, a Himalayan geologist and secretary of the Geological Society of India in Bangalore.

    Srikantia also believes that the ongoing Kashmir conflict provides a clear example of how restrictions have not helped. “Having restricted the sale of maps did not in any way stop Pakistani intruders from entering Indian territory and occupying strategic points,” he notes. What suffers in the bargain, he says, is genuine Himalayan research, which tries to map areas prone to landslides and avalanches.

    Many international researchers share his concern over the government's restrictive practices. Nicholas Christie-Blick, a sedimentary geologist with the Lamont-Doherty Earth Observatory at Columbia University who has worked extensively in the Indian lesser Himalayas, says India's rules are “counterproductive.” “Mapping that is accessible only to those who did the work accrues no credit, encourages duplication of effort, and provides no mechanism for evaluating the quality of what has been done,” he says. Such a waste of resources and impediments to scientific progress discourage the international geological community from joining Indian research projects, he adds.

    The 30 scientists who attended the academy meeting—intended as the first in a series—are preparing a document for the government to support their recommendation that “access to geographical data should be eased.” But it's not clear whether anyone is listening. Valangiman Subramanian Ramamurthy, a nuclear physicist and secretary of the Department of Science and Technology in New Delhi, admitted to Science that “many of the older rules deserve a reexamination in the context of changed perceptions and technological advancements.” But he hastens to add a caveat that has been used in the past to justify all manner of secrecy: The nation's security must remain uppermost. “If science is not done it is not a catastrophe,” he says. “But a defense slipup can lead to a catastrophe.”

    ENERGY

    Bright Future--or Brief Flare--for Renewable Energy?

    1. Kathryn S. Brown*
    1. Kathryn S. Brown is a free-lance writer in Columbia, Missouri.

    Solar, wind, and other forms of renewable energy are making surprising gains as some U.S. states open their power markets to competition. But with fossil fuel prices near all-time lows, experts are split on whether alternative energy can maintain its momentum

    PALM SPRINGS, CALIFORNIA: It's a warm day in late May, and the steady gale in the hills above this desert oasis drowns out the shouts of construction workers and pelts them with sand. Penetrating the howl is a rhythmic whoosh from three huge blades atop a 50-meter-tall turbine. The propeller-like blades spin in sync with 3500 other turbines, casting playful somersaulting shadows on the sand. “You almost think they have a personality,” says one worker.

    Crews have added more than two dozen turbines so far this year to the wind farm straddling San Gorgonio Pass. The machines embody the hopes of an era in which renewable energy providers could—after decades of unfulfilled promise—gain a sizable slice of the power market. The new turbines run for Enron Wind Corp., Green Mountain Energy Resources, and other firms that have plugged into the power grid after California last year became the first state to open its electricity market to competition. These and other energy suppliers will serve roughly 125,000 families—and companies such as Patagonia Inc.—that have chosen to pay extra to go green.

    California customers aren't the only ones turning to renewable energy from the wind, sun, rivers, geothermal vents, and even corn stalks. In Pennsylvania earlier this year, 300,000 residents chose to dump their old electricity suppliers, often switching to renewable alternatives. And last month, Texas Governor George W. Bush signed a bill into law that will restructure the state's power market, mandating that electric companies add 2000 megawatts (MW)—the equivalent of two large coal-fired plants—of new renewable resources over the next decade, the largest provision of its kind in any state. At least 20 more states plan to deregulate their markets soon, giving renewable companies a chance to compete for the $250 billion a year in sales racked up by utilities. And although natural gas is cheap and abundant—conjuring fantasies of boundless reserves—surveys suggest that a growing number of people may pony up extra to go green, if it means avoiding pollution that may degrade health or cause climate change.

    Most consumers, however, will let their wallets do the choosing, which gives natural gas a huge advantage. For renewable energy's road ahead to be paved with gold, or at least profits, it must become cheaper—and that will require better technologies. That's a troubling prospect for the United States, which trails other countries in the research arena: Japan spends three times as much as the United States on solar photovoltaic (PV) cells, for instance. And the disparities may grow. As Science went to press, a Senate panel had lopped about 20% off the Department of Energy's (DOE's) proposed $446 million budget for renewable energy R&D. Congress is now negotiating a final budget, as well as considering a pivotal tax credit for wind companies and bills that would mandate nationwide electricity deregulation. Some experts contend such federal handouts are the only way renewable energies can compete with fossil fuels.

    Renewables do have one powerful force pulling in their favor: concerns about climate change. Power plants may have to cut greenhouse gas emissions substantially under the Kyoto climate change treaty. But if the Kyoto deal falters and global warming forecasts become less dire, “fossil fuels are out of the woods, so to speak, and we can look forward to the fossil fuel era extending for a long time to come,” says Robert Bradley Jr., president of the Institute for Energy Research, a think tank in Houston, Texas.

    Still, analysts say renewable energy is catching on in states that have opened their electricity markets, as energy providers move to tap customer desire for green power. “The federal government is not the leader in electric deregulation,” says Thomas Corr, manager of regulatory relations at the Electric Power Research Institute in Palo Alto, California. With consumers suddenly going green, he adds, anything could happen.

    Green Revolution

    The stirrings of a U.S. market have been a long time coming for renewables advocates. Although Japan and European countries such as Germany and the Netherlands have for decades strived to wean themselves from oil dependence, the United States took its first baby steps only after the 1973 oil embargo led to gasoline shortages and skyrocketing fuel prices. Around that time, DOE announced an ambitious goal to produce 20% of the country's energy from renewable resources by the year 2000, beginning with a handful of well-publicized wind turbines and the establishment of what would become the National Renewable Energy Laboratory (NREL) in Golden, Colorado.

    In the 1980s, however, the Reagan Administration slashed funding for renewables R&D, and the technologies have since hobbled along on subsidies and tax breaks. Low natural gas and coal prices have also turned alternative energy sources into underachievers in the United States. In 1997, renewable energy accounted for just 8% of total U.S. energy consumption, compared to 24% for natural gas, according to DOE's Energy Information Administration. The renewable energy that makes the largest contribution to U.S. energy production is hydroelectricity, but the future of hydropower is uncertain. Environmental concerns have sparked a plan to remove four dams on the Snake River in Washington state (Science, 23 April, p. 574), as well as a dam on the Kennebec River in Maine. Meanwhile, geothermal energy, culled from steam trapped in Earth's crust, at best can contribute about 5% of North America's needs, according to the Geothermal Energy Association, a Washington, D.C.-based trade association. Incinerating biomass has carved a market niche but gets low grades from environmentalists because of its contributions to air pollution.

    For now the spotlight is shining brightest on wind and solar power, which contribute less than 2% of the nation's total energy. Although inching along in the United States, these energy sources are leaping ahead worldwide: Global wind and solar power capacities, in megawatts, have been growing by roughly 22% and 16% a year, respectively, since 1990, according to the Worldwatch Institute, a nonprofit public policy research organization in Washington, D.C. In 1998, for the first time in recent memory, the world's consumption of coal—a stalwart source of energy for electricity—fell 2%, partly because China cut subsidies to its coal producers.

    Fueling the global gains of wind and solar power are impressive technical achievements. Thanks to more efficient turbine design, wind power costs about 5 cents per kilowatt-hour (kWh), less than a tenth of the 1980 price, and PV power averages less than 20 cents per kWh. (A kilowatt-hour can light a 100-watt bulb all night or run a typical hair dryer for 1 hour. In California, homeowners use about 16 kWh a day and 6000 kWh a year.) These renewable energy costs have met—or beaten—projections made by economists in the early 1980s, according to an April report from Resources for the Future (RFF), a Washington, D.C., think tank.

    The advances allow renewables to compete with gas or coal in niche markets: mountains in the southwestern United States, for example, with lots of sun and few connections to the power grid. Mainstream markets are tougher to penetrate, largely because fossil fuel prices have also fallen since the 1970s. “The world has not stood still,” says RFF economist Dallas Burtraw. Low costs for shipping coal and a surge in natural gas discoveries have helped drive these sources of electricity down to about 3 cents per kWh.
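    Those per-kilowatt-hour figures translate into a modest, if real, household premium. The Python sketch below runs the arithmetic for the 16-kWh-per-day California household cited above, comparing generation at roughly 5 cents (wind) and 3 cents (fossil) per kilowatt-hour; retail bills include many other charges, so this is only an order-of-magnitude illustration.

```python
# Rough monthly premium for wind over fossil generation, using the figures
# quoted above: a 16-kWh-per-day household and generation costs of about
# 5 cents (wind) versus 3 cents (coal or gas) per kilowatt-hour. Retail bills
# add transmission and other charges, so treat this as order-of-magnitude only.

DAILY_USE_KWH = 16
WIND_CENTS_PER_KWH = 5
FOSSIL_CENTS_PER_KWH = 3

monthly_kwh = DAILY_USE_KWH * 30
premium_dollars = monthly_kwh * (WIND_CENTS_PER_KWH - FOSSIL_CENTS_PER_KWH) / 100.0

print(monthly_kwh)       # 480 kWh in a typical month
print(premium_dollars)   # about $10 at these generation costs
```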

    But thanks to the deregulation revolution now under way, renewables may at last have a chance to compete with fossil fuels. Renewables entered the energy market in 1978, when the Carter Administration—seeking to boost the country's energy independence—won passage of the National Energy Act, which included a bill ordering utilities to buy power from renewable energy producers at favorable rates. In an unrelated trend, big companies, eager to get electricity cheaply, began lobbying states for a competitive market that might lower power prices. On 31 March 1998, California's $20 billion power market opened, giving consumers served by three investor-owned utilities a chance to choose among eight firms, six of which offered alternative energy sources. By February 1999, some 125,000 homes and businesses had switched providers—and analysts suggest that at least half the small customers opted for some mix of renewable energy, either sold by independent power producers like Green Mountain Energy Resources or by standard utilities offering new programs. Santa Monica became the first U.S. city to switch all its public buildings to renewable power.

    The new market spurred Enron to build a 22-turbine, 16.5-MW project outside Palm Springs. Installed last month, the turbines will supply enough electricity to light up 5000 California homes. The company is considering wind farms in other states that plan to deregulate, according to Albert Davies, director of project development.

    If California offered renewable energy companies a taste of success, Pennsylvania, which deregulated its market in January, is serving up a full plate. In just 2 months, 378,000 electricity consumers switched suppliers, a quarter for green power specifically. One reason for the deluge is a state law that set a default price—the electric price offered to consumers who do not switch—high enough that new companies can meet or beat it. Other states will likely follow this pricing model, offering competitive markets, says Ryan Wiser, a policy analyst at Lawrence Berkeley National Laboratory in California.

    Energy rush

    A boost in the U.S. market can't come too soon for renewables advocates. “We're in a technology war—and we're losing,” says Scott Sklar, executive director of the Solar Energy Industries Association in Washington, D.C. Although sales of solar cells for items like highway signs, roofs, and radios jumped 21% last year, most of that growth came overseas. Over 70% of solar cells made in the United States are sent abroad, often to remote spots, like rural India, that are not connected to the grid. Since 1996, the U.S. share of the global market for solar cell products has dropped from 44% to 35%. Next year, DOE predicts, Japan—where electricity is relatively expensive—will edge ahead to lead solar PV sales worldwide. Not coincidentally, Japan will spend $240 million on solar power this year, more than triple DOE's $72 million PV research budget.

    Wind sales are also booming, with Germany in the lead. Since 1998, the world's wind energy capacity has grown more than 35%, topping 10,000 MW this spring—double the amount of 3 years ago. Germany contributed a third of last year's wind gains, largely by guaranteeing wind farms access to the grid at a competitive price for the power they generate. And as U.S. companies begin building 1- or 2-MW wind turbines, European firms are exploring 5-MW machines. “In the technology race, they're at least half a step ahead of us,” says Robert Thresher, director of NREL's wind technology center.

    Despite lagging behind, the U.S. wind market is enjoying its own heyday. Scrambling to seize an expiring wind energy-production tax credit, companies added roughly 1000 MW of new capacity in the past year. DOE hopes to sustain the momentum. Last month, Energy Secretary Bill Richardson announced a new initiative, “Wind Powering America,” that aims to quadruple U.S. wind energy capacity by the year 2010, so that wind would provide enough energy to power 3 million households a year. To kick off the project, DOE will spend $1.2 million on wind turbines in 10 states. There are plenty of choices—although California hosts 90% of the country's wind turbines, 16 other states have even greater wind energy potential, according to the American Wind Energy Association in Washington, D.C.

    With the energy market shake-up, even firms selling electricity from conventional sources are taking a closer look at tapping renewables. Just last month, Central and South West Corp. (CSW), a Dallas-based utility, broke ground on the largest wind facility in Texas—a 107-turbine, 75-MW farm that will generate enough power for 30,000 homes. The utility began building it after a survey suggested customers were willing to pay, on average, $5 more a month for electricity from renewable energy. “That really opened our eyes,” says CSW's Ward Marshall. “Competition is coming, and we suddenly realized how out of touch we've been.”

    ENERGY

    Solar Homes for the Masses

    1. Alexander Hellemans*
    1. Alexander Hellemans is a writer in Naples, Italy. With reporting by Kathryn Brown.

    For the sun to compete with fossil fuels (see main text), solar engineers will have to think bigger. Success is “about making square miles of [solar panels], not onesies-twosies,” says Ken Zweibel of the Department of Energy's National Renewable Energy Laboratory in Golden, Colorado. “We have to make modules like carpet.”

    A novel Dutch effort is setting out to do just that. Near Amersfoort, the Netherlands, the NV REMU power company is leading a $13 million project to build 500 houses with roofs covered with photovoltaic (PV) panels. By the time the homes are finished next year, they should be drawing 1.3 megawatts of power from the sun, enough to supply about 60% of the community's energy needs, with the rest coming from the power grid. The development, called Nieuwland, is the world's largest attempt at “building-integrated photovoltaics” (BIPV). “We want to demonstrate the construction of a solar energy system at the level of a precinct,” says project co-leader Frans Vlek, manager of REMU's energy conversion division. “Everything has been designed from scratch.”

    Amersfoort gets much less sunshine than the world average of 1700 watts per square meter: Nieuwland's homes should be bathed in about 1050 watts worth of energy per square meter. From this Vlek expects they should glean as much as 128 watts per square meter, thanks to nifty PV cells that respond best to light reflected by clouds. Each kilowatt-hour from the solar panels will cost about four times more than electricity supplied by the grid, says Nieuwland co-leader Ingmar Gros, an engineer at REMU. His company and local authorities will subsidize the difference. The cost could come down with refinements in manufacturing PV cells: “We are now still at the level of the blacksmith,” says Vlek.
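    The project's headline numbers reduce to simple per-house arithmetic, sketched below in Python. The grid price used in the cost comparison is an assumed round figure, since the article quotes only the “about four times more” ratio.

```python
# Simple per-house arithmetic for the Nieuwland project as described above:
# 1.3 megawatts of photovoltaic capacity spread over 500 roofs, at a cost the
# utility pegs at about four times grid electricity. The grid price is an
# assumed round figure used only to make the ratio concrete.

TOTAL_PV_KW = 1300            # 1.3 MW for the whole development
HOUSES = 500
GRID_CENTS_PER_KWH = 10       # assumed grid price, for illustration only

kw_per_house = TOTAL_PV_KW / HOUSES
solar_cents_per_kwh = 4 * GRID_CENTS_PER_KWH

print(kw_per_house)           # 2.6 kW of panels per roof, on average
print(solar_cents_per_kwh)    # 40 cents/kWh before subsidies close the gap
```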

    Getting more BIPV projects off the ground should help drive technical advances and perhaps make solar energy a commercial winner sometime in the next 5 or 10 years, predicts Zweibel, who heads DOE's Thin Film Photovoltaics Partnership, in which federal and corporate scientists are collaborating to develop better solar cells. One of the partnership's products, PV shingles, could spur BIPV projects in the United States, he says. For an energy source now better known, perhaps, for its failures—satellites losing contact after not having their solar panels oriented toward the sun, for instance—the rise of BIPV communities could be a much-needed success story.


    U.S. Supercars: Around the Corner, or Running on Empty?

    1. David Malakoff

    A collaboration between automakers and the federal government to develop high-mileage, low-emission cars is set to unveil its first prototypes next year; observers don't expect to see consumers cruising in them anytime soon

    GOLDEN, COLORADO: In a government complex nestled against the Rocky Mountains sits a torture chamber so brutal it could crack the Energizer bunny. Technicians lower their victims into a pit and repeatedly broil, freeze, or zap them with electricity—taking careful notes as life slowly drains away. But researchers here at the Department of Energy's (DOE's) National Renewable Energy Laboratory (NREL) show no remorse. They are, after all, only torturing batteries, as part of a program to create energy-efficient automobiles.

    Such “supercars” are the ultimate goal of the Partnership for a New Generation of Vehicles (PNGV), an ambitious, government-industry R&D collaboration begun in 1993 by the Clinton Administration. By 2004, the effort aims to produce vehicles that travel three times farther on a liter of gas and spew far fewer pollutants than do current family sedans, without costing more or being less safe. Halfway through the 10-year program, which has spent $2 billion on research, the three major U.S. automakers are on schedule to unveil demonstration supercars—probably diesel-electric hybrids—next year. PNGV has “provided a push to getting these cars ready for the road,” says Terrey Penney, who manages NREL's hybrid-car research program.

    But critics charge that PNGV is headed down the wrong road. Some claim that the program is betting on the wrong technologies by emphasizing polluting diesel engines instead of potentially cleaner technologies, such as hydrogen fuel cells. Others view the entire enterprise with distaste, asserting that carmakers and the government make unsavory bedfellows. “I don't see why industry can't build these cars themselves; this is not an appropriate area for government subsidies,” says Stephen Moore of the Cato Institute, a libertarian think tank in Washington, D.C.

    The ultimate goal of the effort is to reduce energy use and pollution. Some 200 million U.S. cars and trucks consume more than a third of the nation's 18-million-barrel-per-day oil supply. Vehicles emit smog-forming hydrocarbons and nitrogen oxides and, according to DOE, release about 15% of annual U.S. emissions of carbon dioxide, a greenhouse gas. Such statistics have long motivated environmentalists to push for tougher federal fuel-efficiency standards, which currently require passenger cars sold in the United States to get at least 12 kilometers per liter (26 miles per gallon) of gas, and light trucks—including minivans and sport utility vehicles—to travel at least 9 kilometers on a liter. In 1992, however, Congress rejected a White House bid to boost the standards—calling stiffer controls costly and unnecessary—and has since barred any tinkering with the regulations.

    PNGV rose from the ashes of that defeat. Changing tactics, White House officials the next year moved to enlist the industry's help in designing high-mileage cars that might do away with the need to joust over regulations. As an incentive, seven federal agencies promised to spend hundreds of millions of dollars on long-term R&D on technologies—from cheaper nickel-metal-hydride batteries to ceramic engines—that companies considered too risky to fund themselves.

    While some lawmakers grumbled that the plan amounted to corporate welfare, the big three automakers—Chrysler, Ford, and General Motors—signed up, agreeing to share the results of “precompetitive” research, such as studies of how fuel burns or battery chemicals interact. The companies also agreed to match the federal investment and design one-of-a-kind concept vehicles by 2000, which they would aim to develop into production-ready models by 2004. The goal is a deceptively ordinary-looking supercar that consumers “shouldn't be able to tell apart from today's models,” says Penney.

    The PNGV cars may end up handling like a regular sedan, but an unfamiliar sight will await anyone who peers under the hood. Although the supercar guts are still being designed at dozens of academic, corporate, and government labs, the PNGV partners are converging on a similar approach. After sifting through hundreds of possible technologies—from gas-turbine engines to carbon-fiber frames—the partners agreed in 1997 to focus their efforts on developing a lightweight hybrid electric vehicle that uses both an electric motor and a combustion engine to turn the wheels. The program also committed to continuing promising research into fuel cells, which generate electricity directly from hydrogen (see p. 682), but concluded that hybrids were closer to fruition.

    According to current hybrid schemes, a combustion engine that burns diesel fuel or gasoline will be used for highway cruising, while an electric motor will give an extra nudge up hills or when accelerating, improving fuel efficiency. The hybrid can also recover some of its own power, using the electric motor as a generator during braking so that energy otherwise lost can be stored in a battery.

    Building batteries that can withstand the rigors of life on the road, however, is one of the daunting challenges facing hybrid designers. At NREL, for instance, a team led by engineer Ahmad Pesaran is studying how the multibattery packs needed for hybrids behave under stress. Using their torture chamber, the researchers have measured how battery cells heat up and discharge electricity under various conditions. And in a kind of wind tunnel, they have discovered how subtle changes in air flow can alter the performance of battery packs, heating up some cells while cooling others. The hottest, weakest cell can reduce the entire battery pack's output.

    Such research has revealed that PNGV batteries have a ways to go before they are ready for widespread use, according to a report released in April by a National Academy of Sciences panel. The panel, chaired by engineer Trevor Jones of Biomec Inc. in Cleveland, Ohio, says current designs are “unlikely” to meet PNGV's demanding life-span, power, cost, or safety targets.

    Batteries aren't the only PNGV technology facing “extremely difficult challenges,” the panel says. Perhaps the thorniest problem is getting the lightweight diesel engine that PNGV engineers want to put in the cars—called a Compression Ignition Direct Injection (CIDI) engine—to meet pollution standards. Although the CIDI engines get higher marks than gas engines for fuel economy, so far they have flunked a key emissions test: They produce more nitrogen oxides and soot particles than proposed standards allow.

    Diesel-engine designers have always faced a perplexing Catch-22. Because of the way the engines burn fuel, techniques that reduce NOx emissions—such as recirculating exhaust gases back into the engine to be burned again—increase soot production, while reducing soot ratchets up NOx. To address the problem, PNGV-funded scientists are tinkering with fuel variations and a filter that can sop up twice as much NOx before it leaves the tailpipe. It's a daunting challenge: To meet NOx standards, for instance, sulfur may have to be virtually eliminated from fuel, cut from 500 to 50 parts per million.

    Critics are skeptical that CIDI engines can clean up their act. Under proposed California low-emission standards that would take effect in 2004, for instance, even CIDI-based hybrids “might be virtually illegal to sell in California,” the nation's largest car market, says Jason Mark of the Union of Concerned Scientists in San Francisco. He would rather see the $40 million a year spent on the diesel program go toward developing cleaner technologies such as fuel cells.

    Other critics are calling for an end to PNGV. Taxpayers “should not be forced to help private companies,” says Moore, who supports the efforts of some budget hawks in Congress to trim the program. The opponents got some new ammunition last year, after Chrysler merged with Daimler, Germany's car giant, prompting Representative John Kasich (R-Ohio) and others to question whether the United States was funding research that would benefit foreign competitors. PNGV skeptics also seized on Toyota's 1997 introduction of Prius, a hybrid gas-electric car, noting that the Japanese company built the car on its own dime (see sidebar).

    Such complaints have done little to erode PNGV support, however. In recent testimony before Congress, Administration officials dismissed concerns about foreign companies siphoning intellectual property from the project, and they hold up Toyota's supercar as a reminder that PNGV is necessary to keep U.S. automakers competitive. Supporters also note that low gas prices provide little incentive to invest in developing high-mileage cars. “This field would be asleep without federal funds,” says a Senate aide.

    But with low gas prices and no requirement to market cars based on the PNGV prototypes, observers express skepticism that supercars—at least those made by U.S. firms—will roll into showrooms anytime soon. Jokes one industry official: “You've probably got about a decade to save up for a downpayment.”


    Toyota's Hybrid Hits the Streets First

    1. Dennis Normile

    TOKYO: Want to know what a U.S. supercar might look like when it debuts in 5 years? Stand on a street corner here and watch the traffic whiz by. You're likely to spot a Prius. As U.S. automakers struggle to draft blueprints for their future fuel-efficient cars, the Toyota Motor Co. has beaten them to the punch with a gas-electric hybrid that gets about double the gas mileage and spews half the carbon dioxide of similarly sized sedans.

    What's more, the Prius has made it to market without the benefit of taxpayer-sponsored research and without any looming domestic requirements for zero-emissions vehicles. Toyota officials say they needed no prodding—or cash—from the government to meet rising interest in cars that are environmentally friendly and fuel efficient. One reason for that interest: Gas sells here, on average, for $0.70 per liter ($2.70 per gallon)—more than twice the price in the United States. The Prius is a big step in the right direction, says Yuichi Moriguchi, who studies transportation energy and pollution issues at the National Institute for Environmental Studies in Tsukuba.

    The Prius is no shot in the dark: Toyota spends about $3.7 billion a year on R&D and sells an electric car. But electric cars are not yet truly practical, says Hiroyuki Watanabe, a Toyota board member who oversees electric and hybrid vehicles. Battery packs, which weigh more than 300 kilograms, cost too much, give electric cars sluggish acceleration and poor handling, and can carry a car only 215 kilometers on a single charge.

    By melding old and new technology, says Watanabe, “the Prius solves those problems.” Under the hood sit a conventional 1.5-liter gasoline engine, an electric motor, and a battery pack—about one-sixth the weight of batteries in electric cars—designed to last a car's lifetime. The gas engine charges the batteries, so they don't have to be plugged into a socket. The combination boosts fuel efficiency: When accelerating from a standstill to about 32 kilometers per hour, the Prius relies on the battery-powered electric motor. The gas engine kicks in at higher speeds, where it can operate more efficiently. Any excess power gets shunted to a generator to charge the batteries.
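
    The division of labor just described can be captured in a few lines of logic. The sketch below, in Python, is purely illustrative: the 32-kilometer-per-hour threshold comes from the figures quoted here, but the function, the battery-charge cutoff, and the demo values are assumptions, not Toyota's actual control software.

```python
# Purely illustrative sketch of the power-split behavior described above.
# The 32 km/h threshold comes from the article; the charge cutoff and demo
# values are assumptions, not Toyota's control system.

LOW_SPEED_KMH = 32.0   # below this, the article says the Prius runs on its electric motor
MIN_CHARGE = 0.2       # assumed minimum battery state of charge for electric-only driving


def choose_power_source(speed_kmh: float, battery_charge: float, demand_kw: float) -> str:
    """Pick a power source for the current driving state (toy model)."""
    if demand_kw <= 0:
        return "regeneration"        # excess power is shunted to the generator
    if speed_kmh < LOW_SPEED_KMH and battery_charge > MIN_CHARGE:
        return "electric motor"      # battery-powered launch from a standstill
    return "gasoline engine"         # the gas engine kicks in at higher speeds


if __name__ == "__main__":
    for speed, charge, demand in [(10, 0.8, 15), (80, 0.8, 30), (60, 0.5, -5)]:
        print(f"{speed} km/h -> {choose_power_source(speed, charge, demand)}")
```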

    The result is a car that gets 28 kilometers per liter (67 miles per gallon) in a standard mixed city-highway test. And besides halving carbon dioxide emissions, the Prius spews about 90% less carbon monoxide and nitrogen oxides than comparable sedans. Toyota is hoping to improve on those numbers with the second-generation Prius, a fine-tuned version to be launched in North America in mid-2000. By the end of next year, Honda, Nissan, and Mitsubishi plan to have hybrid cars on the market in Japan.

    The modest success of the Prius—Toyota has sold more than 25,000 so far—shows that consumers will buy alternative vehicles if they perform well enough at a price close to that of an equivalent conventional car, says Daniel Roos, a founder of the Massachusetts Institute of Technology's International Motor Vehicle Program. Like Moriguchi, Roos sees hybrid cars as “a transition technology” to bridge the gap until fuel cell cars appear. He may not have long to wait. Both Toyota and Honda have announced they intend to have virtually pollution-free fuel cell vehicles ready for sale by 2003—a year before U.S. hybrids are slated to roll out of labs.


    Bringing Fuel Cells Down to Earth

    1. Robert F. Service

    Automakers are banking on fuel cells, used to run equipment aboard spacecraft, to power the first zero-emission vehicles; the type of fuel that supplies the cells could determine how deeply these cars penetrate the market

    Hydrogen has long been touted as the fuel of the future. Combine it with oxygen in a fuel cell, and it will give you electricity and a little heat, with one byproduct: water. Trouble is, the future never quite seems to arrive. Although fuel cells helped the Apollo astronauts make it to the moon, they have never made much of an impact on Earth. But if you listen to automakers these days, you may think they've seen the future—and that the future is on its way to a showroom near you.

    In a bid to keep up with ever tighter air pollution standards, automakers are pushing hard to introduce fuel cells to mundane family sedans and pickup trucks. Virtually every major car company is now working on the technology. Early demonstration vehicles running on hydrogen and methanol are already on the road. The California Fuel Cell Partnership, a new collaboration between car- and fuel cell-makers, oil companies, and government agencies, plans to put some 50 demo cars and buses through their paces in the next 4 years. And DaimlerChrysler is so confident the partnership will like one of the early designs that it has promised to roll out 40,000 fuel cell vehicles by 2004. If fuel cells, as expected, become cheaper and can match the performance of traditional car engines, they “will be the most prominent power source in the next century,” predicts Ron Sims of Ford's research lab in Dearborn, Michigan.

    Those are brave words, considering that the internal combustion engine—thanks to a stream of technological advances since Henry Ford's day—has managed to beat back challenges from every upstart alternative for powering automobiles. And the fuel cell's challenge could be blunted by a bruising battle over which fuel should provide the hydrogen the cells will consume. It's a battle that could undermine the technology before it ever gets up to speed.

    Engineers and clean-air experts say the simplest and cleanest option is hydrogen gas itself. But it would cost tens of billions of dollars to outfit all the filling stations in the United States to supply hydrogen—not to mention require an intense marketing campaign to convince the public of the safety of a fuel still associated with the fiery demise of the Hindenburg, a hydrogen-filled zeppelin, in 1937. Car and oil companies would prefer to equip vehicles with miniature chemical factories to convert liquid fuels, such as gasoline or methanol, into hydrogen gas that can be fed into fuel cells. Critics, meanwhile, argue that the converters likely will be expensive and prone to breaking down. “Everybody is pushing their own version of the technology,” says Reinhold Wurster, a fuel cell expert at L-B-Systemtechnik in Ottobrunn, Germany.

    The outcome of this battle will set the course for fuel cell technology—and perhaps alter the world's energy map—well into the next century. Because the United States uses over 40% of the gasoline produced globally, “it's the gorilla that drives the rest of the world,” says John Turner, a fuel cell expert at the U.S. Department of Energy's (DOE's) National Renewable Energy Lab in Golden, Colorado. “What we do here will have a lot of influence on future energy use.”

    Space-to-Earth odyssey. Fuel cells didn't start nipping at the heels of traditional car engines overnight. The technique was invented in 1839—more than 20 years before the first internal combustion engine—by Sir William Grove, a Welsh judge who began his career as a physicist. He shot an electric current through water, splitting the molecules into oxygen and hydrogen. When the gases recombined in his experimental cell, Grove noted, they produced a current.

    The technology remained little more than a curiosity until the late 1950s, when General Electric developed proton-exchange membrane (PEM) fuel cells as a lightweight solution for providing onboard electricity to the Gemini spacecraft that put some of the first U.S. astronauts in orbit. Around that time, other firms developed alkaline fuel cells, which use a liquid electrolyte instead of a membrane to control the flow of charged ions in the cell. Alkaline cells proved more efficient for cosmic voyages: Space shuttles still use them to generate about 45 kilowatts (kW) of power. But this electricity costs a pretty penny. The shuttle's alkaline cells typically cost more than $500,000 per kW, about 10,000 times the price—roughly $50 per kW—that would start to interest carmakers.

    To avoid fuel cell sticker shock, firms are revisiting GE's PEM technology as a rough blueprint for making fuel cells for cars (see sidebar, p. 683). Since first starting to tinker with this design in the mid-1980s, Ballard Power Systems of Burnaby, Canada, for one, has developed fuel cell assemblies that generate 25 kilowatts of power (a trio of which can power a family sedan) and run at a cool 85 degrees Celsius. Along the way Ballard has lowered the price 100-fold, to about $275 per kilowatt. Based on this success, and the belief that the price will be right when fuel cells are made en masse, Ballard has inked investment and research deals with major automakers totaling more than $1 billion.

    Ballard projects that, by converting hydrogen to electricity directly, fuel cell engines could put up to 60% of the energy in their fuel to productive use; internal combustion engines, by comparison, squander about 75% of the energy in theirs. That efficiency should allow hydrogen vehicles to get about 87 miles per gallon (37 kilometers per liter), without the tailpipe pollutants of today's cars. What's more, according to Wurster, fuel cell engines—thanks to a simpler design—should be cheaper than combustion engines when mass-produced. Such attributes, says Sims, make fuel cell vehicles the leading contender to meet California's strict clean air rules, which require that 10% of cars sold in the state after 2003 produce zero emissions. Maine, Massachusetts, New York, and Vermont are mulling over similar zero-emissions laws.
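
    A back-of-envelope check shows how those efficiency figures line up with the quoted mileage. It is a sketch, not Ballard's analysis; the 36-mile-per-gallon baseline for a comparable conventional sedan is an assumption chosen for illustration.

```python
# Rough check of the efficiency-to-mileage arithmetic above (a sketch, not
# Ballard's calculation). The 36-mpg baseline sedan is an assumed figure.

KM_PER_MILE = 1.609
LITERS_PER_GALLON = 3.785

fuel_cell_efficiency = 0.60    # fuel cells: up to 60% of fuel energy put to use
combustion_efficiency = 0.25   # combustion engines: ~75% of fuel energy squandered
baseline_mpg = 36.0            # assumed mileage of a comparable conventional sedan

# If mileage scales with drivetrain efficiency, the implied fuel cell mileage is:
implied_mpg = baseline_mpg * (fuel_cell_efficiency / combustion_efficiency)
implied_km_per_liter = implied_mpg * KM_PER_MILE / LITERS_PER_GALLON

print(f"~{implied_mpg:.0f} mpg, or ~{implied_km_per_liter:.0f} km per liter")
# Prints roughly 86 mpg and 37 km/L, close to the 87 mpg (37 km/L) quoted above.
```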

    Nagging questions. Despite the impressive achievements at Ballard and other fuel cell-makers, many observers say the technology has a bumpy road ahead. Looming over the industry is the question of which fuel will power tomorrow's cells—a “major topic that will drive fuel cells for the next 5 years,” says Neil Rossmeissl, who manages the hydrogen research program at DOE in Washington, D.C.

    Gasoline is the obvious default option, as the infrastructure for delivering it to cars is already in place. But gasoline has a big strike against it: It is composed of a broad mix of hydrocarbons that must be converted—or in industry parlance, “reformed”—to hydrogen. That makes the onboard conversion plant, or reformer, very difficult to design and has put gasoline reformer technology “at least a decade behind” reformers that work with methanol, a chemically simpler fuel, says C. E. “Sandy” Thomas, who analyzes fuel cell economics at Directed Technologies Inc. in Arlington, Virginia. Oil companies would also have to create a new grade of gasoline purged of impurities that can ruin reformer catalysts.

    With gasoline thus trammeled, methanol has sprinted ahead. Although only half as energy-dense as gasoline—cars would need nearly a double-sized fuel tank for a similar cruising range—methanol is a liquid at room temperature, which means that it can be handled and transported more easily than gaseous hydrogen. According to a recent analysis by Thomas and others, U.S. chemical companies now have the capacity to produce enough methanol to support 1.5 million fuel cell vehicles. A study financed by the American Methanol Institute in Washington, D.C., concluded last year that each gas station that wants to sell the highly corrosive fuel would have to install a new type of underground tank that costs about $50,000. Vehicles, too, would have to be equipped with special tanks. Still, DaimlerChrysler, Honda, and others say they are betting on methanol. “For the everyday conventional car, methanol seems to be the fuel of choice,” says Wurster.

    A second look. But others argue that the economics, environmental concerns, and science favor a dramatic shift in car fuels to hydrogen gas. “As you go from direct hydrogen to methanol to gasoline, you increase the levels of emissions,” explains Sims. “Many people think hydrogen is a dark horse,” adds Thomas. “But if you look at all the different costs, it comes out ahead.”

    Perhaps the biggest impediment facing hydrogen-powered vehicles is building from scratch an infrastructure for distributing the gas. Today, U.S. chemical plants make hydrogen gas and ship it nationwide by chilling it to 20 degrees above absolute zero, at which point it becomes a liquid. Because that takes a lot of energy, Thomas argues that it would be better for each filling station to make its own hydrogen and store the gas in underground tanks. Gas stations could run electrolysis machines at night, when electricity is cheaper, turning water into hydrogen and oxygen. Or existing natural gas pipelines could deliver hydrocarbons such as methane, which can be reformed easily to hydrogen. Either scheme would cost less than $1 to create an amount of hydrogen equivalent to a gallon of gasoline, Thomas contends.

    Big savings would also come from doing away with the need for reformers in cars. “The typical automobile is used less than 2 hours a day,” and even then its engine is rarely working at full capacity, says Thomas. “If I take the onboard reformer from one fuel cell vehicle and put it on the curb, it could serve 110 vehicles” by churning out hydrogen continuously, he says. With all the costs factored in, Thomas calculates that creating the infrastructure to power fuel cells with methanol would cost roughly $1300 to $2800 per vehicle. Gasoline comes in at around $2350 to $5200, largely due to a more complex reformer. And hydrogen would add about $990 to $1150. The extra expense for reformer-based systems, along with the likelihood of the complex devices breaking down, says Turner, could sour consumers on fuel cell cars early on. “If that happens, everybody loses,” he says.
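
    Thomas's curbside arithmetic can be reconstructed, roughly, under stated assumptions. In the sketch below, the 2-hours-a-day figure is his; the average engine-load fraction is an assumed value, chosen only to show how a number like 110 vehicles per reformer can arise.

```python
# A rough reconstruction of the curbside-reformer arithmetic quoted above
# (a sketch under stated assumptions, not Thomas's published calculation).

hours_in_use_per_day = 2.0     # "the typical automobile is used less than 2 hours a day"
average_load_fraction = 0.11   # assumed: the engine is "rarely working at full capacity"

# Fraction of a continuously running reformer's capacity that one car actually uses:
per_car_utilization = (hours_in_use_per_day / 24.0) * average_load_fraction

vehicles_served = 1.0 / per_car_utilization
print(f"~{vehicles_served:.0f} vehicles per curbside reformer")
# With these assumptions the answer lands near the 110 vehicles Thomas cites.
```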

    Critics, however, contend that hydrogen is impractical for long hauls. Because of the gas's low density, even a pressurized steel tank would have to be about four times the size of a conventional one to give a hydrogen-powered car the same 560-kilometer range as a 1999 Ford Taurus, says Sims. Rossmeissl and others note that experimental lightweight carbon-fiber tanks and other improvements could extend the driving range of hydrogen-powered cars.

    The dominant fuel should emerge in the next few years, say Sims and others. Hydrogen is poised to sprint to an early lead: In the next 2 years the California Fuel Cell Partnership plans to build two hydrogen fueling stations in the state that will pump hydrogen gas into onboard fuel tanks. As for methanol, it could take a few years before reformers are reliable enough to mass-produce. “Until then, we've got this window to prove the [hydrogen] technology” and to devise better ways to make it and store it, says Sigmund Gronich, a DOE hydrogen specialist. But unless hydrogen blows away the field, it is unlikely to conquer the passenger car arena. “As onboard fuel processing reaches maturity in the middle of the next decade, you're probably going to see methanol become a fuel of choice,” says Sims.

    Whatever version of the technology gets the checkered flag will have a long reign, says Turner. Even if gasoline wins, it could provide a hydrogen source until fossil fuels become scarce decades or even centuries from now. But at that point, Turner predicts, consumers will need to switch to hydrogen, which is easier than methanol to generate from solar power and other renewable energy sources. “Ultimately we will get there,” says Turner. “The question is, do we generate an interim infrastructure and then 50 years from now do it all over again?”


    Company Aims to Give Fuel Cells a Little Backbone

    1. David Voss

    ELKTON, MARYLAND--The future of electricity generation might be housed in an unassuming green box perched on a table in a conference room here at W. L. Gore and Associates. It may not look like much, but the toaster-sized metal container, which lets out an occasional hiss from the hydrogen gas seeping through its innards, can generate enough power to run a television and a VCR.

    The device is a prototype fuel cell that converts bottled hydrogen gas and oxygen from the air into electricity and water. For decades fuel cells have eked out a small but vital niche in the energy world, performing tasks such as powering spacecraft and remote field stations. Now these electrochemical power packs are on the verge of penetrating a mass market, as automakers are gearing up to unveil in the next decade cars equipped with fuel cell engines (see main text). Companies like H Power of Belleville, New Jersey, which built the model on display here with parts from Gore, are hoping to put fuel cell generators into our homes someday—giving us independence from the power grid at very little environmental cost.

    But before fuel cell-makers can challenge utility companies for our business, they must first lower the price and ratchet up the power of their devices. A crucial part of the strategy is to improve the membrane that lies at the heart of the machines. “The membrane technology is directly related to what power you can get out,” says Tom Zawodzinski, a fuel cell researcher at Los Alamos National Laboratory in New Mexico.

    In hydrogen fuel cells, membrane assemblies do the heavy lifting, serving as catalyst, electrode, and chemical separator. As hydrogen gas streams into a fuel cell, it meets a catalytic electrode, usually platinum or some other precious metal. The catalyst strips electrons from the hydrogen atoms, leaving only hydrogen ions—protons—behind. These protons diffuse through a barrier called a proton exchange membrane, while the electrons flow out of the fuel cell, via the electrode, to an external circuit. Once the protons reach the other side of the membrane, they huddle around the other electrode, hungry for electrons that would allow them to react with oxygen to make water. The chemical reaction pulls electrons through the circuit, a flow of electricity that supplies power for appliances (see diagram).
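
    In textbook terms (this summary is standard electrochemistry, not reproduced from the article), the two halves of the reaction described above are:

```latex
% Standard PEM fuel cell half-reactions (textbook chemistry summarizing the
% paragraph above, not reproduced from the article).
\begin{align*}
\text{Anode:}   \quad & \mathrm{H_2} \;\longrightarrow\; 2\,\mathrm{H^+} + 2\,\mathrm{e^-}\\
\text{Cathode:} \quad & \tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H^+} + 2\,\mathrm{e^-} \;\longrightarrow\; \mathrm{H_2O}\\
\text{Overall:} \quad & \mathrm{H_2} + \tfrac{1}{2}\,\mathrm{O_2} \;\longrightarrow\; \mathrm{H_2O} + \text{electricity} + \text{heat}
\end{align*}
```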

    The challenge for engineers is to make membranes impermeable to hydrogen and oxygen, while conducting protons efficiently. An early breakthrough came in the 1960s, when DuPont created fluoropolymers (like Teflon, which had been developed during the 1940s) that happened to have just the right chemical properties to work in fuel cells. One polymer, composed of sulfonic acid groups strung on a fluoropolymer backbone, turned out to be an excellent proton exchange medium. This material, sold under the trade name Nafion, is well suited for fuel cells, because protons are happiest when swimming among the highly acidic sulfonic groups. “It is basically a plastic version of battery acid,” says Zawodzinski. One drawback, however, is that the Nafion membrane must be saturated with water to work well. And because a soggy polymer swells and weakens, Nafion membranes must be made thick to hold together. That's not good, because the thicker the membrane, the more it hinders the flow of protons. So the challenge for fuel cell designers was to make the thinnest possible membranes.

    Five years ago, Jeff Kolde and Bamdad Bahar, two chemical engineers at Gore, realized they could make a better membrane if they combined a proton-conducting material like Nafion with the company's well-known Gore-Tex membrane, a water-repelling mesh, permeable to gases, that's used in everything from mountaineering parkas to synthetic blood vessels. When the duo first described their idea to fuel cell experts, Bahar recalls, “people told us we were crazy.” Gore-Tex membrane is an insulator, skeptics said, so how could the material turn an ion-exchange membrane into a better conductor? Yet the same polymer structure that gives Gore-Tex membrane its strength and porosity also helps hold the ion exchange membrane together, like reinforcing bars in a concrete wall. By embedding the proton conductor into the open spaces between the rebars, Bahar and Kolde thought they could make a better membrane: one that could absorb water without becoming soggy. Their colleagues encouraged the duo to give it a try in a small trailer behind the company's building.

    The early results were promising. Kolde and Bahar found they could make the fuel cell membranes extremely thin, which increased the flow of protons through the membrane as much as 10 times. When they sent the membrane to a fuel cell designer in late 1994, they got a surprising call a few days later. “He said he'd melted the wires on his fuel cell,” Bahar says. Delighted with the power output—and with happy customers—Bahar and Kolde have since moved from the trailer to a new state-of-the-art building, where they are part of a global fuel cell membrane team.

    Gore and other companies have plenty of challenges left to solve before fuel cells can compete on the market. One is making the cells less choosy about their diet. “Fuel cells like every fuel, as long as it's hydrogen,” jokes Zawodzinski. To use gasoline or methanol, a device called a “reformer” must break the hydrocarbons down to hydrogen. Reformers, however, tend to produce traces of carbon monoxide, which locks up active sites on the metal catalysts. “Carbon monoxide poisoning can be alleviated by running at higher temperature,” which drives off the CO, says Zawodzinski. But the necessary heat would evaporate the water in a fuel cell. Either cells must be run under high pressure to keep the water from boiling, or a novel nonaqueous membrane must be developed.

    Technical hurdles are one thing, says Zawodzinski: “The showstopper is cost.” According to studies by the consulting firm of Arthur D. Little Inc. in Cambridge, Massachusetts, fuel cells, which now run about $3000 per kilowatt, won't penetrate power markets until they come down to about $1500 per kilowatt. Until Gore and other companies can slice into that differential, it may be a while before you can watch your favorite soap opera even when the rest of your neighborhood is plunged into darkness by a power outage.


    Turning Engineers Into Resource Accountants

    1. Jocelyn Kaiser

    A new discipline is trying to persuade companies that tracking the flow of materials and energy over a product's lifetime makes good business sense

    To Robert Frosch, a computer is like a frog. Both are made of energy-intensive materials: organic molecules for the amphibian, plastics and metals for the computer. Both use energy as they operate. And, like a dead frog decomposing in a marsh, an obsolete computer will decay somewhere, maybe in a landfill. But Frosch, an industrial ecologist at Harvard University's Kennedy School of Government, believes their life cycles could be made even more similar: Before the computer is junked, he would like to see it picked over by a scrap dealer—someone, he says, “a little like the microorganisms that turn waste into fertilizer.”

    This view of organisms and consumer goods leading parallel lives is gaining a wider audience thanks to Frosch and like-minded scientists, whose goal is to scrutinize every gram of material and joule of energy going into and out of a product. “It's a way of organizing and systematically studying the built environment,” explains Iddo Wernick, a physicist at Columbia University. The philosophy has begun to pay off—mainly in Europe—in everything from appliances designed with reusable parts to schemes for capturing precious metals that may otherwise end up in landfills or riverbeds.

    However, a cradle-to-grave approach to doing business hasn't yet caught fire in the United States. Efforts to get U.S. companies to feed off each other's waste, for instance, have sputtered (see sidebar), and only a handful of corporate titans have embraced the concept of scrutinizing their products' material and energy flows. That irks Frosch, who decries the consumption in developed countries, which deplete natural resources at a prodigious rate: about 1000 times the body weight of each inhabitant per year, according to a 1997 study led by the nonprofit World Resources Institute in Washington, D.C. “You find amazing amounts of things being thrown away that are perfectly useful,” Frosch says. Industrial ecologists hope to curb this compulsion by convincing companies—and people—to make the most of every iota of energy and substance. But that may happen only if governments end policies that embrace waste, such as cheap landfills and tax subsidies that skew the real costs of virgin materials.

    What goes in must come out. Industrial ecology was born in the late 1980s, when Frosch and Nicholas Gallopoulos, then at General Motors, described in a Scientific American article how to analyze a factory the same way you might an ecosystem, by assessing its cycles of energy use, decay, and reuse. “A key question is, where did this stuff come from and where is it gonna end up?” Frosch says. The budding field, a mix mainly of social scientists, engineers, physicists, and ecologists, got a boost 2 years ago from the debut of the Journal of Industrial Ecology.

    Fueling the academic push is a wave of regulations and voluntary targets aimed at cutting back on waste. Since the early 1990s, several European countries and Japan have begun to mandate that companies use less packaging and take back old consumer appliances. In the United States, the Clinton Administration has pushed voluntary initiatives, including a program in which the Environmental Protection Agency (EPA) works with companies to help them “design for the environment”—for example, by finding alternatives to lead solder to make computer circuit boards. Some European countries, such as Germany, are even discussing an ambitious goal to curtail their consumption of natural resources by 90% by 2040.

    One way to reach these targets might be to employ the principles of industrial ecology. In its most reductionist form, the discipline involves life-cycle assessment—drawing a circle around, say, a television or a toaster, then tallying the materials and energy that go into its parts, manufacture, and use and the waste and pollution that come out. “You go as far back in the chain as the coal coming out of the ground,” says H. Scott Matthews of Carnegie Mellon University in Pittsburgh. The next step is to design a product that uses fewer raw materials and thus produces less waste.
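
    As a minimal sketch of what such a tally looks like in practice, the snippet below sums a toy inventory across life-cycle stages. Every stage name and number is a hypothetical placeholder, not data from any real assessment.

```python
# Minimal sketch of the tallying exercise described above. Every stage name
# and number here is a hypothetical placeholder, not data from a real study.

from collections import defaultdict

# (life-cycle stage, resource, amount) entries for an imaginary product,
# traced "as far back in the chain as the coal coming out of the ground."
inventory = [
    ("materials",   "energy_MJ", 120.0),
    ("materials",   "steel_kg",    1.5),
    ("manufacture", "energy_MJ",  45.0),
    ("use",         "energy_MJ", 300.0),
    ("disposal",    "waste_kg",    2.0),
]

totals = defaultdict(float)
for stage, resource, amount in inventory:
    totals[resource] += amount

for resource, amount in sorted(totals.items()):
    print(f"{resource}: {amount}")
# A designer would then look for the stage that dominates each total
# and redesign the product to shrink it.
```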

    AT&T, a pioneer in this area, in the early 1990s began stamping plastics with ID tags for sorting at recycling centers, assembling phones with snaps instead of glue, and using less packaging. The idea of designing with disassembly in mind has since caught on for everything from electric hot water kettles to BMW cars. In Japan, for instance, TV consoles are now often made of a magnesium alloy that, unlike plastic, doesn't degrade with age and can be recycled more easily.

    But although most big-name electronics, auto, and personal products firms now do life-cycle assessments, many other companies lack the stomach for it. They are wary of “paralysis by analysis,” says Richard Dennison of the Environmental Defense Fund in Washington, D.C., because a detailed assessment can take years and $100,000 or more. Some firms question what's in it for them, says Diana Bendz, director of environmentally conscious products for IBM—“although the more you get into it,” she says, “the more you realize you can save money.”

    A river of waste. Academic industrial ecologists usually get in the game at the next level up, by tracking the flow of materials through whole industries or society. These studies “tend to be assumption busters,” says David Rejeski, an EPA policy expert with the White House Council on Environmental Quality. For example, a few years ago researchers at the University of California, Los Angeles, and the California EPA found that silver in San Francisco Bay—blamed for poisoning fish and marine mammals—comes mostly from a commercial solvent used to fix x-ray images in dental offices and hospitals. The study prompted Kaiser Permanente to set up a recycling facility in Berkeley to recover silver in spent fixer from more than 50 medical centers, bringing in about $240,000 a year in profits from silver sales.

    “It tends to be a small number of leading-edge companies that are routinely incorporating these practices,” says Yale's Reid Lifset, editor-in-chief of the Journal of Industrial Ecology. “We've still got a ways to go.” Allen Kneese, an economist at Resources for the Future, a Washington, D.C., think tank, argues that more firms will take industrial ecology to heart only if the government revises policies—such as tax breaks for mining companies that make virgin materials cheaper than recycled ones—that obscure the true costs of depleting resources. Like Kneese, environmental scientist Helias Udo de Haes of Leiden University in the Netherlands thinks countries will only “dematerialize” if governments impose a tax on companies for each ton of pollution they emit. “It would be very difficult to do without taxes,” he says.

    Spreading the gospel will also mean further developing an academic discipline. At least a dozen universities offer industrial ecology programs and courses in Europe and the United States, says Rejeski, but “it's still a pretty small community.” Such principles, says Thomas Graedel, chair of Yale's industrial ecology program, “ought to be part of what any technologist does.”


    In This Danish Industrial Park, Nothing Goes to Waste

    1. Jocelyn Kaiser

    If there's anything that sums up the hopes of industrial ecology (see main text), it's a tiny pipeline-laced town in eastern Denmark called Kalundborg, where companies have been swapping byproducts like gypsum and waste water for up to 25 years. This “industrial symbiosis” is drawing keen interest from policy-makers in the United States, although opinions vary on its odds of success.

    The idea behind “ecoparks” is that one company's sludge is another's manna. As five firms that sprang up at Kalundborg over the years encountered new environmental regulations, they forged exchanges. For instance, flare gas from an oil refinery heats other factories; a power plant sends gypsum—produced by scrubbing sulfur dioxide from flue gas—to a drywall factory; and a biotech's fermentation waste gets shipped to farmers for fertilizing fields. Cooling water from the refinery is used by the power plant as boiler water, while the power plant's excess steam heats Kalundborg's 4300 homes. “There basically is no waste generation, and the energy efficiency is quite high,” says John Ehrenfeld, an industrial ecologist at the Massachusetts Institute of Technology.

    In the United States, the ecopark idea has been promoted by an advisory body called the President's Council on Sustainable Development, which points to at least 15 examples on the drawing board in places like Cape Charles, Virginia, and Londonderry, New Hampshire. The approaches range from making “green” products, like photovoltaic panels, to featuring energy-efficient lighting and nature walks. Few of these projects, however, will exchange waste materials, which some experts say is crucial to making a major dent in resource consumption.

    Easier to achieve, and common in the United States and elsewhere, is “green twinning”: exchanges between two companies, like a steel mill in Midlothian, Texas, that sends waste slag from its furnaces to a nearby cement plant. Another idea is to set up “virtual” ecoparks, in which far-flung companies can exchange materials. The U.S. Environmental Protection Agency has set up databases and developed software tools to help industries find out what each other is throwing away.

    Some experts are skeptical that anything like Kalundborg will ever exist in the United States. “The key difference,” says Ehrenfeld, “is that Kalundborg is an open culture. They don't have this notion of the corporation as secretive.” Yale industrial ecologist Marian Chertow, however, sees an opportunity in the recent push to deregulate the electric industry. More-efficient power plants are expected to spring up from fiercer competition, and these plants would be ideal anchors for ecoparks, she says. “What we're seeing now is the kernels of their evolution.”
